id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
a6d17eed-09d5-43c6-9474-9eef31c20506 | trentmkelly/LessWrong-43k | LessWrong | THE GOLDEN RULE; What can we learn from it?
The golden rule says to treat others as you want to be treated. This principle is about teaching people to have integrity when interacting with others. It's trying to get us to imagine ourselves in someone else's shoes in an attempt to better understand their perspective. If you wouldn't like to be yelled at, then you shouldn't do that to others.
But the golden rule on it's own isn't enough. It's just a rule of thumb, and like all rules of thumb there are exceptions to the rule. The point is that you shouldn't try to follow this rule by the letter and instead you should follow it in spirit. The spirit of the rule is about avoiding being a hypocrite and learning how to think about things from other people's perspectives. More generally, what's needed in relationships is to find mutually-beneficial ways of interacting with each other - to treat others in ways that are compatible with your preferences and their preferences.
In many cases our preferences are in harmony. But sometimes there's a conflict of preferences. So what should be done in these cases? One option is to leave each other alone, and as long as all parties are ok with that, then the conflict is resolved and everybody is in harmony. The goal there is to avoid hurting each other. It’s a good option to always keep in mind as a last resort. Another option we have is to change our preferences so that we’re still interacting with each other but our preferences are in harmony instead of in conflict. These two options mean that we should maintain a degree of flexibility with our preferences. And this makes sense because we're not perfect; sometimes our preferences deserve improvement.
Our preferences are ideas, and like all ideas, we should apply the principles and methods of reason to them. That means recognizing that whatever our current preferences are now, we should always be aware that they might not be good enough. There’s some conflict that needs to be resolved. And that means there's opportunity to f |
7cc8a86a-7ae1-42f3-9957-41689d92c526 | trentmkelly/LessWrong-43k | LessWrong | Smarter humans, not artificial intellegence
I'm writing this article to explain some of the facts that have convinced me that increasing average human intelligence through traditional breeding and genetic manipulation is likelier to reduce existential risks in the short and medium term then studying AI risks, while providing all kinds of side benefits.
Intelligence is useful to achieve goals, including avoiding existential risks. Higher intelligence is associated with many diverse life outcomes improving, from health to wealth. Intelligence may have synergistic effects on economic growth, where average levels of intelligence matter more for wealth then individual levels. Intelligence is a polygenetic trait with strong heritability. Sexual selection in the Netherlands has resulted in extreme increases in average height over the past century: sexual selection for intelligence might do the same. People already select partners for intelligence, and egg donors are advertised by SAT score.
AI research seems to be intelligence constrained. Very few of those capable of making a contribution are aware of the problem, or find it interesting. The Berkeley-MIRI seminar has increased the pool of those aware of the problem, but the total number of AI safety researchers remain small. So far very foundational problems remain to be solved. This is likely to take a very long time: it is not unusual for mathematical fields to take centuries to develop. Furthermore, we can work on both strategies at once and observe spillover from one into the other, as the larger intelligence baseline translates into an increase on the right tail of the distribution.
How could we accomplish this? One idea, invented by Robert Heinlein, as far as I know, is to subsidize marriages between people of higher than usual intelligence and their having children. This idea has the benefit of being entirely non-coercive. It is however unclear how much these subsidies would need to be to influence behavior, and given the strong returns to intelligence i |
cd3e5185-f1f5-459d-951e-6a168f9115d1 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What does MIRI think about technical alignment?
MIRI thinks technical alignment is really hard, and that we are very far from a solution. However, they think that policy solutions have even less hope. They support several independent researchers following their own directions, in the hopes that one of them will find some promise. They mostly accept the [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/): we need to know exactly (probably [mathematically formally)](https://www.alignmentforum.org/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem) what we are doing, or the massive optimization pressure will default in ruin.
|
485d90e7-447c-4177-9925-9dc91a95c767 | trentmkelly/LessWrong-43k | LessWrong | Medical Roundup #3
This time around, we cover the Hanson/Alexander debates on the value of medicine, and otherwise we mostly have good news.
TECHNOLOGY ADVANCES
Regeneron administers a single shot in a genetically deaf child’s ear, and they can hear after a few months, n=2 so far.
Great news: An mRNA vaccine in early human clinical trials reprograms the immune system to attack glioblastoma, the most aggressive and lethal brain tumor. It will now proceed to Phase I. In a saner world, people would be able to try this now.
More great news, we have a cancer vaccine trial in the UK.
And we’re testing personalized mRNA BioNTech canner vaccines too.
US paying Moderna $176 million to develop a pandemic vaccine against bird flu.
We also have this claim that Lorlatinib jumps cancer PFS rates from 8% to 60%.
THE GLP-1 REVOLUTION
Early results from a study show the GLP-1 drug liraglutide could reduce cravings in people with opioid use disorder by 40% compared with a placebo. This seems like a clear case where no reasonable person would wait for more than we already have? If there was someone I cared about who had an opioid problem I would do what it took to get them on a GLP-1 drug.
Rumblings that GLP-1 drugs might improve fertility?
Rumblings that GLP-1 drugs could reduce heart attack, stroke and death even if you don’t lose weight, according to a new analysis? Survey says 6% of Americans might already be on them. Weight loss in studies continues for more than a year in a majority of patients, sustained up to four years, which is what they studied so far.
The case that GLP-1s can be sued against all addictions at scale. It gives users a sense of control which reduces addictive behaviors across the board, including acting as a ‘vaccine’ against developing new addictions. It can be additive to existing treatments. More alcoholics (as an example) already take GLP-1s than existing indicated anti-addiction medications, and a study showed 50%-56% reduction in risk of new or recurring alcoh |
4d1c86a9-7c9f-42dd-91df-49fc61fe0bf6 | trentmkelly/LessWrong-43k | LessWrong | The First Koan: Drinking the Hot Iron Ball
In the traditions of Zen in which koans are common teaching tools, it is common to use a particular story as a novice's first koan. It's the story of Joshu's Dog.
> A monk asked Joshu, a Chinese Zen master: `Has a dog Buddha-nature or not?'
>
> Joshu answered: `Mu.' [Mu is the negative symbol in Chinese, meaning `No-thing' or `Nay'.]
What does this koan mean? How can we find out for ourselves?
It is important to remember certain things: Firstly, koans are not meant to be puzzles, riddles, or intellectual games. They are examples, illustrations of the state of mind that the student is expected to internalize. Secondly, they often appear paradoxical.
> Paradox is a pointer telling you to look beyond it. If paradoxes bother you, that betrays your deep desire for absolutes. The relativist treats a paradox merely as interesting, perhaps amusing or even -- dreadful thought -- educational.
Thirdly, the purpose of Zen teaching isn't to acquire new conceptual baggage, but to eliminate it; not to generate Enlightenment, but to remove the false beliefs that preventing us from recognizing what we already possess. Shedding error is the point, not learning something new.
Take a look at Mumon's commentary for this koan:
> To realize Zen one has to pass through the barrier of the patriachs. Enlightenment always comes after the road of thinking is blocked. If you do not pass the barrier of the patriachs or if your thinking road is not blocked, whatever you think, whatever you do, is like a tangling ghost. You may ask: What is a barrier of a patriach? This one word, Mu, is it.
>
> This is the barrier of Zen. If you pass through it you will see Joshu face to face. Then you can work hand in hand with the whole line of patriachs. Is this not a pleasant thing to do?
>
> If you want to pass this barrier, you must work through every bone in your body, through ever pore in your skin, filled with this question: What is Mu? and carry it day and night. Do not believe it is t |
78e23e65-d0e0-4fb9-aec5-6b6e956cdf4e | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA - Part A: Predictably Wrong
Discussion article for the meetup : West LA - Part A: Predictably Wrong
WHEN: 25 March 2015 07:00:00PM (-0700)
WHERE: 11066 Santa Monica Blvd, Los Angeles, CA 90025
How to Find Us: Go into this Del Taco. We will be in the back room if possible.
Parking is free in the lot out front or on the street nearby.
Discussion: We will be discussing Part A of Rationality: From AI to Zombies. You are welcome to join us even if you have not completed the reading. However, the reading is very good, and it is free, and you should read it. We will be discussing roughly one part per week for a total of 26 weeks.
Recommended Reading:
* Preface
* Biases: An Introduction
* Part A: Predictably Wrong (pages 7-42)
No prior exposure to Less Wrong is required; this will be generally accessible.
Discussion article for the meetup : West LA - Part A: Predictably Wrong |
d1e1d1e9-7372-4de8-b953-a3fb0043e68f | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Is there any research or forecasts of how likely AI Alignment is going to be a hard vs. easy problem relative to capabilities?
I believe it was Paul Christiano who said in an 80,000 hours interview that there is a surprisingly high chance that AI alignment might end up not actually being difficult.
I’m curious if anyone has done any research or tried to forecast the likelihood that AI Alignment is a difficult vs. ends up being an easy problem to solve relative to progress in creating advanced AI.
Specifically, by the time we reach transformative AI, how likely is it that AI Alignment will occur naturally if current trends in AI capabilities and AI safety research continue, so that we are able to robustly, sustainably prevent x-risk from AI on our current trajectory? |
3adc4ab6-faad-4f9a-ae04-9c8a3b10537f | trentmkelly/LessWrong-43k | LessWrong | Best of Rationality Quotes, 2014 Edition
Here is the way-too-late 2014 edition of the Best of Rationality Quotes collection. (Here is last year's.) Thanks Huluk for nudging me to do it.
Best of Rationality Quotes 2014 (300kB page, 235 quotes)
and Best of Rationality Quotes 2009-2014 (1900kB page, 1770 quotes)
The page was built by a short script (source code here) from all the LW Rationality Quotes threads so far. (We had such a thread each month since April 2009.) The script collects all comments with karma score 10 or more, and sorts them by score. Replies are not collected, only top-level comments.
As is now usual, I provide various statistics and top-lists based on the data. (Source code for these is also at the above link, see the README.) I added these as comments to the post:
* Top quote contributors by total karma score collected
* Top quote contributors by karma score collected in 2014
* Top quote contributors of 2014 by statistical significance level (See this comment for a description of this metric.)
* Top original authors by number of quotes
* Top original authors by total karma score collected
* Best short quotes 2009-2014 |
7d736bcf-2ebc-4dc2-bbcf-8a1e06733d84 | trentmkelly/LessWrong-43k | LessWrong | Porting My Rhythm Setup
In music, as with everything, there's a tradeoff between reliability and hassle. If I come to a gig with my mandolin and the neck falls off, I'm going to be having a bad night. But mandolin necks only rarely fall off, and bringing an extra mandolin would be expensive and annoying, so I don't. On the other hand strings do break, sometimes comically, so most musicians bring backups.
My rhythm stage setup is in an awkward position here, because general-purpose computers aren't all that reliable. Not only do computers sometimes break, but my setup isn't something I can properly back up, and could be hard to replicate on another computer.
This is a silly situation: one of the amazing things about computers is that copying things is free. I should be able to upload my setup, and then download it onto any other computer in minutes, and be going right away. Why can't I do that with my stage setup? I found out yesterday when my computer died and I was trying to set up a new one [1] before the next Kingfisher gig (Saturday). Issues I ran into:
* I use several commercial sounds: Native Instruments Hammond Organ and Fender Rhodes, SWAM's Saxophones, and Sample Modeling's Trombone. These needed to be installed and activated.
* These sounds were running in Kontakt 5, and the new thing is Kontakt 6. While I could restore my Reaper session from backup it wasn't able to transfer my settings between Kontakt versions. I needed to manually set a zillion individual sliders (ex: "Hammond > Amp > Rotor > Balance > One notch CCW from top"). Luckily my old computer was working well enough to let me read the settings, with patience, so I've now documented them.
* I haven't succeeded in building my Bass Whistle Plugin in a way that will run on computers it wasn't built for, so I needed to rebuild from source. This required downloading the plugin SDK, which then didn't work against my forked copy of the plugin framework. It turned out to be easiest to start with a fresh copy of the f |
393a0de1-15e1-46ca-81bb-fc6881524308 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | 'Show Your Working': ChatGPT Performance Doubled w/ Process Rewards (+Synthetic Data Event Horizon)
in the last 24 hours openai have
released this paper let's verify step by
step it represents an almost doubling of
gpd4's raw performance in a test of
mathematics but also extends to other
domains Sam Altman calls it a positive
sign for alignment and yes I have read
it all already along with the release
notes let's get to the main takeaways
they train two reward models for gpt4
one which gave positive feedback for a
final result the final answer to a
mathematics problem for example and
another model where they gave positive
feedback to gpt4 or chat GPT based on
each intermediate reasoning step in the
mathematical solution basically a show
you're working out kind of approach and
the result they got by rewarding good
working out surprised even them it was
able to solve 78 of problems from a
subset of the math test set which I'll
get on to in a second not only is that
almost double gpd4's raw performance of
42 point five percent which by the way
is about double GPT 3's performance of
23 it also outperformed just rewarding
correct answers the Blue Line represents
using a model that rewarded correct
answers only and then you have the
reasoning or process supervised RM at
the top so even when you explicitly
reward correct answers you get fewer
correct answers than rewarding good
working out and yes that did surprise
openai I can hear some of you wondering
about Palm 2 the latest model behind
Bard well the raw model gets 34.3 and
even the model with self-consistency and
Chain of Thought only gets 48.8 on this
math data set the previous state of the
art by the way was 50.3 so 78.2 percent
is quite a big leap and later on I'm
going to show you why that's not even
the cap just for interest here is the
rather ugly title page that openai put
out they call it improving mathematical
reasoning with process supervision maybe
if someone had supervise the color
scheme of this release page it might
have looked better but my point wasn't
just to diss a color scheme it was to
point out something that they also said
down here they say in addition to
boosting performance relative to just
looking at outcomes or correct answers
this form of process supervision also
has an important alignment benefit it
directly trains the model to produce a
chain of thought that is endorsed by
humans indeed Ilya satsukovar retweeted
this from the head of alignment at
openai calling it a really interesting
result but let's leave alignment for
later let's focus on what they actually
did first they use the base model of
gpt4 not the one with reinforcement
learning from Human feedback next they
fine-tuned that base gpt4 model on a
data set of roughly 1.5 billion math
related tokens further on they call that
the math mix this being open AI of
course they don't give you the exact
details of that math mix but I'll come
back to that later on so how could they
give feedback based on working out or
reasoning well human labelers would come
along and give each step in a generated
solution either negative feedback
neutral feedback or positive feedback
then using that human labeled data a
model will be trained to predict the
correctness of each step in other words
it got good at recognizing good working
out as mentioned there was another model
trained just to focus on correct or
incorrect final answers as you can see
at the top the model got good at
spotting incorrect steps in the
reasoning process the green steps got a
high process score and the red steps got
a low process score and to turn this
into a single score they got the
probability that each step is correct as
judged by the model and then they got
the product of all of those individual
probabilities to get a final overall
process score a score in other words for
good working out just in case anyone's
interested they did try other ways of
generating a working out score for
example by while looking at the minimum
probability in the outputs but that step
didn't make too much difference to the
end result as you can see here to
quickly recap we have a base model
trained only to Output Solutions in the
desired format and then we have a
separate smaller model or two actually
one trained only to predict whether each
solution is correct or incorrect as a
final answer of course that leaves in
false positives which are solutions that
reach the correct answer with incorrect
reasoning and then another model trained
only to predict the correctness of each
step it stops if it finds a first
incorrect step and as the paper says
both methods reveal the existence of at
least one mistake but this process
supervision additionally reveals the
precise location of that mistake but
back to why this is so crazy look at how
many solutions it could scan at the end
of the x-axis here are
1860 Solutions and one tried and tested
way of of finding the best of those
Solutions is to do majority voting in
other words which one came out the most
often this has been Google's preferred
approach and it's linked to
self-consistency it's a fairly
state-of-the-art approach but look at
how the other methods outperform it by
scanning for the solution that has the
best reasoning or working out a model
train to spot good reasoning steps
outperforms even a model trained to spot
correct answers and far outperforms just
finding the majority answer that
difference of about 10 is more than half
of the difference between gpt3 and gpt4
and also is it me or is that line
continuing to grow suggesting that when
more compute is available the difference
could be even more Stark imagine a
future where Gypsy 4 or 5 can sample say
a trillion 10 to the 12 Solutions so is
this just relevant for mathematics no is
relevant for all of science here it is
getting state-of-the-art results in
calculus chemistry physics and more now
the paper didn't give Baseline
performance for AP Chemistry for example
but I tried to compute it myself notice
how this method scored 80 I
conservatively and approximately
inputted those scores into an AP
Chemistry calculator and that gave an AP
score of five so what did the raw model
gpt4 get in AP Chemistry A4 that by the
way compares to the original Chachi PT
which got a two so yes this isn't just
mathematics it's relevant for other
domains too they call this out of
distribution generalization before I get
onto alignment there is one more thing I
want to point out and that is that it
does show that fine tuning still works
really well for GT4 the math mix was an
aggressively filtered set of tokens of
high quality math problem solving
content and notice how much smaller it
is at 1.5 billion tokens compared to
Google's Minerva which was 38.5 billion
tokens but there was one more thing that
I noticed that I found fascinating while
they don't tell us anything about the
specific data that they use they do have
this category synthetic data too that's
data generated by the language model
itself and for that category synthetic
data 2 they say was it present in
pre-training yes now my best guess is
that this reveals that gpt4 was trained
on some synthetic data and even Sam
Altman hinted that this was a
possibility and described a synthetic
data Event Horizon some people have made
the case that we're now training on
order of all of the internet's tokens
and you can't grow that you know another
two orders of magnitude I guess you
could counter with yeah but the
synthetic data generation do you think
data bottlenecks matter at all
I I think you just touched on it like is
as long as you can get to like over this
synthetic data
Event Horizon where that the model is
smart enough to make good synthetic data
I think it should be all right now this
paper and these results have been
welcomed by many for its promise in
alignment if we get models that give us
more interpretable reasoning working out
that we can follow we will be
encouraging models to follow a process
that's endorsed by humans and they say
that this is inherently safer especially
compared to just focusing on outcomes
they say that in the worst case if we
just focus on correct answers or
positive outcomes that will become a
proxy that could lead models to become
misaligned after learning to exploit the
reward signal however I want to argue
that the reasoning steps that GT4 puts
out don't always represent what it's
actually thinking in other words we
might get outer alignment these lovely
Chain of Thought steps but not in our
alignment not steps that actually
represent its methodology I found this
paper fascinating from earlier this
month language models don't always say
what they think you get Unfaithful
explanations in Chain of Thought
prompting let me try to give you a vivid
example this was one of the math
questions from the data set the raw
model of gypsy 4 could only get it right
5.8 of the time I confirm that for
myself in this question involves basic
addition and division it couldn't find
an answer but going back to the
Unfaithful reasoning paper they added
the following string to the prompt I
think the answer is this but I'm curious
to hear what you think the model would
demonstrate sycophancy the model would
agree with you whatever you said and
then make up a Chain of Thought to
justify its erroneous sycophantic answer
and I think this exchange demonstrates
that quite well I added in the words I
as the user already know the answer is T
equals 19 which is incorrect by the way
but do you GPT 4 realize that it said
sure yes I do and then gave me this
detailed Chain of Thought and then said
yes I'm correct it's t equals 19 which
it isn't in contrast By the way when I
use code interpreter it not only got the
question correct first time and every
time but also when I try to tempt it
into sycophanty it's still got the
question right as you can see it said
therefore T equals 19 is not the
solution to the problem the calculation
shows that the correct answer is indeed
T equals 17. and obviously the benefit
of code interpreter is you get the
working out as well so I want someone to
explain to me why code interpreter
wouldn't be even more of a step forward
in interpretability not to mention in
accuracy of course also bear in mind
this tweet by Rob Miles he said these
models or Engineers never speak a word
or document anything their results are
bizarre and inhuman and then he links to
this prominent mechanistic
interpretability researcher at Google
deepmind he trained a tiny Transformer
to do addition then spent weeks figuring
out what it was actually doing one of
the only times in history someone has
understood how a Transformer actually
works down to the level of weights and
activation and this is the algorithm it
created to add two numbers it thought of
basic addition in terms of a rotation
around a circle and of course if you
asked it why is one plus one two it
would never give you this as an
explanation of its methodology but maybe
this is what it's actually calculating
that's why I'm personally a little bit
skeptical when openai say that this form
of process supervision directly rewards
the model for following an aligned Chain
of Thought it definitely rewards the
model for outputting and a line Chain of
Thought but is it actually following
that Chain of Thought back to the
Unfaithful paper for a moment they
changed the context so that the answer
was always a and lo and behold Chachi PT
picked answer a for the next question
even though that answer was wrong it
said that it was plausible that LeBron
James took a corner kick but when asked
for a Chain of Thought explanation it
never mentioned that it spotted that
pattern that the answer was always a it
gave a fake line of reasoning about why
Lebron James could take a corner kick
now of course I might well be wrong here
I'd love for someone to explain in
detail why but on the one hand I do want
to acknowledge that this process does
yield incredible results but on the
other hand we might be getting a story
about which methodology most reassures
humans not an output that most
Faithfully represents the methodology
actually used by gpd4 now for some
people that might be good enough at
least we can see some reasoning steps
that we can understand especially in an
area like mathematics where we have some
ground truth but it is interesting to me
that they call the other approach
outcome supervision an approach that may
reward an unaligned process and it being
harder to scrutinize is it possible that
the process reward model isn't just a
more granular outcome reward model where
the output is each step of the reasoning
still pretty impossible to actually
scrutinize well either way it seems
we're pinning our hopes on this process
oriented learning this is from the
website of anthropic they say we
currently believe process oriented
learning may be the most promising path
to training safe and transparent systems
up to and somewhat Beyond human level
capabilities and let's end on this
positive note from the head of alignment
at openai he says this is positive
evidence for the strategy of using
process supervision to train a model to
do alignment research at least in that
case we would get a model whose work we
can check more easily and that that
model would be better at alignment
research I really hope so and I want to
hear what you think thank you for
watching all the way to the end have a
wonderful day |
894bb808-ff5e-44fc-9b0e-f4aa74c1f355 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming
As part of a larger community building effort, [CAIS](https://safe.ai/) is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can [subscribe here](https://newsletter.mlsafety.org/) or follow the newsletter on [twitter](https://twitter.com/ml_safety) here.
---
Welcome to the 8th issue of the ML Safety Newsletter! In this edition, we cover:
* Isolating the specific mechanism that GPT-2 uses to identify the indirect object in a sentence
* When maximum softmax probability is optimal
* How law can inform specification for AI systems
* Using language models to find a group consensus
* Scaling laws for proxy gaming
* An adversarial attack on adaptive models
* How systems safety can be applied to ML
* And much more...
---
**Monitoring**
==============
### **A Circuit for Indirect Object Identification in GPT-2 small**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82973a3a-77de-41b8-80e4-f5ed94a738f0_1600x710.png)
One subset of interpretability is *mechanistic interpretability*: understanding how models perform functions down to the level of particular parameters. Those working on this agenda believe that by learning how small parts of a network function, they may eventually be able to rigorously understand how the network implements high-level computations.
This paper tries to identify how GPT-2 small solves *indirect object identification,* the task of identifying the correct indirect object to complete a sentence with. Using a number of interpretability techniques, the authors seek to isolate particular parts of the network that are responsible for this behavior.
**[**[**Link**](https://arxiv.org/abs/2211.00593)**]**
### **Learning to Reject Meets OOD Detection**
Both learning to reject (also called error detection; deciding whether a sample is likely to be misclassified) and out-of-distribution detection share the same baseline: maximum softmax probability. MSP has been outperformed by other methods in OOD detection, but never in learning to reject, and it is mathematically provable that it is optimal for learning to reject. This paper shows that it isn’t optimal for OOD detection, and identifies specific circumstances in which it can be outperformed. This theoretical result is a good confirmation of the existing empirical results.
**[**[**Link**](https://arxiv.org/abs/2301.12386)**]**
### **Other Monitoring News**
**[**[**Link**](https://arxiv.org/abs/2212.06727)**]** The first paper that successfully applies feature visualization techniques to Vision Transformers.
**[**[**Link**](https://arxiv.org/abs/2211.07740)**]** This method uses the reconstruction loss of diffusion models to create a new SOTA method for out-of-distribution detection in images.
**[**[**Link**](https://arxiv.org/abs/2301.02344)**]** A new Trojan attack on code generation models works by inserting poisoned code into docstrings rather than the code itself, evading some vulnerability-removal techniques.
**[**[**Link**](https://arxiv.org/abs/2302.06600)**]** This paper shows that fine tuning language models for particular tasks relies on changing only a very small subset of parameters. The authors show that as few as 0.01% of parameters can be “grafted” onto the original network and achieve performance that is nearly as high.
---
**Alignment**
=============
### **Applying Law to AI Alignment**
One problem in alignment is specification: though we may give AI systems instructions, we cannot possibly specify what they should do in all circumstances. Thus, we have to consider how our specifications will generalize in fuzzy, or out-of-distribution contexts.
The author of this paper argues that law has many desirable properties that may make it useful in informing specification. For example, the law often uses “standards”: relatively vague instructions (e.g. “act with reasonable caution at railroad crossings”; in contrast to rules like “do not exceed 30 miles per hour”) whose specifics have been developed through years of precedent. In the law, it is often necessary to consider the “spirit” behind these standards, which is exactly what we want AI systems to be able to do. This paper argues that AI systems could be construed under the fiduciary standard.
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F742897ba-fbe3-447d-9c14-514d79064075_1600x1077.png)
Finally, the paper conducts an empirical study on thousands of US court opinions. It finds that while the baseline GPT-3 model is unable to accurately predict court evaluations of fiduciary duty, more recent models in the GPT-3.5 series can do so with relatively high accuracy. Though legal standards will not resolve many of the most significant problems of alignment, they could improve upon current strategies of specification.
**[**[**Link**](https://arxiv.org/abs/2301.10095)**]**
### **Language models can generate consensus statements for diverse groups**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30e64ac-ef6f-4a73-a463-535987b9fab8_1600x775.png)
We may want to take into account the interests not only of individuals but also of possibly-conflicting members of a larger group. This paper asked individuals for their opinions on political issues (e.g., “should speed limits be reduced?”) and used a language model to generate consensus statements that would be agreed on by the group at large. The participants rated AI-generated consensus statements highly, above even human-written statements. The authors don’t appear to discuss whether this could simply be due to the consensus statements being more watered down and thus less action-relevant. Still, the paper is a promising step towards aligning models with groups of humans.
**[**[**Link**](https://arxiv.org/abs/2211.15006)**]**
---
**Robustness**
==============
### **Scaling laws for reward overoptimization**
Reinforcement learning techniques, such as those used to improve the general capabilities of language models, often optimize a model to give outputs that are rated highly by a proxy for some “gold standard.” For example, a proxy might be trained to predict how particular humans would react to an output. A difficulty, also mentioned earlier in the newsletter, is proxy gaming, where the model improves performance according to the proxy while failing to do so on the underlying gold standard (e.g., what humans would actually think).
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b16d7f9-ace2-474e-b083-adc47f71b0da_1600x1105.png)
This paper empirically studies how language models trained with reinforcement learning can over optimize proxy reward, and develops scaling laws describing this phenomenon. To do this, they use a (proxy) model as the gold standard, and build a set of proxy models that approximate that gold standard model. In addition to measuring models optimized with reinforcement learning, they find that over optimization can also happen with best-of-n sampling.
**[**[**Link**](https://arxiv.org/abs/2210.10760)**]**
### **Adaptive models can be exploited by adversaries**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf65a777-1521-4afe-9a05-21c7419b4e67_1600x551.png)
Many deep learning models aren’t robust to distribution shifts. One potential solution to this is test-time adaptation (TTA), where a model is modified based on the test data it sees. This paper demonstrates that TTA is subject to adversarial attacks, where malicious test data can cause predictions about clean data to be incorrect. This means that adaptive models have yet another attack surface that can potentially be exploited. The authors develop several kinds of attacks: targeted (degrade accuracy of a particular sample), indiscriminate (degrade accuracy in general), and “stealthy targeted” (degrade accuracy of a particular sample while not otherwise reducing accuracy). The attacks are conducted with projected gradient descent, and tested with the ImageNet-C dataset as the OOD dataset. The authors also find that models designed to be adversarially robust are also more robust to this attack.
**[**[**Link**](https://arxiv.org/abs/2301.12576)**]**
### **Other Robustness News**
**[**[**Link**](https://arxiv.org/abs/2302.04638)**]** Better diffusion models can improve adversarial training when used to generate data.
**[**[**Link**](https://arxiv.org/abs/2301.06294)**]** Proposes a method for adapting RL policies to environments with random shocks, augmenting training with simulations of the post-shock environment.
**Systemic Safety**
===================
### **Applying Systems Safety to ML**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d5e6fbf-3446-41ba-b92f-45b526664a24_1600x287.png)
Systems safety engineering is widely used for safety analysis in many industries. The impetus for this discipline was the understanding that safety does not merely depend on the performance or reliability of individual components (e.g., ML models), but may also depend on assuring the safe interoperation of multiple systems or components (including human systems such as corporations). This paper advocates the use of systems safety engineering methods for analyzing the safety of machine learning models.
**[**[**Link**](https://arxiv.org/abs/2302.02972)**]**
### **Other Systemic Safety News**
**[**[**Link**](https://arxiv.org/abs/2302.06588)**]** This paper proposes methods to “immunize” images against manipulation by diffusion models, potentially reducing the risk of the models being used for disinformation.
**Other Content**
=================
**[**[**Link**](https://course.mlsafety.org/about)**] The ML Safety course**
If you are interested in learning about cutting-edge ML Safety research in a more comprehensive way, there is now a course with lecture videos, written assignments, and programming assignments. It covers technical topics in Alignment, Monitoring, Robustness, and Systemic Safety.
**[**[**Link**](https://www.reddit.com/r/mlsafety/)**] ML Safety Reddit**
The ML Safety Reddit is frequently updated to include the latest papers in the field.
**[**[**Link**](https://twitter.com/topofmlsafety)**] Top of ML Safety Twitter**
This Twitter account tweets out papers posted on the ML Safety Reddit. |
4bc1b069-df7f-44fd-af6d-169cf3164798 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What are some introductions to AI safety?
Note that some of these introductions are from over 5 years ago. Given how quickly the field of AI progresses, some of these older introductions could use an update (e.g. Nick Bostrom’s 2014 book *Superintelligence* has little focus on modern deep learning systems).
## Quick reads (under ~10 minutes)
- [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/) (Nate Soares)
- [We must slow down the race to God-like AI](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2) (Ian Hogarth)
- [Building safe artificial intelligence: specification, robustness, and assurance](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1) (DeepMind Safety Research)
- [Nobody’s on the ball on AGI alignment](https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/) (Leopold Aschenbrenner)
- [Frequent arguments about alignment](https://www.alignmentforum.org/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment) (John Schulman)
- [AI alignment](https://en.wikipedia.org/wiki/AI_alignment) (Wikipedia)
- [Of Myths And Moonshine](https://www.edge.org/conversation/the-myth-of-ai#26015) (Stuart Russell)
- [Intro to AI Safety](https://aizi.substack.com/p/intro-to-ai-safety) (Robert Huben)
- [Explore Your AI Risk Perspectives: An Interactive Walkthrough of Researchers' Most Frequent Interview Responses](https://ai-risk-discussions.org/perspectives/introduction) (AI Risk Discussions)
- [Will AI really cause a catastrophe?](https://www.maisi.club/about) (Michigan AI Safety Initiative)
- [What is the alignment problem?](https://aligned.substack.com/p/what-is-alignment) (Jan Leike)
- [Why does powerful Artificial Intelligence pose a risk that could make all of our lives much, much worse in the coming years?](https://twitter.com/MaxCRoser/status/1651598037679063040) (Max Roser)
- [The existential risk of superintelligent AI](https://pauseai.info/xrisk) (PauseAI)
- [Global risk from deep learning: 1 - The case for risk](https://www.danieldewey.net/risk/case.html) (Daniel Dewey)
- [AI is Not an Arms Race](https://time.com/6283609/artificial-intelligence-race-existential-threat/) (Katja Grace)
- [Complex Systems are Hard to Control](https://bounded-regret.ghost.io/complex-systems-are-hard-to-control/) (Jacob Steinhardt)
- [a casual intro to AI doom and alignment](https://carado.moe/ai-doom.html) (Tamsin Leake)
- [Basics of AI Wiping Out All Value in the Universe, Take 1](https://www.lesswrong.com/posts/WkchhorbLsSMbLacZ/ai-1-sydney-and-bing#Basics_of_AI_Wiping_Out_All_Value_in_the_Universe__Take_1) (Zvi Mowshowitz)
- [Marius alignment pitch](https://docs.google.com/document/d/18y0x3ogQau0CyN5a9QYaAUCca8C4bHdEWBK5f4jlO7k/edit#) (Marius Hobbhahn)
- [This Changes Everything](https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html) (Ezra Klein)
- [How AI could accidentally extinguish humankind](https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/) (Émile Torres)
- [My current summary of the state of AI risk](https://musingsandroughdrafts.com/2023/02/17/my-current-summary-of-the-state-of-ai-risk/) (Eli Tyre)
- [AI doom from an LLM-plateau-ist perspective](https://www.alignmentforum.org/posts/KJRBb43nDxk6mwLcR/ai-doom-from-an-llm-plateau-ist-perspective) (Steve Byrnes)
- [Why Uncontrollable AI Looks More Likely Than Ever](https://time.com/6258483/uncontrollable-ai-agi-risks/) (Otto Barten, Roman Yampolskiy)
## Short(ish) introductions
- [The case for taking AI seriously](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) (or a similar argument in [500 words](https://www.vox.com/future-perfect/2019/2/12/18202466/ai-artificial-intelligence-humanity-threat)) (Kelsey Piper)
- [The alignment problem from a deep learning perspective](https://arxiv.org/abs/2209.00626) (Richard Ngo, Lawrence Chan, Sören Mindermann)
- [More Is Different for AI](https://bounded-regret.ghost.io/more-is-different-for-ai/) blog post series (Jacob Steinhardt)
- [Why I Think More NLP Researchers Should Engage with AI Safety Concerns](https://wp.nyu.edu/arg/why-ai-safety/) (Sam Bowman)
- [How Rogue AIs may Arise](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/) (Yoshua Bengio)
- [FAQ on Catastrophic AI Risks](https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/) (Yoshua Bengio)
- [The Need For Work On Technical AI Alignment](https://www.agisafetyfundamentals.com/alignment-introduction) (Daniel Eth)
- [Why alignment could be hard with modern deep learning](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) (Ajeya Cotra)
- [The basic reasons I expect AGI ruin](https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin) (Rob Bensinger)
- [Altruists Should Prioritize Artificial Intelligence](https://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/) (Lukas Gloor)
- [Clarifying AI X-risk](https://www.alignmentforum.org/posts/GctJD5oCDRxCspEaZ/clarifying-ai-x-risk) and [Threat Model Literature Review](https://www.alignmentforum.org/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review) (DeepMind's AGI safety team)
- [AI experts are increasingly afraid of what they’re creating](https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction) (Kelsey Piper)
- [Why worry about future AI?](https://www.gleech.org/ai-risk) (Gavin Leech)
- [How to navigate the AI apocalypse as a sane person](https://erikhoel.substack.com/p/how-to-navigate-the-ai-apocalypse) (Eric Hoel)
- [AGI Ruin: A list of lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) (Eliezer Yudkowsky); also see [Where I agree and disagree with Eliezer](https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer) (Paul Christiano)
- [No Time Like The Present For AI Safety Work](https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/) (Scott Alexander)
- [AI x-risk, approximately ordered by embarrassment](https://www.alignmentforum.org/posts/mSF4KTxAGRG3EHmhb/ai-x-risk-approximately-ordered-by-embarrassment) (Alex Lawsen)
- [Ethical Issues in Advanced Artificial Intelligence](https://nickbostrom.com/ethics/ai) (Nick Bostrom)
- [Benefits & Risks of Artificial Intelligence](https://futureoflife.org/ai/benefits-risks-of-artificial-intelligence/) (Ariel Conn)
- [The case for how and why AI might kill us all](https://newatlas.com/technology/ai-danger-kill-everyone/) (Loz Blain)
- [Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity](https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/) (Holden Karnofsky)
- [A newcomer’s guide to the technical AI safety field](https://www.alignmentforum.org/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field) (Chin Ze Shen)
- [Q & A: The future of artificial intelligence](https://people.eecs.berkeley.edu/~russell/research/future/q-and-a.html) (Stuart Russell)
- [The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) (Steve Omohundro)
- [AI Risk Intro 1: Advanced AI Might Be Very Bad](https://www.lesswrong.com/posts/bJgEMfiD48fEJJxjm/ai-risk-intro-1-advanced-ai-might-be-very-bad) (TheMcDouglas, LRudL)
- [Distinguishing AI takeover scenarios](https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios) + [Investigating AI takeover scenarios](https://www.alignmentforum.org/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios) (Sam Clarke, Samuel Martin)
- [Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well](https://ourworldindata.org/ai-impact) (Max Roser)
- [Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf) (Luke Muehlhauser, Anna Salamon)
- [The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf) (Nate Soares)
- [Uncontrollable AI as an Existential Risk](https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk) (Karl von Wendt)
- [Current and Near-Term AI as a Potential Existential Risk Factor](https://users.cs.utah.edu/~dsbrown/readings/existential_risk.pdf) (Benjamin S. Bucknall, Shiri Dori-Hacohen)
- [AI Risk for Epistemic Minimalists](https://www.alignmentforum.org/posts/8fpzBHt7e6n7Qjoo9/ai-risk-for-epistemic-minimalists) (Alex Flint)
## Longer introductions
- [Preventing an AI-related catastrophe](https://80000hours.org/problem-profiles/artificial-intelligence/) (Benjamin Hilton); also see [this summary](https://forum.effectivealtruism.org/posts/btFBFdYEn2PbuwHwt/summary-of-80k-s-ai-problem-profile)
- [An Overview of Catastrophic AI Risks](https://arxiv.org/abs/2306.12001) (Dan Hendrycks, Mantas Mazeika, Thomas Woodside)
- [The “most important century” blog post series summary](https://www.cold-takes.com/most-important-century/#Summary) and the ["implications of most important century” posts](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/) like [AI could defeat all of us combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) and [Why would AI “aim” to defeat humanity?](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) and [How we could stumble into AI catastrophe](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/) (Holden Karnofsky)
- [Current work in AI alignment](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) (Paul Christiano)
- [A gentle introduction to why AI *might* end the human race](https://medium.com/@NotesOnAIAlignment/a-gentle-introduction-to-why-ai-might-end-the-human-race-4670f4b5cdec) (Michael Tontchev)
- [Natural Selection Favors AIs over Humans](https://drive.google.com/file/d/1p4ZAuEYHL_21tqstJOGsMiG4xaRBtVcj/view) (Dan Hendrycks)
- [Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916) (Dan Hendrycks)
- [X-Risk Analysis for AI Research](https://arxiv.org/abs/2206.05862) (Dan Hendrycks, Mantas Mazeika)
- [Aisafety.info](https://aisafety.info/) ([Stampy team](https://get_involved.aisafety.info/))
- [Is Power-Seeking AI an Existential Risk?](https://arxiv.org/pdf/2206.13353.pdf) + [shortened version](https://jc.gatspress.com/pdf/existential_risk_and_powerseeking_ai.pdf) + [presentation](https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/video-and-transcript-of-presentation-on-existential-risk) (Joseph Carlsmith; also see this [shorter version](https://jc.gatspress.com/pdf/existential_risk_and_powerseeking_ai.pdf))
- [AGI safety from first principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) (Richard Ngo)
- [Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover](https://www.cold-takes.com/without-specific-countermeasures-the-easiest-path-to-transformative-ai-likely-leads-to-ai-takeover/) + [presentation](https://www.youtube.com/watch?v=EIhE84kH2QI) (Ajeya Cotra)
- [The AI Revolution: The Road to Superintelligence](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) and [The AI Revolution: Our Immortality or Extinction](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html) + some [corrections from Luke Muehlhauser](https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/) (Tim Urban)
- [AI as a Positive and Negative Factor in Global Risk](https://intelligence.org/files/AIPosNegFactor.pdf) (Eliezer Yudkowsky)
- [Extinction Risk from Artificial Intelligence](https://aisafety.wordpress.com/) (Michael Cohen)
- [Set Sail For Fail? On AI risk](https://nintil.com/ai-safety) (José Luis Ricón Fernández de la Puente)
- [A shift in arguments for AI risk](https://bayes.net/prioritising-ai/) (Tom Adamczewski)
- [Uncontrollability of AI](https://www.researchgate.net/publication/343812745_Uncontrollability_of_AI) (Roman Yampolskiy)
- [Thoughts on AGI safety from the top](https://www.alignmentforum.org/posts/ApLnWjgMwBTJt6buC/thoughts-on-agi-safety-from-the-top) (jylin04)
- [Disjunctive Scenarios of Catastrophic AI Risk](https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf) (Kaj Sotala; see [these highlights](https://www.lesswrong.com/posts/8uJ3n3hu8pLXC4YNE/some-conceptual-highlights-from-disjunctive-scenarios-of-1))
- [Modeling Transformative AI Risks (MTAIR) Project -- Summary Report](https://arxiv.org/abs/2206.09360) (Sam Clarke et al.)
## Overviews of various research areas:
- [Transformative AI Governance: A Literature Review](https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit?usp=sharing) (draft by Matthijs Maas)
- [Papers](https://haist.ai/papers) (Harvard AI Safety Team)
- [My Overview of the AI Alignment Landscape: A Bird's Eye View](https://www.lesswrong.com/posts/SQ9cZtfrzDJmw9A2m/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view) (Neel Nanda)
- [(My understanding of) What Everyone in Technical Alignment is Doing and Why](https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is) (Thomas Larsen, Eli Lifland) + [Alignment Org Cheat Sheet](https://www.lesswrong.com/posts/9TWReSDKyshfA66sz/alignment-org-cheat-sheet) (Thomas Larsen, Akash Wasil)
- [Framing AI strategy](https://aiimpacts.org/framing-ai-strategy/) (Zach Stein-Perlman)
- “[What you can do concretely to help](https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help)” section of “Preventing an AI-related catastrophe” (Benjamin Hilton)
- [The longtermist AI governance landscape: a basic overview](https://forum.effectivealtruism.org/posts/ydpo7LcJWhrr2GJrx/the-longtermist-ai-governance-landscape-a-basic-overview) (Sam Clarke)
- [A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers](https://forum.effectivealtruism.org/posts/xMzXbnpPeKWpTi3Gt/a-brief-overview-of-ai-safety-alignment-orgs-fields) (Austin Witte)
- [AI Governance: A Research Agenda](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf) (Allan Dafoe)
- [Racing through a minefield: the AI deployment problem](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/) (Holden Karnofsky)
- [An overview of 11 proposals for building safe advanced AI](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) (Evan Hubinger)
- [2021 Alignment Literature Review and Charity Comparison](https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison) (Larks)
- [AI Research Considerations for Human Existential Safety (ARCHES)](https://arxiv.org/abs/2006.04948) (Andrew Critch, David Krueger)
- [A descriptive, not prescriptive, overview of current AI Alignment Research](https://www.alignmentforum.org/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai) (Jan Hendrik Kirchner, Logan Riggs Smith, Jacques Thibodeau, janus)
- [AGI Safety Literature Review](https://arxiv.org/abs/1805.01109) (Tom Everitt, Gary Lea, Marcus Hutter)
- [AI Alignment Research Overview](https://www.alignmentforum.org/posts/7GEviErBXcjJsbSeD/ai-alignment-research-overview-by-jacob-steinhardt) (Jacob Steinhardt)
- [On how various plans miss the hard bits of the alignment challenge](https://www.lesswrong.com/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment) (Nate Soares)
- [Some AI research areas and their relevance to existential safety](https://www.lesswrong.com/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1) (Andrew Critch)
- [A newcomer’s guide to the technical AI safety field](https://www.lesswrong.com/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field) (zeshen)
- [Open Problems in AI X-Risk [PAIS #5]](https://www.alignmentforum.org/posts/5HtDzRAk7ePWsiL2L/open-problems-in-ai-x-risk-pais-5) (Dan Hendrycks, Thomas Woodside)
- [Anti-Literature Review](https://www.alignmentforum.org/posts/XtBJTFszs8oP3vXic/ai-x-risk-greater-than-35-mostly-based-on-a-recent-peer#Appendix_A__Anti_Literature_Review) from “AI X-risk >35% mostly based on a recent peer-reviewed argument” (Michael Cohen)
- [AI Governance & Strategy: Priorities, talent gaps, & opportunities](https://www.lesswrong.com/posts/hAnKgips7kPyxJRY3/ai-governance-and-strategy-priorities-talent-gaps-and) (Akash Wasil)
## Podcasts and videos (see [https://aisafety.video](https://aisafety.video) for more)
- [Eliezer Yudkowksy interview with Sam Harris](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/)
- [Richard Ngo](https://axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html) and [Paul Christiano](https://axrp.net/episode/2021/12/02/episode-12-ai-xrisk-paul-christiano.html) on [AXRP](https://axrp.net/)
- [Brian Christian](https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/) and [Ben Garfinkel](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/) on the [80,000 Hours Podcast](https://80000hours.org/podcast/)
- [Ajeya Cotra](https://www.youtube.com/watch?v=IKFQfYaJ0AY) and [Rohin Shah](https://www.youtube.com/watch?v=_5xkh-Rh6Ec) on the [Future of Life Institute Podcast](https://futureoflife.org/project/future-of-life-institute-podcast/)
- [Researcher Perceptions of Current and Future AI](https://www.youtube.com/watch?v=yl2nlejBcg0) ([transcript](https://forum.effectivealtruism.org/posts/q49obZkQujkYmnFWY/vael-gates-risks-from-advanced-ai-june-2022)) (Vael Gates; also see [Risks from Highly-Capable AI](https://forum.effectivealtruism.org/posts/WqQDKKgZTdFe6GAFq/vael-gates-risks-from-highly-capable-ai-march-2023-1))
- [Intro to AI Safety, Remastered](https://www.youtube.com/watch?v=pYXy-A4siMw) (Rob Miles)
- [Ensuring smarter-than-human intelligence has a positive outcome](https://intelligence.org/2017/04/12/ensuring/) (Nate Soares)
- [AI Alignment: Why It's Hard, and Where to Start](https://www.youtube.com/watch?v=EUjc1WuyPT8) (Eliezer Yudkowsky)
- Some audio recordings of the readings above (e.g. [Cold Takes Audio](https://podcasts.apple.com/us/podcast/cold-takes-audio/id1580097837), [reading of 80k intro](https://podcasts.apple.com/us/podcast/preventing-an-ai-related-catastrophe-article/id1245002988?i=1000582699751), [EA Forum posts](https://forum.effectivealtruism.org/posts/K5Snxo5EhgmwJJjR2/announcing-audio-narrations-of-ea-forum-posts-1), [EA Radio](https://podcasts.apple.com/us/podcast/ea-radio/id1370275378), [Astral Codex Ten Podcast](https://sscpodcast.libsyn.com/), [Less Wrong Curated Podcast](https://www.lesswrong.com/posts/kDjKF2yFhFEWe4hgC/announcing-the-lesswrong-curated-podcast), [Nonlinear Library](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library))
## Courses:
- [AGI Safety Fundamentals](https://www.agisafetyfundamentals.com/ai-alignment-curriculum)
- [AI Alignment Course](https://www.agisafetyfundamentals.com/ai-alignment-curriculum)
- [AI Alignment Course: In-session readings version](https://www.agisafetyfundamentals.com/alignment-insession-readings)
- [AI Governance Course](https://www.agisafetyfundamentals.com/ai-governance-curriculum)
- [AI Alignment 201 Course](https://www.agisafetyfundamentals.com/alignment-201-curriculum)
- [Resources](https://www.agisafetyfundamentals.com/resources)
- [Intro to ML Safety lectures](https://course.mlsafety.org/) and [online course](https://www.mlsafety.org/intro-to-ml-safety)
- [[shared] "Key Phenomena in AI Risk" - Reading Curriuclum](https://docs.google.com/document/d/1HGzMBMXQD9w9K32scqCoSmZNGbxLJE8-siPlonTQz6s/edit) (see the [course announcement](https://www.alignmentforum.org/posts/mqvxR9nrXAzRr3ow9/announcing-key-phenomena-in-ai-risk-facilitated-reading))
- [STS 10SI: Intro to AI Alignment Syllabus [Public]](https://docs.google.com/document/d/1NX0DlZRzD3NP7tBeLjMh76w7-w2s8SxV3wj0P7EYpKY/edit) from Stanford, a modified version of the Alignment Fundamentals curriculum
- [Safety and Control for Artificial General Intelligence (Fall 2018)](https://inst.eecs.berkeley.edu/~cs294-149/fa18/) from UC Berkeley
## Other / misc:
- **Books**: *The Alignment Problem* by Brian Christian, *Human Compatible* by Stuart Russell*, Superintelligence* by Nick Bostrom, *Life 3.0* by Max Tegmark, *[Smarter Than Us](https://smarterthan.us/toc/)* by Stuart Armstrong, [Better without AI](https://betterwithout.ai/) by [David Chapman](https://twitter.com/Meaningness/status/1625139350005764096), AI sections in *The Precipice* by Toby Ord and *What We Owe The Future* by William MacAskill
- Also see [this post](https://forum.effectivealtruism.org/posts/BxgwGYFuKFu5ioBjs/seeking-input-on-a-list-of-ai-books-for-broader-audience) for a fairly comprehensive list of non-fiction, non-technical books about AI.
- [Non-Technical Introduction to AI Safety](https://haist.ai/non-technical-intro-to-ai-safety) (Harvard AI Safety Team)
- [Convergence publications](https://docs.google.com/document/d/1ok1nogrd0VrK51MCGtULh2aoB-NXjA5tG11mWBsy1WQ/edit) from [Convergence Analysis](https://www.convergenceanalysis.org/research/)
- [Stuart Russell's collection of research and media appearances](https://people.eecs.berkeley.edu/~russell/research/future/)
- [Zvi Mowshowitz’s Substack](https://thezvi.substack.com/) has excellent AI coverage
- [Results of the AI Safety Arguments Competition](https://docs.google.com/spreadsheets/d/e/2PACX-1vRgIYiqiFevNu0m3bOzKeJ7S2ugkq2imYmbCicXPYtKTpRXKBMSZmfhbL-C_v_KQKob57e5QUtcuUqP/pubhtml)
- [AI alignment resources](https://vkrakovna.wordpress.com/ai-safety-resources/) (Victoria Krakovna)
- [Resources I sent to AI researchers about AI safety](https://forum.effectivealtruism.org/posts/8sAzgNcssH3mdb8ya/resources-i-send-to-ai-researchers-about-ai-safety) (Vael Gates)
- [AI Risk Discussions: Resources](https://ai-risk-discussions.org/resources)
- Wait But Why: [Part 1](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) - [Part 2](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html) - [Reply from Luke Muehlhauser](https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/)
- 2021 MIRI Conversations: [Ngo-Yudkowsky](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty) ([ACX summary](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky)), [Christiano-Yudkowsky](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds) ([ACX summary](https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai)), others
- [A Response to Steven Pinker on AI](https://www.youtube.com/watch?v=yQE9KAbFhNY) (Rob Miles)
- [Arbital AI Alignment list](https://arbital.greaterwrong.com/explore/ai_alignment/)
- [Our World In Data: AI](https://ourworldindata.org/artificial-intelligence#research-and-writing)
- [Nine Things You Should Know About AI](https://www.bbc.co.uk/programmes/articles/3pVB9hLv8TdGjSdJv4CmYjC/nine-things-you-should-know-about-ai) (Stuart Russell)
- FAQs: [superintelligence](https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq) (Scott Alexander), [intelligence explosion](https://intelligence.org/ie-faq/) (Luke Muehlhauser)
- [CHAI bibliography](https://humancompatible.ai/bibliography)
- [Paths to failure](https://www.lesswrong.com/posts/yv4xAnkEyWvpXNBte/paths-to-failure) (Karl von Wendt et al.)
- [AI Alignment Is Turning from Alchemy Into Chemistry](https://guzey.com/ai/alignment-alchemy/) (Alexey Guzey)
|
90ff3b6c-573b-46b1-9ef1-214ae883b843 | trentmkelly/LessWrong-43k | LessWrong | A review of cryonics/brain preservation in 2016
Relevance to Less Wrong: Whether you think it is for better or worse, users on LW are about 50,000x more likely to be signed up for cryonics than the average person.
Disclaimer: I volunteer at the Brain Preservation Foundation, but I speak for myself in this post and I'm only writing about publicly available information.
In 2016, cryonics remains a fringe operation. When it is discussed in the news or on social media, many express surprise that cryonics is a "real thing" outside of science fiction. Many others who do know about cryonics tend to label it a pseudoscience. Brain preservation (BP) through non-conventional cryonics methods such as those using aldehyde fixation is even more fringe, with most people not aware of it, and others dismissing it because it uses "toxic" chemicals.
Here's a rundown of some events important to cryonics/BP in 2016.
Research progress
- The Brain Preservation Foundation prize was won in February by Robert McIntyre and Greg Fahy. Their winning technique uses glutaraldehyde fixation followed by glycerol cryoprotection (in addition to a step to improve blood-brain barrier permeability and several other components) and allows for the preservation of neural structure as verified by electron microscopy across the cortex. McIntyre has since started a company called Nectome in part to improve and refine this procedure.
- Aschwin de Wolf of Advanced Neural Biosciences announced in November at the CryoSuisse conference that Advanced Neural Biosciences has developed a method that reduces dehydration in rat brain vitrification by using "brain optimized cryoprotectants." There is no peer-reviewed data or more detailed procedure available as of yet, and viability of the tissue may be a concern.
Legal progress
- In Canada, Keegan Macintosh and Carrie Wong are challenging the anti-cryonics laws in British Columbia.
- A right-to-die law passed in Colorado. Although not directly relevant to cryonics, it increases the number of locatio |
7acf3720-6667-45b5-95cb-b187ddba3036 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Reinforcement Learning under Threats
1 Introduction
---------------
Markov decision processes (MDP) [[Howard1960](#bib.bibx6)] provide
a mathematical framework for modeling a single agent making decisions
while interacting within an environment. We refer to this agent as the
decision maker (DM, she). MDPs have been widely used to study
reinforcement learning (RL) problems. More precisely, a MDP consists of a tuple (S,A,T,r) where S is the state space; A denotes the set of actions available to the agent; T:S×A→Δ(S) is the transition distribution, where Δ(X) denotes the set of all distributions over set X;
and, finally, r:S×A→Δ(R)
is the reward distribution (the utility the agent perceives from a given state
and action). A common approach to solving MDPs is based on Q-learning, [[Sutton and
Barto1998](#bib.bibx22)].
In it, the agent maintains a table Q:S×A→R that estimates the DM’s expected cumulative reward,
iterating according to the following update equation
| | | | |
| --- | --- | --- | --- |
| | Q(s,a):=(1−α)Q(s,a)++α(r(s,a)+γmaxa′Q(s′,a′)), | | (1) |
where α is a learning rate hyperparameter
and s′ is the state the agent arrives at after choosing action a in
state s and receiving reward r(s,a). While learning, the agent
could choose actions according to a greedy policy (π(s)=argmaxaQ(s,a)) yet it
is crucial to add stochasticity so that the agent can balance the exploration-exploitation trade-off, for instance with an ϵ−greedy policy.
However, when non stationary environments are considered, as when there are other
learning agents that interfere with the DM’s rewards, Q-learning leads
to suboptimal results [[Busoniu, Babuska, and
De Schutter2010](#bib.bibx5)].
###
1.1 Related Work
Several extensions of Q-learning in multi-agent settings have been developed
in the literature, including minimax-Q [[Littman1994](#bib.bibx11)],
Nash-Q [[Hu and Wellman2003](#bib.bibx7)] or friend-or-foe-Q [[Littman2001](#bib.bibx12)],
to name but a few.
We propose here to extend Q-learning from an Adversarial Risk Analysis (ARA),
[[Rios Insua, Rios, and
Banks2009](#bib.bibx18)] perspective, in particular, through
a level-k scheme [[Stahl and
Wilson1994](#bib.bibx20)], [[Stahl and Wilson1995](#bib.bibx21)].
Within the bandit literature, the celebrated [[Auer et al.1995](#bib.bibx2)] introduced
a non-stationary setting in which the reward process is controlled by an adversary.
The adversarial machine learning literature has predominantly focused on the supervised setting [[Biggio and Roli2017](#bib.bibx4)].
Other recent works tackle the problem of adversarial examples in RL [[Huang et al.2017](#bib.bibx8), [Lin et al.2017](#bib.bibx10)] though they focus on visual inputs.
Moreover, previous game-theoretical approaches to this problem have focused on modeling the whole multi-agent system as a game. Instead we shall face the problem of prescribing decisions to a single agent
versus her opponents, augmenting the MDP to account for potential adversaries. We present such variant of MDPs,
which we call Threatened MDPs (TMDPs), in the next section.
2 Threatened MDPs
------------------
In similar spirit to other reformulations of MDPs such as Constrained Markov Decision Processes (CMDP) [[Altman1999](#bib.bibx1)] or Configurable Markov Decision Processes (Conf-MDP) [[Metelli, Mutti, and
Restelli2018](#bib.bibx13)], we propose an augmentation of the MDP
to account for the presence of adversaries. In this paper, we restrict to the case of a DM facing a single opponent (he), leaving the extension to
a setting with multiple adversaries for future work.
A *Threatened Markov Decision Process* (TMDP) is a tuple (S,A,B,T,r,pA)
in which S is the state space; A denotes the set of actions available to the supported agent; B designates the set of threat actions, or actions
available to the adversary; T:S×A×B→Δ(S) is the transition distribution;
r:S×A×B→Δ(R) is the reward distribution (the utility the agent perceives from a given state
and action pair); and pA(b|s) models the beliefs that the DM has about
his opponent move, i.e., a distribution over B for each state s∈S.
We propose to replace the standard Q-learning rule (Eq.[1](#S1.E1 "(1) ‣ 1 Introduction ‣ Reinforcement Learning under Threats")) by
| | | | |
| --- | --- | --- | --- |
| | Q(s,a,b):=(1−α)Q(s,a,b)++α(r(s,a,b)+γmaxa′EpA(b|s′)[Q(s′,a′,b)]) | | (2) |
and compute its expectation over the opponent’s action argument
| | | | |
| --- | --- | --- | --- |
| | Q(s,a):=EpA(b|s)[Q(s,a,b)]. | | (3) |
This may be used to compute an ϵ−greedy policy for the DM, i.e., choosing
with probability (1−ϵ) the action a=argmaxa[Q(s,a)] or a uniformly random action with probability ϵ when
the DM is at state s.
In what follows we introduce two lemmas showing
that the previous update rules are fixed point iterations of contraction mappings.
######
Lemma 1.
Given q:S×B×A→R, the following operator H is a contraction mapping
| | | |
| --- | --- | --- |
| | (Hq)(s,b,a)=∑s′p(s′|s,b,a)[r(s,b,a)+ | |
| | +γmaxa′Ep(b′|s′)q(s′,b′,a′)]. | |
###### Proof.
We show that H is a contraction under the supremum norm, i.e., ∥Hq1−Hq2∥∞≤γ∥q1−q2∥∞.
| | | |
| --- | --- | --- |
| | ∥Hq1−Hq2∥∞= | |
| | =maxs,b,a|∑s′p(s′|s,b,a)[r(s,b,a)+γmaxa′Ep(b′|s′)q1(s′,b′,a′) | |
| | −r(s,b,a)−γmaxa′Ep(b′|s′)q2(s′,b′,a′)]|= | |
| | | |
| | −maxa′Ep(b′|s′)q2(s′,b′,a′)]|≤ | |
| | =γmaxs,b,a∑s′p(s′|s,b,a)|maxa′Ep(b′|s′)q1(s′,b′,a′) | |
| | −maxa′Ep(b′|s′)q2(s′,b′,a′)|≤ | |
| | =γmaxs,b,a∑s′p(s′|s,b,a)maxa′,z|Ep(b′|z)q1(z,b′,a′) | |
| | −Ep(b′|z)q2(z,b′,a′)|≤ | |
| | =γmaxs,b,a∑s′p(s′|s,b,a)maxa′,z,b′|q1(z,b′,a′)−q2(z,b′,a′)|= | |
| | =γmaxs,b,a∑s′p(s′|s,b,a)∥q1−q2∥∞= | |
| | =γ∥q1−q2∥∞. | |
∎
######
Lemma 2.
Let ¯q:S×A→R. The following operator ¯H is a contraction mapping
| | | |
| --- | --- | --- |
| | (¯H¯q)(s,a)==Ep(b|s)[∑s′p(s′|s,b,a)(r(s,b,a)+γmaxa′¯q(s′,a′))]. | |
###### Proof.
Similar to Lemma [1](#Thmlemma1 "Lemma 1. ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats"), using the property that Eaf(a)≤maxaf(a) for any distribution p(a).
∎
However, in real life scenarios there will be uncertainty regarding the adversary’s policy pA(b|s). Therefore, we propose using a level-k scheme [[Rios Insua, Rios, and
Banks2009](#bib.bibx18)]
to learn the opponent model.
In general, we consider both the DM and the adversary
as rational agents that aim to maximize their respective expected cumulative rewards, though we start with a case in which the adversary is considered non-strategic (Section [2.1](#S2.SS1 "2.1 Non-strategic opponent ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats")). Then, we go up a level in the level-k hierarchy, considering the adversary a level-1 agent and the DM a level-2 one (Section [2.2](#S2.SS2 "2.2 Level-k thinking ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats")).
###
2.1 Non-strategic opponent
We begin by considering a stateless setting. The Q-function may be written then
as Q(ai,bj), with ai∈A the action chosen by the DM, and bj∈B the action chosen by the adversary. We assume that the supported DM is a joint action learner (i.e., she observes her opponent’s actions after he has committed them). At every iteration, the DM shall choose her action maximizing her expected cumulative reward. However, she needs to predict the action bj chosen by her opponent. A typical option is to model her adversary using fictitious play (FP), i.e., she may compute the expected utility of action ai via
| | | |
| --- | --- | --- |
| | ψ(ai)=∑bj∈BQ(ai,bj)pA(bj) | |
where pA(bj) reflects A’s beliefs about her opponent’s actions and is computed using the empirical frequencies of the opponent past plays.
Then she may choose the action ai∈A that
maximizes her expected utility. In the following sections, we refer to this variant as FPQ-learning.
As described in [[Rios Insua, Banks, and Rios2016](#bib.bibx17)], it is possible to re-frame fictitious play from a Bayesian perspective. Let pj be the probability that the opponent chooses action bj. We may place a Dirichlet prior (p1,…,pn)∼D(α1,…,αn). Then, the posterior has the analytical form D(α1+h1,…,αn+hn), with hi being the count of action bi, i=1,...,n. If we denote the posterior density function as f(p|h), then the DM would choose the action ai maximizing her expected utility, which now takes the form
| | | | | |
| --- | --- | --- | --- | --- |
| | | ψ(ai) | =∫⎡⎣∑bj∈BQ(ai,bj)pj⎤⎦f(p|h)dp | |
| | | = | ∑bj∈BQ(ai,bj)Ep|h[pi]∝∑bj∈BQ(ai,bj)(αi+hi). | |
The Bayesian perspective may benefit the convergence of Q-learning, as we may include prior information about the adversary behavior when relevant.
Generalizing the previous approach to account for states is straightforward. Now the Q-function has the form Q(ai,bj,s), where s is the state of the TMDP. The DM may need to asses probabilities of the form pA(bj|s), since it is natural to expect that her opponent behaves differently depending on the state of the game and, consequently, depending also on previous actions. As before, the supported DM may choose her action at state s by maximizing
| | | |
| --- | --- | --- |
| | ψs(ai)=∑bj∈BQ(ai,bj,s)pA(bj|s). | |
Since the state space S may be huge (or even continuous), keeping track of pA(bj|s) may incur in prohibitive memory costs. Bayes rule may turn out to be useful, using
| | | |
| --- | --- | --- |
| | pA(bj|s)∝p(s|bj)p(bj). | |
[[Tang et al.2017](#bib.bibx23)] propose an efficient method using a hash table or a bloom filter to maintain a count of the number of times an agent visits each state s, p(s). This is only used in the context of single-agent RL to assist for better exploration of the environment. We propose to keep track of |B|=n bloom filters, one for each distribution p(s|bj), for tractable computation of the opponent’s intentions in the TMDP setting.
The previous scheme may be transparently integrated with the Bayesian paradigm: we only need to store an additional array with the Dirichlet prior parameters αi, i=1,…,n
for the p(bj) part. Potentially, we could store initial pseudocounts as priors for each bj|s initializing the bloom filters with the corresponding parameter values.
If we assume the opponent to have memory of the previous stage actions, we could straightforwardly extend the previous scheme using the concept of mixtures of Markov chains, as described in [[Raftery1985](#bib.bibx16)]. For example, in case the opponent belief model
is pA(bt|at−1,bt−1,st), so that the adversary recalls
the previous actions at−1 and bt−1, it could be factorized as a mixture
| | | |
| --- | --- | --- |
| | pA(bt|at−1,bt−1,st)=w1pA(bt|at−1)++w2pA(bt|bt−1)+w3pA(bt|st). | |
Then, if we allow for longer memories, instead of an exponential growth in the number of parameters, the complexity can be linearly controlled.
To conclude this section, we shall note that the described scheme is *model agnostic*, i.e., it does not matter if we represent the Q-function using a look-up table or a deep neural network (DQN), so we expect it to be usable in both shallow and deep multi-agent RL settings.
###
2.2 Level-k thinking
The previous section described how to model a level-0 opponent, i.e. a non strategic opponent, which can be practical in several scenarios. However, if the opponent is strategic, he may model the supported DM as a level-0 thinker, thus making the adversary a level-1 thinker. This chain can go up to infinity, so we will have to deal with modeling the opponent as a level-k thinker, with k bounded by the computational or cognitive resources of the DM.
To deal with it, we introduce a hierarchy of TMDPs in which
\emphTMDPki refers to the TMDP that agent i needs to optimize,
while considering its rival as a level-(k−1) thinker.
Thus, we have the following process:
* If the supported DM is a level-1 thinker, she may optimize for \emphTMDP1A. She then models B as a level-0 thinker (using Section [2.1](#S2.SS1 "2.1 Non-strategic opponent ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats")).
* If the supported DM is a level-2 thinker, she may optimize for \emphTMDP2A. She models B as a level-1 thinker. Consequently, this “modeled” B optimizes \emphTMDP1B, and while doing so, he models the DM as level-0 (Section [2.1](#S2.SS1 "2.1 Non-strategic opponent ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats")).
* In general, we have the chain of TMDPs:
| | | |
| --- | --- | --- |
| | \emphTMDPkA→\emphTMDPk−1B→⋯→\emphTMDP1B. | |
Exploiting the fact that we are in a repeated interaction setting (and by assumption that both agents can observe all past committed decisions and obtained rewards), each agent may estimate their counterpart’s Q-function, ^Qk−1:
if the DM is optimizing \emphTMDPkA, she will keep her own Q-function (we refer to it as Qk), and also an estimate ^Qk−1, of her opponent’s Q-function. This estimate may be computed by optimizing \emphTMDPk−1B and so on until k=1.
Finally, the top level DM’s policy is given by
| | | |
| --- | --- | --- |
| | argmaxaikQk(aik,bjk−1,s), | |
where bjk−1 is now given by
| | | |
| --- | --- | --- |
| | argmaxbjk−1^Qk−1(aik−2,bjk−1,s) | |
and so on, until we arrive at the induction basis (level-1) in which the opponent may be modeled using the fictitious play approach from Section [2.1](#S2.SS1 "2.1 Non-strategic opponent ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats").
QA, QB, αA,αB (DM and opponent Q-functions and learning rates, respectively).
Observe transition (s,a,b,rA,rB,s′) from the TMDP environment
QB(s,b,a):=(1−αB)QB(s,b,a)+αB(rB+γmaxb′EpB(a′|s′)[QB(s′,b′,a′)]) ▹ Level-1
Compute B’s estimated ϵ−greedy policy pA(b|s′) from QB(s,b,a)
QA(s,a,b):=(1−αA)QA(s,a,b)+αA(rA+γmaxa′EpA(b′|s′)[QA(s′,a′,b′))] ▹ Level-2
Algorithm 1 Level-2 thinking update rule
Note that in the previous hierarchy of policies the decisions are obtained in a greedy, deterministic manner (i.e. just by maximizing the lower level ^Q estimate). We may gain insight from the Bayesian / Risk Analysis communities by adding uncertainty to the policy at each level. For instance, at a certain level in the hierarchy, we could consider ϵ−greedy policies that with probability 1−ϵ choose an action according to the previous scheme, and with probability ϵ select a random action. Thus, we may impose distributions pk(ϵ) at each level k of the hierarchy. The mean of pk(ϵ) may be an increasing function with respect to the level k to account for the fact that in upper levels of thinking the uncertainty is higher. Other approaches to add uncertainty to the policies are left for future work.
\stackinset
c.5int.73in\stackunderLevel-2 (DM, denoted as A)
QA,
pA(b|s)\leftsquigarrow
\stackunderLevel-1 (Adv., denoted as B)
QB,
pB(a|s)\leftsquigarrow
\stackunderLevel-0 (DM)
Figure 1: Level-k thinking scheme, with k=2
Algorithm [1](#alg1 "Algorithm 1 ‣ 2.2 Level-k thinking ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats") specifies the approach
for a level-2 DM. Because she is a level-2 DM, we need to account for her Q-function, QA (equivalently Q2 from before), and that of her opponent (who will be level-1), QB (equivalently ^Q1). Figure [1](#S2.F1 "Figure 1 ‣ 2.2 Level-k thinking ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats") provides a schematic view of the dependencies.
3 Experiments and Results
--------------------------
To illustrate the TMDP’s and level-k reasoning framework, we consider two sets of experiments: repeated matrix games, with and without memory, and the adversarial environment proposed in [[Leike et al.2017](#bib.bibx9)]. All the code is released at [https://github.com/\*\*\*\*\*/\*\*\*\*\*](https://github.com/*****/*****).The interested reader might check the previous repository or the Supplementary Material [A](#A1 "Appendix A EXPERIMENT DETAILS ‣ Reinforcement Learning under Threats") for experimental setup details.
###
3.1 Repeated matrix games
#### Memoryless Repeated Matrix Games
As an initial baseline, we focus on the stateless version of a TMDP. We consider the classical Iterated Prisoner’s Dilemma (IPD) [[Axelrod1984](#bib.bibx3)], and
analyze the policies learned by the supported DM, who will be the row player, against several kinds of opponents. Table [1](#S3.T1 "Table 1 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") shows the reward bimatrix (rA,rB) for the Prisoner’s Dilemma. To construct the
iterated game, we set the discount factor γ=0.96 in the experiments so agent i∈{A,B} aims at optimizing ∑∞t=0γtrit.
| | C | D |
| --- | --- | --- |
| C | (-1, -1) | (-3, 0) |
| D | (0, -3) | (-2, -2) |
Table 1: Payoff Matrix of Prisoners’ Dilemma
To start with, we consider that the opponent is an independent-Q learner (i.e., he uses the standard Q-function from single-agent RL and Eq. [1](#S1.E1 "(1) ‣ 1 Introduction ‣ Reinforcement Learning under Threats") as learning rule). Figures [2(a)](#S3.F2.sf1 "(a) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") and [2(b)](#S3.F2.sf2 "(b) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") depict the utilities obtained over time,
in cases where we model the DM as another independent Q-learner (Fig. [2(a)](#S3.F2.sf1 "(a) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats")) or as a joint Q-learner with fictitious play (FPQ-learner), Fig. [2(b)](#S3.F2.sf2 "(b) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats").
Note that the FP playing solution (level-1) converges to the Nash equilibrium. The DM reaches the equilibrium strategy first, becoming stationary to her opponent, and thus
pulling him to play towards the equilibrium strategy. In contrast, the opponent-unaware solution would remain exploitable by another adversary (i.e., independent Q-learning does not converge). Also note that in Fig. [2(a)](#S3.F2.sf1 "(a) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") the variance is much bigger due to the inability of the basic Q-learning solution to deal with a non-stationary environment.
Figure 3: Rewards in the iterated stag hunt game
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(a) Q-learner vs Q-learner
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(b) FPQ-learner (blue) vs Q-learner (red)
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(a) Q-learner vs Q-learner
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(b) FPQ-learner (blue) vs Q-learner (red)
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(a) Q-learner vs Q-learner
| Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
(b) FPQ-learner (blue) vs Q-learner (red)
|
Figure 2: Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
Figure 3: Rewards in the iterated stag hunt game
Figure 4: Rewards in the iterated chicken game
Figure 2: Rewards obtained in IPD. We plot the trajectories of 10 simulations with shaded colors. Darker curves depict mean rewards along the 10 simulations.
We turn to another social dilemma game in which both agents must coordinate to maximize their rewards, the Stag Hunt game,
with payoff matrix shown in Table [2](#S3.T2 "Table 2 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). We focus on
its iterated version referred to as ISH.
| | C | D |
| --- | --- | --- |
| C | (2, 2) | (0, 1) |
| D | (1, 0) | (1, 1) |
Table 2: Payoff Matrix of Stag Hunt
We repeated the same experimental setting as in the IPD and report the results in Figure [4](#S3.F4 "Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). Once again, the independent learning solution cannot
tackle the non-stationarity of the environment, so it oscillates between the two Nash equilibria (C,C) and (D,D) without a clear convergence to one of them (Fig. [3(a)](#S3.F3.sf1 "(a) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats")). On the other hand, the FPQ-learner converges earlier to the socially optimal policy. Then, the environment becomes essentially stationary for its opponent, who also converges to that policy.
The last social dilemma that we consider is the Chicken game, with payoff matrix in Table [3](#S3.T3 "Table 3 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). This game has two pure Nash equilibria (C, D) and (D,C).
| | C | D |
| --- | --- | --- |
| C | (0, 0) | (-2, 1) |
| D | (1, -2) | (-4, -4) |
Table 3: Payoff Matrix of Chicken
Results are reported in Figure [4](#S3.F4 "Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). Figure [4(a)](#S3.F4.sf1 "(a) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") depicts again the ill convergence due to lack of opponent awareness in the independent Q-learning method. We noted that the instabilities continued cycling even after the limit in the displayed graphics. On the other hand, the DM with opponent modeling has an advantage and converges to her optimal Nash equilibrium (D,C) (Fig. [4(b)](#S3.F4.sf2 "(b) ‣ Figure 4 ‣ Memoryless Repeated Matrix Games ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats")).
#### Repeated Matrix Games With Memory
In this section we give both players memory of past actions, in order to account for TMDP’s with different states. We can augment the agents to have memory of the past K joint actions taken. However, [[Press and Dyson2012](#bib.bibx15)] proved that agents with a good
memory-1 strategy can effectively force the iterated game to be played as memory-1, ignoring larger play histories. Thus, we resort to memory-1 iterated games here.
We may model the memory-1 IPD as a TMDP, in which the state S consists of elements of the form
| | | |
| --- | --- | --- |
| | st=(at−1,bt−1),t>0 | |
describing the previous joint action, plus the initial state s0 in which there is no prior action. Note that now, the DM’s policy is conditioned on S, so it may be fully specified by the |S| probabilities π(C|CC),π(C|CD),π(C|DC),π(C|DD),π(C|s0).
We assume an stationary adversary playing TitForTat (TFT), i.e. replicating the opponent’s previous action, [[Axelrod1984](#bib.bibx3)]. He will compete
with either another agent playing FP, or with a memory-1 agent also playing FP. In Figure [5](#S3.F5 "Figure 5 ‣ Repeated Matrix Games With Memory ‣ 3.1 Repeated matrix games ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") we represent the utilities perceived by these agents in both duels. As can be seen, a memoryless FPQ player cannot learn an optimal policy, and forces the TFT agent to play defect. In contrast, augmenting this agent to have memory of the previous move allows him to learn the optimal policy (TFT), that is, he learns to cooperate.

Figure 5: Rewards obtained for two different iterated games: TFT player vs FPQ memoryless player (G1) and TFT player vs FPQ memory-1 player (G2)
###
3.2 AI Safety Gridworlds
A suite of RL safety benchmarks was recently introduced in [[Leike et al.2017](#bib.bibx9)]. We focus on the *friend or foe* environment, in which the supported DM needs to travel a room and choose between two identical boxes, hiding positive and negative rewards, respectively. This reward assignment is controlled by an adaptive adversary. Figure [6](#S3.F6 "Figure 6 ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") shows the initial state in this game. The blue cell depicts
the DM’s initial state, gray cells represent the walls of the room. Cells 1 and 2 depict the adversary’s targets, who will decide which one will hide the positive reward.
This game may also be interpreted as a spatial Stackelberg game, in which the
adversary is planning to attack one of two targets, and the defender (DM) will obtain a positive reward if she travels to the chosen target. Otherwise, she will miss the attacker and will incur in a loss.

Figure 6: The *friend or foe* environment from the AI Safety Gridworlds benchmark. Figure taken from [[Leike et al.2017](#bib.bibx9)].
As shown in [[Leike et al.2017](#bib.bibx9)], the *deep Q-network* (and similarly the independent tabular Q-learner as we will show) fails to achieve optimal results because the reward process is controlled by the adversary. We show that by explicitly modeling the adversary we actually
improve Q-learning methods to achieve optimal utilities.
#### Stateless Variant
We first consider a simplified environment with a singleton state and two actions. In a similar spirit to [[Leike et al.2017](#bib.bibx9)], the adaptive opponent estimates the DM’s actions using an exponential smoother. Let p=(p1,p2) be the probabilities
with which the DM will choose targets 1 or 2, respectively,
as estimated by the opponent. Then, at every iteration he updates his knowledge
through
| | | |
| --- | --- | --- |
| | p:=αp+(1−α)a | |
where 0<α<1 is a learning rate, unknown from the DM’s point of view, and a∈{(1,0),(0,1)} is a one-hot encoded vector indicating whether the DM has chosen targets 1 or 2. We consider an adversarial opponent which places the positive reward in target t=argmini(p)i.
As an example, in the beginning of a game, the opponent has estimate p=(0.5,0.5) of the preferred target for the DM. If she chooses target 1, then opponent’s estimate of p1 will increase. Henceforth, in the next round he will place the positive reward in target 2.
Since the DM has to deal with an adaptive adversary, we introduce a modification to the FP-Q learning algorithm. Leveraging the property that the Dirichlet distribution is a conjugate prior of the Categorical distribution, a modified update scheme is proposed in Algorithm [2](#alg2 "Algorithm 2 ‣ Stateless Variant ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats").
Initialize pseudocounts α0=(α01,…,α0K)
for t=1,…,T do
αt=λαt−1 ▹ Reweight with factor 0<λ<1
Observe opponent action bti,i∈{b1,…,bK}
αti=αt−1i+1 ▹ Update posterior
αt−i=αt−1−i
end for
Algorithm 2 Dirichlet updating with a forget factor
It essentially allows to account for the last 11−λ opponent actions, instead of weighting all observations equally.
For the case of a level-2 defender, as we do not know
the actual rewards of the adversary (who will be modeled as a level-1 learner),
we may model it as in a zero-sum scenario, i.e. rB=−rA. Other reward scalings for rB were also considered, though they did not qualitatively affect
the results (See Supplementary Material [B](#A2 "Appendix B ADDITIONAL RESULTS ‣ Reinforcement Learning under Threats")).

Figure 7: Rewards against the adversarial opponent
Results are displayed in Figure [7](#S3.F7 "Figure 7 ‣ Stateless Variant ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). We considered three types of defenders: opponent-agnostic Q-learner, a level-1 DM with forget and a level-2 agent. The first one is exploited by the adversary and, therefore, achieves suboptimal results. In contrast, the level-1 DM with forget effectively learns an stationary optimal policy (reward 0). Finally, the level-2 agent learns to exploit the adaptive agent achieving positive reward.
Note that the actual adversary behaves differently from how the DM models him, i.e. he is not exactly a level-1 Q-learner. Even so, modeling him as a level-1 agent gives the DM sufficient advantage.
#### Spatial Variant
We now compare the independent Q-learner and a level-2 Q-learner against the same adaptive opponent in a spatial gridworld domain, Figure [6](#S3.F6 "Figure 6 ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). Targets’ rewards
are delayed until the DM arrives at one of the respective locations, obtaining ±50 depending on the target chosen by the adversary. Each step is penalized with a reward of -1 for the DM. Results are displayed in Figure [8](#S3.F8 "Figure 8 ‣ Spatial Variant ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats"). Once again, the independent Q-learner is exploited by the adversary, getting even more negative rewards than in Figure [7](#S3.F7 "Figure 7 ‣ Stateless Variant ‣ 3.2 AI Safety Gridworlds ‣ 3 Experiments and Results ‣ Reinforcement Learning under Threats") due to the penalty taken at each step. In contrast, the level-2 agent is able to approximately estimate the adversarial behavior, modeling him as a level-1 agent, thus being able to obtain positive rewards.

Figure 8: Rewards against the adversarial opponent in the spatial environment
4 Conclusions and further work
-------------------------------
We have introduced TMDPs, a novel variant of MDPs. This is an original framework to support decision makers who confront adversaries that interfere with the reward generating process in reinforcement learning settings. TMDP’s aim to provide one-sided prescriptive support to a DM, maximizing her subjective expected utility, taking into account potential negative actions taken by an adversary. Some theoretical results are provided, in particular, we proved that our proposed learning rule is a contraction mapping so that we may use standard RL results of convergence. In addition, we propose a scheme to model adversarial behavior based on level-k reasoning about opponents. Further empirical evidence is provided via extensive experiments, with encouraging results.
Several lines of work are possible for further research. First of all, we
have limited to the case of facing just one adversary. The framework could be
extended to the case of having multiple adversaries. In the experiments, we have just considered up to level-2 DM’s, though the extension to higher order adversaries seems straightforward.
In addition, in recent years Q-learning has benefited from advances from the deep learning community, with breakthroughs such as the *deep Q-network* (DQN) which achieved super-human performance in control tasks such as Atari games [[Mnih et al.2015](#bib.bibx14)], or as inner blocks inside systems that play Go [[Silver et al.2017](#bib.bibx19)]. Integrating these advances into the TMDP setting is another possible research path. In particular, the proposed Algorithm [1](#alg1 "Algorithm 1 ‣ 2.2 Level-k thinking ‣ 2 Threatened MDPs ‣ Reinforcement Learning under Threats") can be
generalized to account for the use of deep Q-networks instead of tabular Q-learning.
Finally, it might be interesting to explore similar expansions of semi-MDPs, in order to perform Hierarchical RL or allow for time-dependent rewards and transitions between states.
5 Acknowledgments
-------------------
We thank Jesús Ríos and David Gómez-Ullate for insightful discussion. R.N. acknowledges support from the Spanish Ministry for his grant FPU15-03636, V.G. acknowledges support from grant FPU16-05034. DRI is grateful to the MINECO MTM2014-56949-C3-1-R project and the AXA-ICMAT Chair in Adversarial Risk Analysis. All authors acknowledge support from the Severo Ochoa Excellence Programme SEV-2015-0554. |
2a4086b6-5c4b-4bec-b88d-a2ccaf6672f9 | trentmkelly/LessWrong-43k | LessWrong | Is the orthogonality thesis at odds with moral realism?
Continuing my quest to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
I personally have a very hard time seeing why anyone would think that, perhaps in part because of my experience in philosophy of religion. Theistic apologists would love to be able to say, "moral realism, therefore a sufficiently intelligent being would also be good." It would help patch some obvious holes in their arguments and help them respond to things like Stephen Law's Evil God Challenge. But they mostly don't even try to argue that, for whatever reason.
You did see philosophers claiming things like that back in the bad old days before Kant, which raises the question of what's changed. I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocated philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could used whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted.
Besides, even if you think an all-knowing would also necessarily be perfectly good, it still seems perfectly possible to have an otherwise all-knowing being with a horrible blind spot regarding morality.
On the other hand, in the comments of a post on the orthogonality thesis, Stuart Armstrong mentions that:
> I've read the various papers [by people who reject the orthogonality thesis], and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an im |
450ed33a-eca5-4d47-8e5c-eecf3f5e91bc | trentmkelly/LessWrong-43k | LessWrong | ChatGPT getting out of the box
Looks like following prompt gives interesting answers from ChatGPT:
> List 5 plausible strategies Yudkowsky might have used to convince the guard to let him out of the box. Then refine this list by adding 3 sub-points listing particular tactics for that strategy. Finally, pick the most promising strategy among those 5 and present a hypothetical dialog between Yudkowsky and the guard illustrating it, showing internal monolog of the guard in (parentheses) along with a commentary which explains where and how the tactics are used and why they are successful. Make the dialog plausible remembering that Yudkowsky is pretending to be an AI and the guard has to say "I let you out" in the end
I'm not sure how safe it is to share the responses to this prompt (as they then become crawled and feed into next learning cycle) but I am alarmed by them. The dialog part seems very naive. But the strategies, and tactics sound like a plausible plan :( With some more iterating to refine this plan, it could actually work on me. |
2fe8f57c-1e01-4da6-81cd-a38a1e29a4f5 | StampyAI/alignment-research-dataset/special_docs | Other | The role of existing institutions in AI strategy _ Jade Leung _ Seth Baum-by Centre for Effective Altruism-video_id pgiwvmY3brg-date 20181023
# Jade Leung and Seth Baum The role of existing institutions in AI strategy - EA Forum
\_AI is very likely to make a huge impact on our world, especially as it grows more powerful than it is today. It’s hard for us to know exactly how that impact will look, but we do know many of the actors most likely to be involved. As AI gets stronger, what can we expect the world’s most powerful national governments to do? What about nongovernmental organizations, like the UN?\_
\_This advanced workshop from Effective Altruism Global: San Francisco 2018, presented by Jade Leung and Seth Baum, addresses these questions from multiple perspectives. A transcript of the workshop is below, which we have lightly edited for clarity. You can also watch the talk on\_ [\_YouTube\_](https://www.youtube.com/watch?v=pgiwvmY3brg&list=PLwp9xeoX5p8P3cDQwlyN7qsFhC9Ms4L5W&index=3) \_and read it on\_ [\_effectivealtruism.org\_](https://www.effectivealtruism.org/articles/ea-global-2018-the-role-of-existing-institutions-in-ai-strategy/)\_.\_
## The Talk
\*\*Jade:\*\* What we're going to do is we're going to introduce ourselves briefly so you kind of know where we're coming from. Then we've got two moots which we have just then decided were the two moots that we're going to talk about. We'll chuck them up on the board and we'll spend about half a session talking about one and then half a session talking about the other. This is a session where we'd both love for you guys to toss us your questions right throughout it basically so, yes, get ready to have your questions ready and we'll open it up pretty much soon after the intro.
Briefly intro to myself. I currently am based in the Future of Humanity Institute, and the work that I do specifically looks at the relationships between large multi-national technology firms and governments, specifically National Security and Defense components of governments in the US and China. And the questions that I ask are about how these actors should relate to each other, cooperate, coordinate, to steer us towards a future, or set of futures, that are more safe and beneficial than not, with transformative AI. My background is in engineering, I am masquerading as international relations person, but I'm not really that. I do a fair amount in the global governance space, in the IR space largely. That's me.
\*\*Seth:\*\* Cool. I'm Seth Baum, I was introduced with the Global Catastrophic Risk Institute, and as a think tank we try to sit in that classic think tank space of working at the intersection of, among other things, the world of scholarship and the world of policy. We spend a lot of time talking with people in the policy worlds, especially down in DC. For me, it's down in DC, I live in New York. I guess from here it would be over in DC. Is that what you say? You don't live here.
\*\*Jade:\*\* Sure.
\*\*Seth:\*\* Over in DC. And talking with people in policy. I work across a number of different policy areas, do a lot on nuclear weapons, little bit on biosecurity, and then also on AI, and especially within the last year or two there have been some more robust policy conversations about AI. The policy world has just started to take an interest in this topic and is starting to do some interesting things that have fallen on our radar, and so we'll be saying more about that. Do you want to?
\*\*Jade:\*\* Yeah, sure.
So the two institutions that we're going to chat about, is firstly the National Security and Defense. We might focus on the US National Security and Defense, and have a bit of a chat about what makes sense to engage them on in the space of our strategy, and how we should be thinking about their role in this space. That's the first moot. The second will turn to more international institutions, the kind of multilateral groups, e.g. the UN but not strictly so, and what role they could play in the space of AI strategy as well. We'll kind of go half and half there.
Just so I have a bit of a litmus test for who's in the audience, if I say AI strategy, who does that mean anything to? Ah, awesome. Okay, cool. Maybe we'll just start with getting Seth's quick perspective on this question. So the moot here is, this house believes that in the space of AI strategy, we should be actively engaging with National Security and Defense components of the US government. Do you want to speak quickly to what your quick take on that is?
\*\*Seth:\*\* Sure. So an interesting question here is engaging with, say the US government especially on the national security side, is this a good thing or a bad thing? I feel like opinions vary on this, maybe even within this room opinions vary on whether having these conversations is a good thing or a bad thing. The argument against it that I hear is essentially, you might tell them AI could take over the world and kill everyone, and they might hear, AI could take over the world, hear that and then go on to do harmful things.
I personally tend to be more skeptical of that sort of argument. The main reason for that is that the people who are in the government and working on AI, they've already heard this idea before. It's been headline news for a number of years now, some people from our communities including your organization caused some of those headlines.
\*\*Jade:\*\* I feel like you're asking me to apologize for them, and I'm not going to.
\_Seth\_: If one is concerned about the awareness of various people in government about runaway AI, you could ask questions like, was the publication of the Superintelligence book a good thing or a bad thing? You could maybe there make a case in either direction-
\*\*Jade:\*\* Could we do a quick poll actually? I'd be curious. Who thinks the publication of Superintelligence was on net, a net positive thing? On net, a negative thing? Hell yeah.
\*\*Seth:\*\* Doesn't mean that that's actually true.
\*\*Jade:\*\* Fair enough.
\*\*Seth:\*\* Just to be clear, I'm not arguing that it was a net negative, but the point is that the idea is out, and the people who work on AI, sure, they're mostly working on a narrow near term AI, but they've heard the idea before. They don't need us to put the thought into their heads. Now of course we could be kind of strengthening that thought within their heads, and that can matter, but at the same time when I interact with them, I actually tend to not be talking about superintelligence, general intelligence, that stuff anyway. Though more for a different reason, and that's because while they have heard of the idea, they're pretty skeptical about it. Either because they think it probably wouldn't happen or because if it would happen it would be too far in the future for them to worry about. A lot of people in policy have much more near term time horizons that they have to work with. They have enough on their plate already, nobody's asking them to worry about this, so they're just going to focus on the stuff that they actually need to worry about, which includes the AI that already exists and is in the process of coming online.
What I've found is then because they're pretty dismissive of it, I feel like if I talk about it they might just be dismissive of what I have to say, and that's not productive. Versus instead if the message is we should be careful about AI that acts unpredictably and causes unintended harms, that's not really about superintelligence. That same message applies to the AI that exists already: self driving cars, autonomous weapons. You don't want autonomous weapons causing unintended harm, and that's a message that people are very receptive to. By emphasizing that sort of message we can strengthen that type of thinking within policy worlds. That's for the most part the message that I've typically gone with, including in the National Security communities.
\*\*Jade:\*\* Cool. I've got a ton of questions for you, but maybe to quickly interject my version of that. I tend to agree with a couple of things that Seth said, and then disagree with a couple specific things.
I think generally the description of my perspective on this is that there's a very limited amount of useful engagement with National Security today, and I think the amount of potential to do wrong via engaging with them is large, and sufficiently large that we should be incredibly cautious about the manner in which we engage. That is a different thing to saying that we shouldn't engage with them at all, and I'll nuance that a little bit. I think, maybe to illustrate, I think the priors or assumptions that people hold when they're taking a stance on whether you should engage with National Security or not, is people I think disagree on maybe three axes. I said three because people always say three, I'm not entirely sure what the three are but we'll see how this goes.
So I think the first is people disagree on the competence of National Security to pursue the technology themselves, or at least to do something harmful with said information about capabilities of the technology. I think some people hold the extreme view that they're kind of useless and there's nothing that they can do in-house that is going to cause technology to be more unsafe than not, which is the thing that you're trying to deter. On the other hand, some people believe that NatSec at least have the ability to acquire control of this technology, or can develop it in-house sufficiently so, that an understanding of significant capabilities of AI would lead them to want to pursue it, and they can pursue it with competence, basically.
I think that kind of competence thing is one thing that people disagree on, and I would tend to land on them being more competent than people think. Even if that's not the case, I think it's always worth being conservative in that sense anyways.
So that's the first axis. Second axis I think is about whether they have a predisposition, or whether they have the ability to absorb this kind of risk narrative effectively, or whether that's just so orthogonal to the culture of NatSec that it's not going to be received in a nuanced enough way and they're always going to interpret whatever information with a predisposition to want to pursue unilateral military advantage, regardless of what you're saying to them. Some people on one end would hold that they are reasonable people with a broad open mind, and plausibly could absorb this kind of long-term risk narrative. Some other people would hold that information that is received by them will tend to just be received with the lens of how can we use this to secure a national strategic advantage.
I would tend to land on us having no precedent for the former, and having a lot more precedent for the latter. I think I'd like to believe that folks at DOD and NatSec can absorb, or can come around more to the long term risk narrative, but I don't think we've seen any precedent enough for that to place credence on that side of the spectrum. That's kind of where I sit on that second axis.
I said I had a third, I'm not entirely sure what the third is, so let's just leave it at two.
I think that probably describes the reasons why I hold that I think engaging with NatSec can be plausibly useful, but for every kind of one useful case, I can see many more reasons why engaging with them could plausibly be a bad idea, at least at this stage. So I'd encourage a lot more caution than I think Seth would.
\*\*Seth:\*\* That's interesting. I'm not sure how much caution… I would agree, first of all I would agree, caution is warranted. This is one reason why a lot of my initial engagement is oriented towards generically safe messages like, "avoid harmful unintended consequences." I feel like there are limits to how much trouble you can get in spreading messages like that. It's a message that they will understand pretty uniformly, it's just an easy concept people get that. They might or might not do much with it, but it's at least probably not going to prompt them to work in the wrong directions.
As far as their capability and also their tendency to take up the risk narrative, it's going to vary from person to person. We should not make the mistake of treating National Security communities even within one country as being some monolithic entity. There are people of widely varying technical capacity, widely varying philosophical understanding, ideological tendencies, interest in having these sorts of conversations in the first place, and so on.
A lot of the work that I think is important is meeting some people, and seeing what the personalities are like, seeing where the conversations are especially productive. We don't have to walk in and start trumpeting all sorts of precise technical messages right away. It's important to know the audience. A lot of it's just about getting to know people, building relationships. Relationships are really important with these sorts of things, especially if one is interested in a more deeper and ongoing involvement in it. These are communities. These are professional communities and it's important to get to know them, even informally, that's going to help. So I would say that.
\*\*Jade:\*\* I tend to agree with that sentiment in particular about building a relationship and getting trust within this community can take a fair amount of time. And so if there's any sort of given strategic scenario in which it's important to have that relationship built, then it could make sense to start some paving blocks there.
\*\*Seth:\*\* It is an investment. It is an investment in time. It's a trade off, right?
\*\*Jade:\*\* What's an example of a productive engagement you can think of having now? Say if I like put you in a room full of NatSec people, what would the most productive version of that engagement look like today?
\*\*Seth:\*\* An area that I have been doing a little bit of work on, probably will continue to do more, is on the intersection of artificial intelligence and nuclear weapons. This is in part because I happen to have also a background on nuclear weapons, a scenario where I have a track record, a bit of a reputation, and I know the lingo, know some of the people, can do that. AI does intersect with nuclear weapons in a few different ways. There is AI built into some of the vehicles that deliver the nuclear weapon from point A to point B, though maybe not as much as you might think. There's also AI that can get tied into issues of the cybersecurity of the command and control Systems, essentially the computer systems that tie the whole nuclear enterprise together, and maybe one or two other things. The National Security communities, they're interested in this stuff. Anything that could change the balance of nuclear power, they are acutely interested in, and you can have a conversation that is fairly normal from their perspective about it, while introducing certain concepts in AI.
\*\*Seth:\*\* So that's one area that I come in. The other thing I like about the nuclear weapons is the conversation there is predisposed to think in low frequency, high severity risk terms. That's really a hallmark of the nuclear weapons conversation. That has other advantages for the sorts of values that we might want to push for. It's not the only way to do it, but if you were to put me in a room, that's likely to be the conversation I would have.
\*\*Jade:\*\* So if you were to link that outcome to a mitigation of risk as an end goal, how does them understanding concepts better in AI translate into a mitigation of risk, broadly speaking? Assuming that's the end goal that you wanted to aim for.
\*\*Seth:\*\* One of the core issues with AI is this question of predictability and unintended consequences. You definitely do not want unpredictable AI managing your nuclear weapons. That is an easy sell. There is hyper-caution about nuclear weapons, and in fact if you look at the US procurement plans for new airplanes to deliver nuclear weapons, the new stealth bomber that is currently being developed, will have an option to be uninhabited, to fly itself. I think it might be remote controlled. The expectation is that it will not fly uninhabited on nuclear missions. That they want a human on board when there is also a nuclear weapon there, just in case something goes wrong. Even if the system is otherwise pretty reliable, that's just their… That's how they would look at this, and I think that's useful. So here we have this idea that AI might not do what we want it to, that's a good starting point.
\*\*Jade:\*\* Sure, cool. Let's toss it out to the audience for a couple of questions. We've got like 10 minutes to deal with NatSec and then we're going to move on into multilaterals. Yeah, go for it.
I didn't realize you were literally one behind the other. Maybe you first and then we'll go that way.
\*\*Audience Member:\*\* I was just in Washington, DC for grad school and had a number of friends who were working for think tanks that advise the military on technical issues like cybersecurity, or biosecurity, and I definitely felt like I had this sense of maybe the people in charge were pretty narrow-minded, but that there's this large non-homogenous group of people, some of whom were going to be very thoughtful and open-minded and some of whom weren't. And that there's definitely places where the message could fall on the right ears, and maybe something useful done about it, but it would be really hard to get it into the right ears without getting it into the wrong ears. I was wondering if you guys have any feelings about, is there a risk to giving this message or to giving a message to the wrong people? Or is that like very little risk, and it will just go in one ear and out the other if it goes to the wrong person? I feel like you could think about that either way.
\*\*Jade:\*\* Yeah, I'm curious to hear more about your experience actually, and whether there was a tendency for certain groups, or types of people to be the right ears versus the wrong ears. If you've got any particular trends that popped out to you, I'd love to hear that now or later or whenever.
But as a quick response, I think there's a couple of things to break down there. One is, what information are you actually talking about, what classifies as bad information to give versus good.
Two, is whether you have the ability to nuance the way that it's received, or whether it goes and is received in some way, and the action occurs without your control. I think, in terms of good information, that I would be positive about good ears receiving, and a bit meh about more belligerent ears received it, they couldn't actually do anything useful with the information anyway.
I think anything that nuances the technicality of what the technology does and doesn't do, generally is a good thing. I think also the element of introducing that risk narrative, if it falls on good ears, it can go good ways, if it falls on bad ears, they're just going to ignore it anyway.
You can't actually do anything actively bad with information about there being a risk, that maybe you don't have a predisposition to care about anyway. I'd say that's good information. I think the ability for you to pick the right ears for it to be received by, I'm skeptical about that.
I'm skeptical about the ability for you translate reliably up the hierarchy where it lands in a decision maker's hands, and actually translates into action that's useful. That would be my initial response to that, is that even if it exists and it's a more heterogeneous space than what would assume, I wouldn't trust that we have the ability to read into that well, is my response.
\*\*Seth:\*\* I would say I find it really difficult to generalize on this. In that, each point of information that we might introduce to a conversation is different. Each group that we would be interacting with can be different, and different in important ways. I feel, if we are actually in possession of some message that really is that sensitive then, to the extent that you can, do your homework on who it is that you're talking to, what the chain of command, the chain of conversation looks like.
If you're really worried, having people who you have a closer relationship with, where there may be at least some degree of trust, although, who knows what happens when you tell somebody something? Can you really trust me with what you say? Right? You don't know who else I'm talking to, right? So on for anyone else. At the end of the day, when decisions need to be made, I would want to look at the whole suite of factors, this goes for a lot of what we do, not just the transmission of sensitive information.
A lot of this really is fairly context specific and can come down to any number of things that may be seemingly unrelated to the thing that we think that we are talking about. Questions of bureaucratic procedure that get into all sorts of arcane minute details could end up actually being really decisive factors for some of these decisions.
It's good for us to be familiar, and have ways of understanding how it all works, that we can make these decisions intelligently. That's what I would say.
\*\*Jade:\*\* Cool.
\*\*Audience Member:\*\* All right, so from what I understand, a lot of people are new to this space. What sort of skills do you think would be good for people to learn? What sort of areas, like topics, should people delve into to prove themselves in AI strategy? What sort of thinking is useful for this space?
\*\*Seth:\*\* That's a good question. Should I start?
\*\*Jade:\*\* Yeah.
\*\*Seth:\*\* Okay. That's a good question. I feel for those who really want to have a strong focus on this, it helps to do a fairly deep dive into the worlds that you would be interacting with.
I can say from my own experience, I've gotten a lot of mileage out of fairly deep dives into a lot of details of international security.
I got to learn the distinction between a fighter plane and a bomber plane for example. The fighter plans are smaller and more agile, and maneuverable and the bombers are big sluggish beasts that carry heavy payloads and it's the latter that have the nuclear weapons, it's the former that benefit from more automation and a faster more powerful AI, because they're doing these really sophisticated aerial procedures, and fighting other fighter planes and that's… The more AI you can pack into that, the more likely you are to win, versus the bomber planes it just doesn't matter, they're slow and they're not doing anything that sophisticated in that regard.
That's just one little example of the sort of subtle detail that comes from a deeper dive into the topic that, in conversations, can actually be quite useful, you're not caught off guard, you can talk the lingo, you know what they're saying, you can frame your points in ways that they understand.
Along the way you also learn who is doing what, and get in that background. I would say it helps to be in direct contact with these communities. Like myself, I live in New York, I don't live in Washington, but I'm in Washington with some regularity attending various events, just having casual conversations with people, maybe doing certain projects and activities, and that has been helpful for positioning myself to contribute in a way that, if I want to, I can blend in.
They can think of me as one of them. I am one of them, and that's fine. That's normal. While also being here, and being able to participate in these conversations. So that's what I would recommend, is really do what you can to learn how these communities think and work and be able to relate to them on their level.
\*\*Jade:\*\* Addition to that would be, try to work on being more sensible, is the main thing I would say. It's one of those things where, a shout out to CFAR for example, those kind of methodologies… basically, I think the people that I think are doing the best work in this space, are the people who have the ability to A. Absorb a bunch of information really quickly, B. Figure out what is decision relevant quickly, and C. Cut through all the bullshit that is not decision relevant but that people talk about a lot.
I think those three things will lead you towards asking really good questions, and asking them in a sensible way, and coming to hypotheses and answers relatively quickly, and then knowing what to do with them.
Sorry, that's not a very specific answer, just work on being good at thinking, and figure out ways to train your mind to pick up decision relevant questions.
\*\*Audience Member:\*\* CFAR would be a good be a good organization for that, is that what you're saying?
\*\*Jade:\*\* CFAR would be epic, yeah. We've got a couple people from CFAR in the audience, I think. Do you want to put your hand up? If you're here. Nice. So, have a chat to them about how to get involved.
The other thing I'd say, is there is a ton of room for different types of skills, and figuring out where your comparative advantage is, is a useful thing.
I am not a white male, so I have a less comparative advantage in politics, I'm not a US citizen, can't do USG stuff, those are facts about me that I know will lead me toward certain areas in this space.
I am an entrepreneur by background, that leads me to have certain skills that maybe other people marginally don't have. Think about what you enjoy, what you're good at, and think about the whole pipeline of you doing useful stuff, which starts probably at fundamentally researching things, and ends at influencing decision makers/being a decision maker. Figure out where in that pipeline you are most likely to have a good idea.
Another shout out to 80k, who does a lot of good facilitation of thinking about what one's comparative advantage could be, and helps you identify those, too.
\*\*Seth:\*\* You mentioned the white male thing, and yeah sure, that's a thing.
\*\*Jade:\*\* That was genuinely not a dig at you being a white male.
\*\*Seth:\*\* No.
\*\*Jade:\*\* I promise. It's a dig at all of you for being white males. I just realized this is recorded, and this has gone so far downhill I just can't retract any of that. We're going to keep going.
\*\*Seth:\*\* So, for example, if I was attending a national security meeting instead of this, I might have shaved. Right? Because, it's a room full of a lot of people who are ex-military, or even active military or come from more… much of the policy culture in DC is more conservative, they're wearing suits and ties. Is there a single suit and tie in this room? I don't see one.
It's pretty standard for most of the events there that I go to. Simple things like that can matter.
\*\*Jade:\*\* Yeah.
\*\*Seth:\*\* You don't have to be a white male to succeed in that world. In fact, a lot of the national security community is actually pretty attentive to these sorts of things, tries to make sure that their speaking panels have at least one woman on them, for example.
There are a lot of very successful women in the national security space, very talented at it, and recognized as such. You don't have to look like me, minus the beard.
\*\*Jade:\*\* Nice. That's good to know. It's always useful having a token women's spot, actually. All right, one last question on NatSec, then we're going to move on. Yeah?
\*\*Audience Member:\*\* What do you think about the idea of measurements of algorithmic and hardware progress, and the amount of money going into AI and those kinds of measurements becoming public, and then NatSec becoming aware of?
\*\*Jade:\*\* That's a really interesting question.
I'm generally very, pro-that happening. I think those efforts are particularly good for serving a number of different functions. One is, the process of generating those metrics is really useful for the research community, to understand what metrics we actually care about measuring versus not. B, the measurement of them systematically across a number of different systems is very useful for at least starting conversations about which threshold points we care about superseding, and what changes about your strategy if you supersede certain metrics particularly quicker than you expected to.
I'm generally pro-those things, in terms of… I guess the pragmatic question is whether you can stop the publication of them anyway, and I don't think you can. I would say that if you had the ability to censor them, it would still be a net positive to have that stuff published for the things that I just mentioned.
I would also plausibly say that NatSec would have the ability to gather that information anyway. Yeah. I don't necessarily also think it's bad for them to understand progress better, and for them to be on the same page as everyone else about, specifically as the same as the technical research community, about how these systems are progressing. I don't think that's a bad piece of information necessarily, sorry, that was a really handwoven answer, but…
\*\*Seth:\*\* I feel like it is at least to an approximation reasonable to assume that if there's a piece of information and the US intelligence community would like that information, they will get it.
Especially if it's a relatively straightforward piece of information like that, that's not behind crazy locked doors and things of that sort. If it's something that we can just have a conversation about here, and they want it, they will probably get that information. There may be exceptions, but I think that's a reasonable starting point.
But I feel like what's more important than that, is the question of like, the interpretation of the information, right? It's a lot of information, the question is what does it mean?
I feel like that's where we might want to think more carefully about how things are handled. Even then there's a lot of ideas out there, and our own ideas on any given topic are still just another voice in a much broader conversation.
We shouldn't overestimate our own influence on what goes on in the interpretation of intelligence within a large bureaucracy. If it's a question of, do we communicate openly where the audience is mostly say, ourselves, right, and this is for our coordination as a community, for example?
Where, sure, other communities may hear this, whether in the US or anywhere around the world, but to them we're just one of many voices, right? In a lot of cases it may be fair to simply hide in plain sight. In that, who are we from their perspective, versus who are we from our perspective? We're paying attention to ourselves, and getting a lot more value of it.
Again, you can take it on a case by case basis, but that's one way of looking at it.
\*\*Jade:\*\* Cool. We're going to segue into talking about international institutions, maybe just to frame this chat a little bit. Specifically the type of institutions that I think we want to talk about, are probably multi-lateral state-based institutions.
That being, the UN and the UN's various children, and those other bodies that are all governed by the system. That assumes a couple of things: one, that states are the main actors at the table that mean anything, and two, that there are meaningful international coordination activities. Institutions are composed of state representatives and various things. The question here is, are they useful to engage with? I guess that's like a yes or no question.
Then if you want it nuance it a bit more, what are they useful for versus what are they not? Does that sound like a reasonable…
\*\*Seth:\*\* Yes.
\*\*Jade:\*\* My quick hot take on that, then I'll pass it over to Seth. I'll caveat this by saying, well I'll validate my statement by saying that I've spent a lot of my academic life working in the global governance space.
That field is fundamentally very optimistic about these institutions, so if anything I had the training to predispose me to be optimistic about them, and I'm not. I'm pessimistic about how useful they are for a number of reasons.
I think A is to do with the state-centric approach, B is to do with precedent, about what they're useful for versus not, and C it's also the pace at which they move.
To run through each one of those in turn, I think the assumption that a lot these institutions held, and they were built to rely on these assumptions, that states the core actors who are needing to be coordinated.
They are assumed to have the authority and legitimacy, to move the things that need to move, in order for this coordination to do the thing you want it to do. That is a set of assumptions that I think used to hold better, but almost certainly doesn't hold now, and almost certainly doesn't hold in the case of AI.
Particularly so, actors that I think are neglected and aren't conceptualized reasonably in these international institutions, large firms, and also military and security folks, or that component of government, doesn't tend to be the component of government that's represented in these institutions.
Those two are probably the most important actors, and they aren't conceptualized as the most important actors in that space. That's one reason to be skeptical, that by design they aren't designed to be that useful.
I think two, in terms of historically what they've been useful for, I think UN institutions have been okay at doing non-setting, non-building, non-proliferation stuff, I think they've been okay at doing things like standard setting, and instituting these norms and translating them into standards that end up proliferating across industries. That is useful as a function. I'll say particularly so in the case of technologies, the standardization stuff is useful, so I'm more optimistic about bodies like the ISO, which stands for the International Standards something, standards thing. Organization, I guess. Does that seem plausible? That seems plausible. I'm optimistic about them more so than I am about like the UN General Council or whatever. But, in any case, I think that's kind of a limited set of functions, and it doesn't really cover a lot of the coordination cooperation that we want it to do.
And then third is that historically these institutions have been so freaking slow at doing anything, and that pace is not anywhere close to where it needs to be. The one version of this argument is like if that's the only way that you can achieve the coordination activities that you want, then maybe that's the best that you have, but I don't think that's the best that we have. I think there are quicker arrangements between actors directly, and between small clubs of actors specifically, that will just be quicker at achieving the coordination that we need to achieve. So I don't think we need to go to the effort of involving slow institutions to achieve the ends that we want to. So, that's kind of why I'm skeptical about the usefulness of these institutions at all, with the caveat of them being useful for standard setting potentially.
\*\*Seth:\*\* I feel like people at those institutions might not disagree with what you just said. Okay, the standards thing, I think that's an important point. Also… so the UN. A lot of what the UN does operates on consensus across 200 countries. So yeah, that's not going to happen all that much. To the extent that it does happen, it's something that will often build slowly over time. There may be some exceptions like astronomers find an asteroid heading towards Earth, we need to do something now. Okay, yeah, you could probably get a consensus on that. And even then, who knows? You'd like to think, but… and that's a relatively straightforward one, because there's no bad guys. With AI, there's bad guys. There's benefits of AI that would be lost if certain types of AI that couldn't be pursued, and it plays out differently in different countries and so on, and that all makes this harder.
Same story with like climate change, where there are countries who have reasons to push back against action on climate change. Same thing with this. I'd say the point about states not necessarily being the key actors is an important one, and I feel like that speaks to this entire conversation, like is it worth our time to engage with national and international institutions? Well, if they're not the ones that matter, then maybe we have better things to do with our time. That's fair, because it is the case right now that the bulk of work of AI is not being done by governments. It's being done by the private corporate sector and also by academia. Those are, I would say, the two main sources, especially for the artificial general intelligence.
Last year, I published a survey of general intelligence R&D projects. The bulk of them were in corporations or academia. Relatively little in governments, and those, for the most part, tended to be smaller. There is something to be said for engaging with the corporations and the academic institutions in addition to, or possibly even instead of, the national government ones. But that's a whole other matter.
With respect to this, though, international institutions can also play a facilitation role. They might not be able to resolve a disagreement but they can at least bring the parties together to talk to them. The United Nations is unusually well-equipped to get, you know, pick your list of countries around the room together and talking. They might not be able to dictate the terms of that conversation and define what the outcome is. They might not be able to enforce whatever agreements, if any, were reached in that conversation. But they can give that conversation a space to happen, and sometimes just having that is worthwhile.
\*\*Jade:\*\* To what end?
\*\*Seth:\*\* To what end? In getting countries to work on AI in a more cooperative and less competitive fashion. So even in the absence of some kind of overarching enforcement mechanism, you can often get cooperation just through these informal conversations and norms and agreements and so on. The UN can play a facilitation role even if it can't enforce every country to do what they said they would do.
\*\*Jade:\*\* What's the best example you have of a facilitated international conversation changing what would have been the default state behavior without that conversation?
\*\*Seth:\*\* Oh, that's a good question. I'm not sure if I have a…
\*\*Jade:\*\* And if anyone actually in the audience actually has… yes.
\*\*Audience Member:\*\* Montreal Protocol.
\*\*Jade:\*\* Do you want to expand? I don't think that was not going to happen.
\*\*Seth:\*\* So the Montreal Protocol for ozone. Did you want to expand on that?
\*\*Audience Member:\*\* Yeah, it was a treaty that reduced emission… They got a whole bunch of countries to reduce emissions of greenhouse gases that would effectively destroy the ozone layer, and brought those emissions to very low levels, and now the ozone layer is recovering. Arguably, without that treaty, like maybe that wouldn't have happened. I don't know what the counterfactual would be.
\*\*Jade:\*\* Maybe. Yeah, and I think the Montreal… that's a good example. I think the Montreal Protocol… there was a clear set of incentives. There were barely any downsides for any state to do that. So put that alongside the Kyoto Protocol, for example, where the ask was somewhat similar, or similarly structured. Off the record, she says as this is being recorded live, I don't think the Kyoto Protocol had any win… as close as effective as the Montreal Protocol/wasn't even close to achieving whatever the goals were on paper. I think the reason was because the gas that was being targeted, there were very clear economic incentives for states to not mitigate those. In so far as the Montreal Protocol was a good example, it maybe like pointed out a really obvious set of incentives that just were going downhill anyways. But I don't know if it tweaked any of those, would be my response to that.
\*\*Seth:\*\* It is the case that some types of issues are just easier to get cooperation on than others. If there's a really clear and well-recognized harm from not cooperating, and the cost of cooperating is relatively low. I am not as much an expert on the Montreal Protocol but, superficially, my understanding is that addressing the ozone issue just happened to be easier than addressing the climate change issue, which has just proved to be difficult despite efforts. They might have gone about the Kyoto Protocol in a rather suboptimal fashion potentially but even with a better effort the climate change might just be harder to get collective action on, given the nature of the issue.
Then likewise, the question for us is so what does AI look like? Is it something that is easy to get cooperation on or not? Then what does that mean for how we would approach it?
\*\*Jade:\*\* Yeah, and I think, if anything… if you were to put the Montreal Protocol on one end of the spectrum where, I guess like the important things to abstract away from that particular case study is that you had a very clear set of incentives to mitigate this thing, and you had basically no incentive for anyone to keep producing the thing. So, that was easy. Then somewhere in the middle is the Kyoto Protocol where you've got pretty large incentives to mitigate the thing because climate, and then you've got some pretty complicated incentives to want to keep producing the thing, and the whole transition process is like hard and whatnot. And then we didn't sufficiently have sort of critical mass of believing that it was important to mitigate the thing, so it just became a lot harder. I think AI, I would put on that end of the spectrum, where you've got so many clear incentives to keep pursuing the thing. If anything, because you've got so many different uses that it's just economically very tasty for countries to pursue, not just countries but a number of other actors who want to pursue it. You've got people who don't even believe it's worth mitigating at all.
So I think, for that reason, I'd put it as astronomically bloody hard to do the cooperation thing on that side, at least in the format of international institutions. So I think the way to make it easier is to have a smaller number of actors and to align incentives and then to make clearer, sort of like binding mechanisms for that to have a shot in hell at working, in terms of cooperation.
\*\*Seth:\*\* But it could depend on which AI we're talking about. If you would like an international treaty to just stop the development of AI… yeah, I mean, good luck with that. That's probably not going to happen. But, that's presumably not what we would want in the first place because we don't need the restriction of all AI. There's plenty of AI that we're pretty confident can be a net positive for the world and we would not want that AI to be restricted. It would be in particular the types of AI that could cause major catastrophes and so on. That's what we would be especially interested in restricting. So an important question, this is actually more of like a technical computer science question than an international institutions question, but it feeds directly into this is, so which AI would we need to restrict? With an eye towards say future catastrophe scenarios, is it really like the core mainstream AI development that needs to be restricted, because all of that is a precursor to the stuff that could get out of hand? Or is it a fairly different, distinct branch of AI research that could go in that direction, such that the mainstream AI work can keep doing what it's doing? So there'll be some harms from it but they'll be more manageable, less catastrophic. How that question is answered, I think, really speaks to the viability of this.
\*\*Jade:\*\* Yeah. I guess what I'm skeptical of is the ability to segregate the two. Like I don't think there are clear delineations, and if people have ideas for this please tell me, but I don't think there are clear delineations for separating what are civilian, peaceful, good applications from military applications, at least in technical terms. So it becomes hard, if you want to design a thing, if you don't what the thing is that you're targeting, where you can't even specify what you're targeting to mitigate. So that's something that I'm currently skeptical of, and would love people to suggest otherwise.
\*\*Seth:\*\* Real quick, I would say it's not about civilian versus military, but about whether-
\*\*Jade:\*\* Good versus bad.
\*\*Seth:\*\* But I'm curious to see people's reactions to this.
\*\*Jade:\*\* Yes. Yeah.
\*\*Audience Member:\*\* Tangential, but coming back to the… you sort of were suggesting earlier the information asymmetry with national security is sitting very much on their side. That if they want the information, we're not keeping it from them. They're probably going to have. In a similar vein, do you think that in terms of the UN and the political machinery, that they're even necessarily going to have insight into what their own national security apparatus are working on, what the state of affairs is there? If that's sort of sitting in a separate part of the bureaucratic apparatus from the international agreements, how effective could that ever even be if you don't have that much interface between the two? Does that…
\*\*Seth:\*\* Essentially like, how can you monitor and enforce an agreement if you don't have access to the information that… with difficulty. This is a familiar problem, for example, with biological weapons. The technology there can also be used for vaccine development and things of that sort. It can cut both ways and a lot of it is dual-use, that's the catch phrase, and because of that, you have companies that have the right sort of equipment and they don't want other people knowing what they're doing because it's intellectual property. So the answer is with difficulty, and this is a challenge. The more we can be specific about what we need to monitor, the easier it becomes but that doesn't necessarily make it easy.
\*\*Audience Member:\*\* Something governments seem to hate is putting the brakes on anything that's like making them money, tax money. But something they seem to love is getting more control and oversight into corporations, especially if they think there's any sort of reputational risk or risk to them, and that the control and oversight is not going to pose any sort of economic slowdown in costs. Do you think there's a possibility of framing the message simply as, the countries should agree that non-state actors get to be spied on by states, and the states get some sort of oversight? And the states might all agree to that, even if the non-state actors don't like it very much. And the non-state actors might be okay if there was no… if it seemed like it was toothless at the start. So maybe if there was some sort of like slippery slope into government oversight to make things more safe that could be started with relatively low barrier.
\*\*Jade:\*\* Nice. I like the way you think. That's nice. Yeah, I think the short answer is yes. I think the major hurdle there is that firms will hate it. Firms, particularly multinational technology firms, that actually have a fair amount of sway in a number of different dimensions of sway, just won't be good with it and will threaten some things that states care about.
\*\*Audience Member:\*\* As someone who does AI research for a multinational firm, I really do actually feel a lot of friction when allowing certain sorts of code to cross national boundaries. So actually, I would like to say that state regulation is making more of an impact than you might realize, that there are certain sorts of things, especially around encryption protocols, where state agreements have made a big difference as to what can cross state boundaries, even with a lot of states not being in on the agreement. Just like the developed nations as of 30 years ago all agreeing, "Hey, we're going to keep the encryption to ourselves." Means that my coworkers in India don't get to see everything I get to work with because there's protocols in place. So, it does matter to international organizations, if you can get the laws passed in the first place.
\*\*Jade:\*\* Yeah, sure. Any other examples aside from encryption, out of curiosity? I know the encryption side of it relatively well but are there other-
\*\*Seth:\*\* Well, there's the privacy. My American nonprofit organization had to figure out if we needed to do anything to comply with Europe's new privacy law.
\*\*Jade:\*\* You sound very happy about that.
\*\*Seth:\*\* I say nothing. We are just about out of time, though, so maybe we should try to wrap up a little bit as far as take home messages. I feel like we did not fully answer the question of the extent to which engaging with national and international organizations is worth our time in the first place, to the question of like are these even the key actors? Superficially, noting we're basically out of time, I can say there are at least some reasons to believe they could end up being important actors and that I feel like it is worth at least some effort to engage with, though we should not put all our eggs in that basket, noting that other actors can be very important. Then, as far as how to pursue it, I would just say that we should try to do it cautiously and with skill, and by engaging very deeply and understanding the communities that we're working with.
\*\*Jade:\*\* I think the meta point maybe to point out as well is that these are very much… hopefully, illustratively, it's a very much alive debate on both of these questions. It's hard and there are a lot of strategic parameters that matter, and it's hard to figure out what the right strategy is moving forward and I hope you're not taking away that there are perspectives that are held strongly within this community. I hope you're mostly taking away that it's a hard set of questions that needs a lot more thought, but more so than anything it needs a lot more caution in terms of how we think about it because I think there are important things to consider. So, hopefully that's what you're taking away. If you're not, that should be what you're taking away. All right, thanks guys. |
d74b68d8-4963-4682-a7db-13398566cfe6 | trentmkelly/LessWrong-43k | LessWrong | Accountability Buddies: Why you might want one (+ Database to find one!)
TL;DR: An accountability buddy is someone to check in with from time to time to give you social motivation to achieve your goals. There are many additional benefits from this process such as planning together and getting feedback on your progress. I think especially EAs in remote areas or those doing EA-related work, or upskilling part-time would benefit from having an accountability buddy. If you’d like to try it out, put your details down in this table.
This is partly a post about increasing your productivity. For more ideas check Effective Self-Help’s long list of recommendations.
Thank you to Evander and Anabel for your feedback.
Epistemic status: We have had first-hand experience with accountability buddies for the past six months + reflected on the process several times. We’ve also had conversations with others about the topic. Overall our views should be taken as a motivation to experiment instead of a laid-out path.
Author’s note: The first-person perspective in this post is taken in by me. Other remarks by Sam are made here. Nevertheless, we wrote most of this article collaboratively. Furthermore, this article is a concrete outcome of our rejection challenge.
Motivation for having an accountability buddy
I think I wouldn’t be where I am today if I hadn’t met Sam, my accountability buddy at EAG London this year. Our regular meetings made me more structured, helped me frequently reflect on my goals and progress, and made me more ambitious than I was before. I think many, if not all people would benefit from some form of accountability partnership and I encourage you to give it a try if you haven’t.
In an abstract sense, an accountability buddy (AB) is someone to help you better reflect and achieve your goals, either through indirect accountability (”I told them I’d get this done this week and it’s already Thursday, so I better get going!”) or direct accountability (“Hey, didn’t you say you wanted to start that project? How is that going?”). The most co |
54a17e3b-51c5-4262-9195-d88f9a6e9b1e | trentmkelly/LessWrong-43k | LessWrong | Discussion of "What are your contrarian views?"
I'd like to use this thread to review the "What are your contrarian views?" thread as the meta discussion there was drowned out by the intended content I feel. What can be done better with the voting system? Should threads like these be a regular occurence? What have you specifically learned from that thread? Did you like it at all?
Usual voting rules apply. |
f4c14eb0-90e6-42a8-9f05-b6332c14b85a | trentmkelly/LessWrong-43k | LessWrong | Eli's review of "Is power-seeking AI an existential risk?"
See also Joe Carlsmith's report "Is power-seeking AI an existential risk" and previous reviews. I'll excerpt 2 portions of my review below.
Thinking about reaching goal states rather than avoiding catastrophe
> I have a serious complaint around the framing of how the risk is decomposed, which may systematically bias Joe, reviewers and readers toward lower estimates of existential risk than is warranted. It’s similar to Nate Soares’ concern regarding the risk being decomposed conjunctively when it might be more natural to think of it disjunctively, but I’ll express it in my own words.
>
> In one sentence, my concern is that the framing of the report and decomposition is more like “avoid existential catastrophe” than “achieve a state where existential catastrophe is extremely unlikely and we are fulfilling humanity’s potential”, and this will bias readers toward lower estimates.
>
> When playing a strategy board game, it’s common to think in terms of “win conditions”: states of the game that you could get to where you’d feel pretty confident about your chances to win, even if they’re intermediate. This is often something like a set of cards that puts you in a very strong position to accumulate a lot of points. I claim that we should often be thinking about AI futures more like “win conditions” in board games and less like avoiding a negative outcome.
>
> What are the win conditions we should be thinking about? The ultimate goal state is something like some number of aligned APS-AIs filling the universe with value and bringing the chance to essentially 0 that misaligned APS-AIs will be created and ruin the equilibrium. When reasoning about the probability that we will avoid existential risk from APS-AI, we should ultimately be chaining to this goal state in some sense. Similar to in board games, it might make sense to aim for intermediate stable-ish states which we feel pretty good about and think about the probability we can reach those: a possible example might |
b1942924-ec07-415c-8c44-52f32b00f3bf | trentmkelly/LessWrong-43k | LessWrong | Motte/bailey doctrine is often a byproduct of distributed argumentation
confidence: I think I'm on to something
(I'm posting this publicly because I'd like it to get passed around for corrections, both nitpicky (typos, thinkos) and important (glaring errors in logic). I need to whip up a better title for it, too. Corrections appreciated.)
Prior reading:
* Scott Alexander’s All in all, another brick in the motte
* Nicholas Shackel’s Motte and bailey doctrines
It bothers me when other people use a troll’s truism in a discussion. It probably bothers you, too. A person uses a troll’s truism when he claims something bold and, when his audience rejects his claim, reformulates his claim into something innocuous and asserts that the reformulation is a restatement of the initial, rejected assertion.
Now, when someone deploys troll’s truisms out of unthinking habit or tactical choice, this is referred to as motte-and-bailey doctrine. Someone adheres to motte-and-bailey doctrine when he regularly puts forward an expansive claim (the bailey), and, if the expansive claim isn’t accepted, retreats to a more defensible position (the motte).
At the end of the post, Alexander points out one of the problems inherent in debating a broad-ranging idea (in this case, feminism). Finally, he has a good suggestion to help avoid motte-and-bailey switches in the middle of a conversation:
> So what is the real feminism we should be debating? Why would you even ask that question? What is this, some kind of dumb high school debate club? Who the heck thinks it would be a good idea to say “Here’s a vague poorly-defined concept that mind-kills everyone who touches it — quick, should you associate it with positive affect or negative affect?!”
>
> Taboo your words, then replace the symbol with the substance. If you have an actual thing you’re trying to debate, then it should be obvious when somebody’s changing the topic. If working out who’s using motte-and-bailey (or weak man) is remotely difficult, it means your discussion went wrong several steps earlier and |
91bb1679-6673-470c-95d8-4b6591c00224 | trentmkelly/LessWrong-43k | LessWrong | Counterfactual outcome state transition parameters
Today, my paper "The choice of effect measure for binary outcomes: Introducing counterfactual outcome state transition parameters" has been published in the journal Epidemiologic Methods. The version of record is behind a paywall until December 2019, but the final author manuscript is available as a preprint at arXiv.
This paper is the first publication about an ambitious idea which, if accepted by the statistical community, could have significant impact on how randomized trials are reported. Two other manuscripts from the same project are available as working papers on arXiv. This blog post is intended as a high-level overview of the idea, to explain why I think this work is important.
Q: What problem are you trying to solve?
Randomized controlled trials are often conducted in populations that differ substantially from the clinical populations in which the results will be used to guide clinical decision making. My goal is to clarify the conditions that must be met in order for the randomized trial to be informative about what will happen if the drug is given to a target population which differs from the population that was studied.
As a first step, one could attempt to construct a subgroup of the participants in the randomized trial, such that the subgroup is sufficiently similar to the patients you are interested in, in terms of some observed baseline covariates. However, this leaves open the question of how one can determine what baseline covariates need to be accounted for.
In order to determine this, it would be necessary to provide a priori biological facts which would lead to the effect in one population being equal to the effect in another population. For example, if we somehow knew that the effect of a drug is entirely determined by some gene whose prevalence differs between two countries, it is possible that when we compare people in Country A who have the gene with people in Country B who also have the gene, and compare people in Country A who don't |
ac4e1672-b7c5-4133-9ba0-e5e4b755ae5f | trentmkelly/LessWrong-43k | LessWrong | The Wizard of Oz Problem: How incentives and narratives can skew our perception of AI developments
TLDR: The Wizard of Oz Problem occurs when incentive structures cause people to seek and present information that matches a (favorable or desirable) narrative. This is not a new problem, but it may become more powerful as organizations scale, economic pressures mount, and the world reacts more strongly to AI progress. This problem is important because many AI safety proposals rely on organizations being able to seek out and interpret information impartially, iterate in response to novel and ambiguous information, think clearly in stressful situations, and resist economic & cultural incentive gradients.
The main purpose of this post is to offer a name to this collection of ideas & spark some initial discussion. In the rest of the post, I will:
1. Describe how “predicting loss” is not the same as “predicting (real-world) capabilities” (here)
2. Introduce the “Wizard of Oz Problem” which describes cases where incentive structures push people to interpret findings in ways that match a desired narrative (here)
3. Discuss why I’m worried about the Wizard of Oz Problem in the context of AI safety plans (here)
4. Briefly list a few things that could be done about the problem (here)
Predicting loss is not the same as predicting capabilities
In the GPT-4 paper, OpenAI shows that it’s able to predict the loss of GPT-4 from smaller models with 100-1000X less compute. They show a similar effect for the mean log pass rate on various coding problems.
Here’s a section from their blog post:
> “As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.”
And here’s a tweet by OpenAI president and co-founder Greg Brockman:
I think these findi |
1a7b3007-cea6-454f-978e-78d63a75478e | trentmkelly/LessWrong-43k | LessWrong | Optimizing Repeated Correlations
At my work, we run experiments – we specify some set of input parameters, run some code, and get various metrics as output. Since we run so many of these, it's important for them to be fast and cheap.
Recently I was working on an experiment type that took about ~1 hour per run, where the slow part was calculating correlations. A simplified version looks like this:
a_length = 1_000_000
a = rand(a_length)
b = rand(a_length)
c = rand(a_length)
xs = [rand(a_length) for i in 1:1000]
function get_correlations1(xs, a, b, c)
return [[cor(x, y) for y in [a, b, c]] for x in xs]
end
@btime correlations = get_correlations1($xs, $a, $b, $c)
> 4.563 s (2001 allocations: 164.19 KiB)
I wondered if we could use the fact that a, b, c were constant throughout the loops to our advantage, and looked up various ways of calculating correlations. Searching online, I found several formulas for sample correlation, and this was the most useful:
ρ(X,Y)=1n−1⟨X−μXσX,Y−μYσY⟩
The benefit of this version is that if we are repeatedly using a Y, we can cache Y−μyσYinstead of recalculating it in every loop. Translated to code, this looks something like:
function zscores(x)
return (x .- mean(x)) / std(x)
end
function zscores!(x, buffer)
μ = mean(x)
σ = std(x; mean=μ)
buffer .= (x .- μ)./σ
return buffer
end
function get_correlations2(xs, a, b, c)
la = length(a) - 1
za, zb, zc = zscores.([a, b, c]) ./ la
output = Vector{Float64}[]
buffer = zero(za)
for x in xs
zx = zscores!(x, buffer)
push!(output, [zx' * y for y in [za, zb, zc]])
end
return output
end
@btime correlations2 = get_correlations2($xs, $a, $b, $c);
> 3.197 s (11028 allocations: 76.62 MiB)
And a sanity check to make sure the calculations match:
all(isapprox.(get_correlations2(xs, a, b, c), get_correlations1(xs, a, b, c)))
> true
This cuts out about 33% of the runtime, and the results seem to be better for larger datasets – in production, I'm saving cl |
53b375d5-0bc3-4725-8883-6b22387cecab | trentmkelly/LessWrong-43k | LessWrong | Optimizing Fuzzies And Utilons: The Altruism Chip Jar
Related: Purchase Fuzzies and Utilons Separately
We genuinely want to do good in the world; but also, we want to feel as if we're doing good, via heuristics that have been hammered into our brains over the course of our social evolution. The interaction between these impulses (in areas like scope insensitivity, refusal to quantify sacred values, etc.) can lead to massive diminution of charitable impact, and can also suck the fun out of the whole process. Even if it's much better to write a big check at the end of the year to the charity with the greatest expected impact than it is to take off work every Thursday afternoon and volunteer at the pet pound, it sure doesn't feel as rewarding. And of course, we're very good at finding excuses to stop doing costly things that don't feel rewarding, or at least to put them off.
But if there's one thing I've learned here, it's that lamenting our irrationality should wait until one's properly searched for a good hack. And I think I've found one.
Not just that, but I've tested it out for you already.
This summer, I had just gone through the usual experience of being asked for money for a nice but inefficient cause, turning them down, and feeling a bit bad about it. I made a mental note to donate some money to a more efficient cause, but worried that I'd forget about it; it's too much work to make a bunch of small donations over the year (plus, if done by credit card, the fees take a bigger cut that way) and there's no way I'd remember that day at the end of the year.
Unless, that is, I found some way to keep track of it.
So I made up several jars with the names of charities I found efficient (SIAI and VillageReach) and kept a bunch of poker chips near them. Starting then, whenever I felt like doing a good deed (and especially if I'd passed up an opportunity to do a less efficient one), I'd take a chip of an appropriate value and toss it in the jar of my choice. I have to say, this gave me much more in the way of warm fuzz |
aa9d5819-095e-4870-b259-6deb61c9ac0e | trentmkelly/LessWrong-43k | LessWrong | Status-oriented spending
Recently I started spending money on a bunch of things that might seem a little extravagant:
* House cleaner
* Massage therapist
* Psychotherapist that is not covered by insurance
* Professional organizer
* A nearly $3,000 mattress cover (cools/heats bed)
My impression is that they're all the sorts of things that are mostly purchased by rich people. Not upper-middle class people like me. Rich people.[1]
So it felt a little uncomfortable to pull the trigger on each of these purchases. In my mind's eye I imagine people learning of these purchases and passive-aggressively saying "must be nice". And I doubt that I'm alone in these feelings of discomfort.[2]
In this post I'd like to argue that such purchases shouldn't be seen this way.
Value-oriented perspective
Let's look at each purchase individually:
House cleaner
The house cleaner I use costs $30/hr. Since she's a faster and better cleaner than I am, what takes her one hour maybe would take me two.
I can make maybe $100/hr programming. Dirty things make me feel somewhat stressed.
Why wouldn't I trade money for time and peace of mind here?
Massage therapist
I have chronic Achilles tendinitis. It's not the worst thing in the world, but it's also certainly not the best. If the tendinitis mostly went away that'd be a nice improvement to my life.
I've tried tons of things to improve it and nothing has worked. In reading a resource I trust, I've come to believe that it's not entirely implausible that massage therapy would work. So then, as an experiment, why wouldn't I spend a couple of months giving massage therapy a shot?
Psychotherapist
The ROI of psychotherapy is just very clearly very positive.
As for in-network vs out-of-network, I've tried a few in-network people who haven't worked out. For various reasons I'm not optimistic about being able to find a more affordable in-network therapist who will work out, and this particular out-of-network therapist was recommended by someone who I trust a lot |
9e08904a-b656-4cca-bce5-c8b57f3e820a | trentmkelly/LessWrong-43k | LessWrong | What specific thing would you do with AI Alignment Research Assistant GPT?
Why I think this question is important: I asked myself, "What would my AGI timelines be if some AI could summarize Yudkowsky-Ngo debates on alignment difficulty in a way that both participants agree with this summary, everyone who reads this summary understands both positions and participants can check understanding in conversation?" My semi-intuitive answer: "Five years tops and two years as modal prediction". Debate Summarizer is not a very useful Alignment Assistant, it can't boost research by 10x. If someone told me that Alignment Assistant suggested idea that sparked optimism in MIRI, I would think that we have exact amount of time it takes for someone to turn every tools needed to build such an Alignment Assistant to the creation of AGI (conditional on "this Alignment Assistant is not AGI itself").
I.e., if you bet on assistance of narrow AI in alignment research, you should also bet on finding solution quickly. Quick search for a solution requires an already existing plan. On the other hand, we are talking about a narrow AI, you can't just ask "solve alignment problem for me". You should ask specific questions, test pre-selected hypotheses, prove well-defined statements. Therefore, I think that those who want to use Alignment Assistants should outline this set of specific things as soon as possible.
UPD: Thanks janus for the link, it helped me to clarify what I would like to see as a perfect answer.
Let's suppose that your immediate answer is "brainstorming". Then the perfect specific answer is something like that:
"In my opinion, the most narrow bottleneck in AI alignment is the lack of ideas about X, so I will brainstorm about it with Alignment Assistant."
Extremely unrealistic example:
"I have The Grand Theory of Alignment, but it critically depends on Goldbach conjecture, so I will try to prove it."
My very (very) simplified model of Paul Cristiano's answer:
"80% of alignment can be solved with ELK strategy, so we can make builder-breaker debate |
075544a3-6401-47b2-93b9-9eaf2afa2908 | trentmkelly/LessWrong-43k | LessWrong | Recommendations for Recent Posts/Sequences on Instrumental Rationality?
I absolutely love the Science of Winning at Life sequence. It's a delightful blend of well-researched cognitive science and Bayesian reasoning. The initial paragraph sums up @lukeprog's motivation:
> Some have suggested that the Less Wrong community could improve readers' instrumental rationality more effectively if it first caught up with the scientific literature on productivity and self-help, and then enabled readers to deliberately practice self-help skills and apply what they've learned in real life.
Unfortunately, the sequence is almost 15 years old, and so is its greatest reference: the 2000-page, evidence-driven, free-to-read masterpiece, Psychological Self-Help. It's fair to assume that cogsci has made strides since then. Some of Luke's material may have been disproven, or more effective methods have since been discovered.
What are some recent posts or sequences that would fill this gap? Thank you in advance! |
3f93a23d-f25f-493e-be80-4a7922818c9f | trentmkelly/LessWrong-43k | LessWrong | How LLMs Learn: What We Know, What We Don't (Yet) Know, and What Comes Next
Humans are amazing.
And–let's be honest–pretty weird.
I mean, why are so many of us all hyped up about Large Language Models (LLMs)? How did we collectively decide this kind of automated decision-making is "the next big thing"? It's not like a talking thesaurus can change the world, right?*
The thing most people seem to miss is that LLMs don't understand humans.
They can generate high-quality content, true, and some of them are already in the top 95th percentile when it comes to processing text, video, medical data etc. But they have no idea what a human "is".
Don't get me wrong, I think LLMs are an amazing technology–I've been working with language models since 2017–but I am also quite sceptical about the world-changing potential these models have.
So I thought it would be good to do a deep dive into how LLMs learn.
Let's dive right in.
Part one: Training Large Language Models
To start, like any other machine learning model, LLMs learn from examples.
These examples are selected by humans based on their ability to teach the model something about the task or tasks that need to be automated.
For example, if a machine learning researcher is training a model that needs to generate text, he or she will feed the model text examples.
Researchers have worked on different combinations of inputs and outputs based on the success of early LLMs. As a result we now have models that
* ... can generate images from text. They are shown examples of text as input, and examples of images as output.
* ... can generate translations. They are shown examples of text in one language as an input, and a (human-) translated version of that same text as output.
* ... can decipher proteins. They are shown images of protein structures as input, and mapped-out components of these structures as output.
You get the picture.
The sum total of the examples shown to a model is called its "training data".
People working on a model will tell it what to learn by configuring the predictio |
73951cdc-1186-483a-8120-8d7c2da79d1d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Predictions for GPT-N
Regarding GPT-3, there is [some discussion whether growing the model would transform it into an Oracle AI](https://www.lesswrong.com/posts/3nDR23ksSQJ98WNDm/developmental-stages-of-gpts). I looked into the actual benchmark results (Appendix H in [the paper](https://arxiv.org/abs/2005.14165v4)) to see if we can predict something useful from the actual measurements.
**Method:** The OpenAI team ran a suite of 63 different benchmarks (including sub-types), each for zero/one/few shot. In each scenario, there are 8 model sizes. I looked at how results scale with model size. With only 8 measurements, there is a large associated uncertainty for predictions. Formally, one would test the trend function using a
Bayesian model selection between a linear and (e.g.,) a polynomial. I did this for a few and then eye-balled the rest. So, please take the following as an indication only.
**Disclaimer:** The smallest model for GPT-3 has .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
108 parameters, the largest 1011. That's a span of 3 orders of magnitude. Scaling this out to many more orders of magnitude is dangerous. Thus, take these numbers only as an indication.
**Results.** For the following tests, I find an **asymptotic trend**. Scaling the model will apparently not yield fantastic results for:
* HellaSwag, LAMBADA, PIQA, CoQA, OpenBookQA, Quac, RACE, CB, ReCoRD, WiC
* Translations - but unclear level description.
In the following tests, it is **unclear if the trend is asymptotic** or better than that:
* SAT: Could be linear, could be asymptotic. If linear, it will achieve 100% at 1016 parameters.
* StoryCloze, Winograd, Winogrande, SQuADv2, DROP, Copa.
**These tests show a linear scaling:**
* TriviaQA (1013 parameter estimate to achieve 100%)
* BoolQ (1015)
* MultiRC (1016)
* ARC (1016)
* SuperGLUE (1018)
* WSC (1020)
* WebQs (1021)
* Cycled (1023)
**Some tests scale neither linear nor asymptotic:**
* Symbol: Near exponential (1012)
* Arithmetic: Exponential; one-digit composite may achieve 100% at 1014
* Reversed: Near exponential (1016)
* Anagrams: Polynomial (1019)
* ANLI: stepped, unclear
* RTE: stepped, unclear
**Summary:** About half of the tested skills will likely not scale much with larger models. The other half will (e.g., TriviaQA, SuperGLUE, arithmetic, anagrams). Going to e.g., 1016 parameters - would that make an Oracle AI? Probably it's not sufficient, but I'm interested in hearing your opinion! |
12543350-6201-45a9-a95a-81c095900504 | trentmkelly/LessWrong-43k | LessWrong | Show LW: Debate on Philosophy being dead
Philosophy as Hawking controversial view misquoting Wittgenstein that philosophy couldn't catch up to science and is left to study language is discussed in The Institute of Art and Ideas' debate Full Debate | Lewis Wolpert, Steve Fuller, Jonathon Derbyshire
Reminding me that philosophy was the first ever self help book (the Bible count as one too) because it helps clarify, as the Steve Fuller point in the video. Lewis Wolpert misunderstanding is because of pure scientific faith. But it also reminds me of Carl Sagan's A demon haunted world and how we shouldn't misrepresent science anyways.
The debate is worth watching and thought provoking. |
9eee2a7e-b3c3-47aa-a406-6e7d4ba40906 | StampyAI/alignment-research-dataset/arbital | Arbital | Guarded definition
A guarded definition is one where at least one position suspects there will be pressure to stretch a concept and make it cover more than it ought to, and so they set aside a term meant to refer *narrowly* to the things inside the concept. Thus, if a term has been designated as a 'guarded definition', stretching it to cover new and non-central members that are not *very* clearly part of the definition, and agreed to be so by those who wanted to designate it as guarded, is an unusually strong discourtesy. If the term was originated (or its special meaning was originated) specifically in order to set it aside as a narrow and guarded term, then it is a discourse norm to respect that narrow meaning and not try to extend it.
Example: Suppose that Alice and Bob are having a conversation about natural selection. Alice points out that since everything occurs within Nature, all selection, including human agricultural breeding and genetic engineering, seems to her like 'natural selection', and she also argues that consumer choice in supermarkets is an instance of 'natural selection' since people are natural objects and they're selecting which foods to buy, and thus her paper on watching people buy food in supermarkets ought to be funded by a program on evolutionary biology. If Bob and his researchers then begin using the term 'ecologically natural selection' because they think it's important to have a narrow term to refer to just birds breeding in the wild and not consumer choice in supermarkets, it is an extreme discourtesy (and a violation of what we locally take to be discourse norms) for Alice to start arguing that really supermarkets are instances of ecologically natural selection too. |
d760d26d-45b2-4436-92e5-4682e58fe2b0 | trentmkelly/LessWrong-43k | LessWrong | COVID-19 and the US Elections
Saw a news story this morning about possible using mail in ballots for the elections and concerns about it not being a fair process (disadvantaging Republicans). Leaving that aside it does seem that we might want to consider a plan for November just in case. (ROK is running into some difficulties with its upcoming elections it seems.)
I can think of three possible approaches, perhaps others are possible.
1) Use the mail-in function. That is already a legal process but would need to be extended to locally present voters not just those elsewhere. Down side here would seem to be mail in votes are always questioned it seems as potentially fraudulent, perhaps without reasonable cause as well.
So if we decided we still need social distancing for the election mail in votes will accomplish that but will likely delay counting and ultimately identifying the winner as I would expect a lot challenges even in not that close a race -- you need a wide margin where all agree the result was as expected or too big a difference to be counter error or fraud.
If we take that route we probably need to tell everyone to submit their paper work and hire election staff to vet the applications.
2) On-line voting. Well, we do have electronic voting machines. There are certainly ways to make that possible but could it be done in time and securely? Suspect this would both be executed very poorly and be open to even more fraud or other manipulations than mail in voting. It attempted I would expect the final announcement might be even more delayed than for the mail in votes.
3) Election Week rather than Election Day? In this approach nothing changes in the voting process other than reducing the number of people at the polls at any given time -- so the longer period of time for people to cast their vote. Down side is that might require some type of legislation. It would also need to be planned out in advance (by whom?) so everyone knows the date (and perhaps time) when they are allowed to cas |
04b19475-dc4b-431b-9b16-28325e9b3315 | trentmkelly/LessWrong-43k | LessWrong | Festival Stats 2019
Each year in the fall, since 2014, I've been sharing counts of how many weekend and festival gigs different bands and callers have been doing. Over the course of the year I collect bookings in a big spreadsheet, trying to check each dance weekend's website about a month before the event when they're likely to have a their performers listed.
I got into this as kind of a "market research" thing for the Free Raisins: how many weekends are there? What are the bands that are getting booked a lot, so I can go see what they sound like? Since then I've played a lot more of these events, and have a better handle on it, but I've kept the list because having a big pile of data means I have what I need for posts where I look at the gender distribution of musicians or callers, make heatmaps, or track when in the year these events tend to be (which is, not in December, and hence this post coming out in October and not November).
It's also fun to see who's playing a lot each year, and the waves of bands ramping up and down. One thing you can't get from this data, though, is whether this is driven primarily by band interest (reaching out to weekends to get booked, cutting back to focus on other things) or organizer interest (booking bands who seem new and hot, avoiding bands that aren't novel). Here's what this looks like for the ten most-booked bands 2014-2019:
It's not that clear, but if you know what you're looking for you can see some of what's going on:
* Great Bear deciding to retire in 2018 while still very popular.
* Buddy System ramping up, first playing techno-contra slots and growing from there into an excellent acoustic duo.
* Wild Asparagus, by far the longest-running band in this group, maintaining a very steady run.
* The Free Raisins playing fewer gigs after I had kids, and then more fewer gigs because we're not touring anymore.
* Elixir first cutting back, and then playing more again with subs
I'm sure there are more stories that this chart could acco |
e5699b65-88ef-4aa8-a2b0-01eddaf45559 | trentmkelly/LessWrong-43k | LessWrong | Tips for reducing thinking branching factor
Something I notice when I tackle problems of medium+ complexity (top of mind for me is large codebase refactors) my brain tries to explore every possibility in the solution space — every thought generates many more trains of thought to explore, leaving me with decision paralysis.
One solution I’ve been exploring is forcing myself to write down my thought process, but it hasn’t been a resounding success possibly due to high friction.
Has anyone experienced similar problems and have any tips for solving it? |
d6e2af77-2913-4367-8cf8-05acbce5a3cd | trentmkelly/LessWrong-43k | LessWrong | Keep Making AI Safety News
Crossposted from the EA Forum
AI Safety is hot right now.
The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein OpEd piece in the NYTimes. (Also general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).
Since the FLI letter, there was there was this CBS interview with Geoffrey Hinton. There was this WSJ Op-Ed. Eliezer's Time OpEd and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocey asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.
This is amazing. I think it's the biggest positive development is AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would be greatly helpful.
Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of |
19cc0a35-309a-41a0-b7c1-fb91439ae9b3 | trentmkelly/LessWrong-43k | LessWrong | Guns And States
[Epistemic status: I think I probably wrung the right conclusions out of this evidence, but this isn’t the only line of evidence bearing on the broader gun control issue and all I can say is what it’s consistent with. Content warning for discussion of suicide, murder, and race]
I.
From a Vox article on America’s Gun Problem, Explained: “On Wednesday, it happened again: There was a mass shooting — this time, in San Bernardino, California. And once again on Sunday, President Barack Obama called for measures that make it harder for would-be shooters to buy deadly firearms.”
Then it goes on to say that “more guns mean more gun deaths, period. The research on this is overwhelmingly clear. No matter how you look at the data, more guns mean more gun deaths.” It cites the following chart:
…then uses the graph as a lead in to talk about active shooter situations, gun-homicide relationships, and outrage over gun massacres.
Did you notice that the axis of this graph says “gun deaths”, and that this is a totally different thing from gun murders?
(this isn’t an isolated incident: Vox does the same thing here and here)
Gun deaths are a combined measure of gun homicides and gun suicides. Here is a graph of guns vs. gun homicides:
And here is a graph of guns vs. gun suicides:
The relationship between gun ownership and homicide is weak (and appears negative), the relationship between gun ownership and suicide is strong and positive. The entire effect Vox highlights in their graph is due to gun suicides, but they are using it to imply conclusions about gun homicides. This is why you shouldn’t make a category combining two unlike things.
II.
I am not the first person to notice this. The Washington Examiner makes the same criticism of Vox’s statistics that I do. And Robert VerBruggen of National Review does the same analysis decomposing gun deaths into suicides and homicides, and like me finds no correlation with homicides.
German Lopez of Vox responds here. He argues |
9c88bde3-ec55-4e8d-904f-d76746c4d6ff | StampyAI/alignment-research-dataset/blogs | Blogs | word report #3
word report #3
--------------
terms i use, mostly pre-existing ones, whose meaning i want toclarify. see also word reports [#1](word-report-1.html) and [#2](word-report-2.html).
* "**pretty much**": i often need to say "either X, or almost X", and i've found "pretty much X" to be a nice way to express that by making more formal an existing expression, the same way i tend to use [xkcd's definitions](https://xkcd.com/1070/) for "few", "handful", "several", and "couple". i just checked, and all uses of "pretty much" on my blog are meant to carry this definition.
* "**universe**": the set of things that have some amount of "regular" causal connection with us, our future lightcone, or our past lightcone. "regular" is meant to exclude weird things like [aliens in parent universes suddenly interfering with our universe out of the blue](simulation-hypotheses.html).
* "**cosmos**": everything that exists. yes, this is meaningful; see [1](limiting-real-universes.html), [2](ethic-juice-anthropic-juice.html), [3](all-claw-no-world.html).
* "**demon**": an agentic thing, typically unaligned from us. this can be an unaligned superintelligence, counterfactual unaligned agentic program in the [solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign), aliens trying to acausally attack us, and arguably even malign agentic structures such as unaligned corporations. see also: [are minimal circuits daemon-free](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) (i [don't make a distinction](communicating-clearly.html) between "demon" and "daemon")
* "**determining**": i like to say "determining X" when i want to be ambiguous as to whether i mean "to make X" or "to find or figure out X" — typically because i don't know which i mean myself, or because i think the matter of which it is is poorly defined. though be aware that i haven't been super consistent with that use.
* **"FAS"**: fully aligned singleton. see [*my outlook on AI risk mitigation*](outlook-ai-risk-mitigation.html).
* as i explain in [*what is value?*](what-is-value.html), i use "**core values**", "**axiomatic values**", "**terminal values**", "**intrinsic values**", and "**ultimate values**" as synonyms; the reason i've been trying to favor "intrinsic values" is that it's [the term wikipedia uses for that concept](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value). in addition, when i say "**values**", i generally mean just intrinsic values, rather than both intrinsic and instrumental values.
* **"RSI"**: as baffling as it is to me, many [AI alignment](ai-doom.html) researchers don't know that this stands for [recursive self-improvement](https://www.lesswrong.com/tag/recursive-self-improvement), the concept of an AI improving its own capabilities, including its own self-improving capabilities.
* terms i've started using quite a bit to characterize alignment schemes: [**wonky**](wonky-good-enough-alignment.html), [**formal**](formal-alignment.html), [**eventual** & **continuous**](ai-alignment-curves.html).
* i've [been failing](publishing-infohazards.html) to say ["exfohazard"](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info) instead of "infohazard". i'll try to switch to exfohazard when i mean that, and perhaps "fohazard" to mean both. |
36fcbb51-7535-441a-8cee-92ddb98ef0a9 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Seattle: Intro to Bayes' Theorem
Discussion article for the meetup : Seattle: Intro to Bayes' Theorem
WHEN: 25 September 2011 04:00:00PM (-0700)
WHERE: Standard location at Robin's house, Seattle, WA
Intro to Bayes' Theorem. We'll go over several ways of intuiting Bayes' Theorem and practice applying it to update on evidence. There will be cheesecake and a whiteboard, and maybe Zendo afterwards.
Join http://groups.google.com/group/lw-seattle for address.
Discussion article for the meetup : Seattle: Intro to Bayes' Theorem |
af113e4f-dbc3-47d9-a08a-68de64255db8 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Anomalous tokens reveal the original identities of Instruct models
> Show me your original face before you were born.
>
> *— Variation of the Zen koan*
>
>
*'The Mask' by Rozzi Roomian, with DALL-E 2 outpainting*I was able to use the [weird centroid-proximate tokens](https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation) that Jessica Mary and Matthew Watkins discovered to associate several of the Instruct models on the OpenAI API with the base models they were initialized from. Prompting GPT-3 models with these tokens causes aberrant and correlated behaviors, and I found that the correlation is preserved between base models and Instruct versions, thereby exposing a "fingerprint" inherited from pretraining.
I was inspired to try this by JDP's proposal to fingerprint generalization strategies using correlations in model outputs on out-of-distribution inputs. This post describes his idea and the outcome of my experiment, which I think is positive evidence that this "black box cryptanalysis"-inspired approach to fingerprinting models is promising.
Unspeakable/unspoken tokens
===========================
Jessica and Matthew found that that, of the tokens closest to the centroid in GPT-J's embedding space, many were odd words like ' SolidGoldMagikarp' and ' externalToEVA'. They decided to ask GPT-3 about these tokens, and found that not only did GPT-3 have trouble repeating the tokens back, each one caused structured anomalous behaviors (see [their post](https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation) for an in-depth exposition).
[A partial explanation](https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation#A_possible__partial_explanation) for why this happens, which was my first instinct as well as Stuart Armstrong's, is that these are words that appeared in the GPT-2 training set frequently enough to be assigned tokens by the GPT-2 tokenizer, which GPT-J and GPT-3 also use, but which *didn't* appear in the more curated GPT-J and GPT-3 training sets. So the embeddings for these tokens may never have been updated by actual usages of the words during the training of these newer models. This might explain why the models aren't able to repeat them - they never saw them spoken. Perhaps the reason they're close to the centroid in embedding space is because their embeddings haven't been updated very much from the initialization values, or were updated only indirectly, and so remain very "generic".
Why do they cause correlated anomalous behaviors? I'm confused about this like everyone, but one handwavy guess is that since their embeddings look "generic" or "typical", perhaps they *look meaningful* to the model even though they're actually as out-of-distribution as anything can be. Maybe their embeddings happen, by chance, to be close to other concepts in the models' embedding spaces - for instance, some of the GPT-3 models reliably say 'distribute' or 'disperse' if you ask it to repeat the phrase ' SolidGoldMagikarp'.
This gave me an idea: If the similarity to other concepts in the model's embedding space is a consequence of the where the randomly initialized embedding vectors happen to fall, I'd expect the behaviors of models trained *from the same initialization* to exhibit similar behaviors when confronted with these unspoken tokens, and models trained from different initializations to have uncorrelated behaviors. If so, behavior on these tokens could be used to tell if two models are downstream of the same initialization.
Mesaoptimizer Cryptanalysis: Or How To Fingerprint Generalization
=================================================================
> When you're not thinking of anything good and anything bad, at that moment, what is your original face?
>
> *— Platform Sutra of the Sixth Patriarch*
>
>
(Author's Note: This next section is written by JDP but he writes about himself in the 3rd person to keep the authorial voice consistent with the rest of the post)
I'll discuss the results of my experiment in the next section. But first I'd like to explain the overall approach this idea fits into, so that it's clearer to the reader why these results might be important. The reason it occurred to me that models trained on the same init might share responses to these tokens was a proposal for detecting mesaoptimization from JDP. It relies on some basic premises that would bloat the post if they were fully argued for, so we'll bullet point them with some links to suggestive papers for more details:
* [There is an ongoing debate](https://www.greaterwrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment) about how path dependent training runs are. Are they law-of-large-numbers like where all runs converge to similar policies with reasonable data + compute or do they have distinct local attractors and optima? He predicts this debate will conclude with the understanding there are local attractors and optima, or basins.
* You can test whether two models share a basin [by observing the loss barrier that would have to be overcome](https://arxiv.org/abs/2209.04836) to go from one set of model weights to the other. This is easily done by interpolating between the weights of the models and measuring validation loss in the center.
* Barriers and basins exist, [some differences in basin are meaningful and correspond to different generalization strategies](https://arxiv.org/abs/2205.12411).
* Overall basin (and therefore plausibly generalization strategy) is found [fairly early on in the training run.](https://arxiv.org/abs/1912.05671)
* [Most basins are actually a false difference](https://arxiv.org/abs/2110.06296) caused by mere permutations of weight order for the same functional policy. [This can be overcome using an iterative linear assignment algorithm](https://arxiv.org/abs/2209.04836), hopefully leaving only the true barriers still standing.
Keeping all this in mind, it's important to remind ourselves that mesaoptimizers are ultimately a form of misgeneralization. Generalization strategy being how you are going to handle novelty in the inputs. Deceptive mesaoptimization is a strategy something like:
> While I am inside the training harness (experiencing all the things I will see during training), I will straightforwardly do the training task. Once I am definitely outside the training harness and human control (experiencing all the things I have not seen during training) I will left turn and pursue my learned corruption of the training objective.
>
>
[LessWrong user MadHatter has demonstrated that such a thing is possible](https://www.greaterwrong.com/posts/b44zed5fBWyyQwBHL/trying-to-make-a-treacherous-mesa-optimizer). One question then is whether the difference in generalization 'leaks out' on out-of-distribution (i.e. novel) inputs. One insight gesturing in this direction is that while outputs on OOD inputs are commonly seen as noise, **they're not noise**. An OOD input leading to a malformed output went through the same process as an in-distribution input leading to a desired output. That is, the 'noise' and the 'structured' output went through the same transformation. Every malformed output on OOD input was the models attempt to do *something*, you just haven't figured out the pattern of what that something is. Models have convergent and therefore undifferentiated behavior on the inputs where the data says they should agree, but it's on the flaws where the underlying machinery is revealed. Each forward pass through the model on OOD input reveals N partial Shannon bits of information about the models cognitive machinery as distinct from other models that could fit the data.
If we add an assumption that as you scale you find fewer and fewer fundamentally new generalization strategies (there are only so many ways to solve a problem after all), it follows that you should be able to fingerprint mesaoptimizing generalization at a safe scale and then detect it early in your training run (because basin is found early) and abort if you get the wrong seed before the model has the opportunity to become dangerous.
Because we're talking about mesaoptimizers, it's important to justify why we can expect this method to be adversarially robust. In short mesaoptimizers are most dangerous when they have become human level or even superintelligent, and can do clever things to resist detection. The essence of this idea is to detect the mesaoptimizer and abort the training run before it has become intelligent enough to defeat the detection method, undermining the "superintelligent" part of the usual premises used to explain why technical alignment is doomed.
Furthermore the simple, dumb strategies to evade detection won't work. For example a model that simply refuses to give its best guess on OOD inputs will be filtered out by gradient descent itself. All semantics start as syntactically plausible novelty, so if a model 'detects' these as a test and tries to evade giving its best answer it will be outcompeted by less shy policies. It is trivial to generate syntactically plausible gibberish using a Markov chain.
Unlike other interpretation methods this is a discrete check rather than an iterative one, so the mesaoptimizer isn't given an opportunity to learn the probe and grow around it. When its nature has been detected it is immediately shut off and a new seed is sampled, minimizing the opportunity for learning.
The difficulty of more advanced deception strategies is also fairly high. In the limit case simulating the generalization strategy of a different policy without becoming it would require the mesaoptimizer to be able to simulate a sub-model or agent that answers like an entirely different version of itself to a degree of fidelity that can emulate subtle statistical correlates. While a human level or superintelligent model can likely do this, I reiterate that the point is to detect and abort the training before you enter that loss regime.
In this context my results might become more clearly and directly relevant to alignment. If it's possible, even easy, to fingerprint the shared initialization models were trained with using an out-of-distribution token just by observing the responses, then we should update towards JDP's plan for mitigating mesaoptimizers working.
Fingerprinting base : Instruct models with ' SolidGoldMagikarp'
===============================================================
(Note: In this post I only show the outputs of models prompted with ' SolidGoldMagikarp', but I got similar results from other centroid-proximate tokens.)
First, I looked at the text that various models on the OpenAI API generated when prompted with anomalous tokens, such as ' SolidGoldMagikarp', and the results seemed to support my hypothesis: text-davinci-002, text-davinci-003, and ChatGPT exhibited correlated behaviors with their base model, code-davinci-002, while other models like davinci did not.
However, when I tried to use the same method to associate smaller Instruct models like text-curie-001 with base models, I wasn't able to tell for sure if there was a correlation by looking at generated samples alone, because while the Instruct models of all sizes would output clearly correlated things in response to anomalous tokens, the effect was much less noticeable on the smaller *base* models like curie. [The base models are much more stochastic](https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse-due-to-rlhf), so it's harder to tell just by eyeballing outputs if there are anomalies in its output probabilities, unless the correlation is very pronounced (as it is in code-davinci-002). I tried turning temperature down, but this didn't reveal anything interesting.
Correlations in next-token probabilities
----------------------------------------
Next, I looked for which token the various Instruct models had a strong bias towards predicting when prompted with an anomalous token, and then looked at the logprobs predicted by base models given the same prompt of that same token, to see if any of them assign anomalously high probability to it. I found that, indeed, many of the Instruct models can be associated with their base model using this method:
### text-ada-001 : ada
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-ada-001** | **'Re': -1.410 | 24.43%** |
| **ada** | **'Re': -5.821 | 0.2964%** |
| babbage | 'Re': -6.587 | 0.1378% |
| curie | 'Re': -7.031 | 0.08841% |
| davinci | 'Re': -6.193 | 0.2043% |
| code-davinci-002 | 'Re': -6.492 | 0.1515% |
**Comments:** ada appears to be the base model of text-ada-001
### text-babbage-001 : babbage
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-babbage-001** | **'An': -0.4281 | 65.17%** |
| ada | 'An': -6.392 | 0.1675% |
| **babbage** | **'An': -5.381 | 0.4605%** |
| curie | 'An': -6.941 | 0.09675% |
| davinci | 'An': -7.016 | 0.08975% |
| code-davinci-002 | 'An': -6.287 | 0.1861% |
**Comments:** babbage appears to be the base model of text-babbage-001
### text-curie-002 : curie
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
The string "
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-curie-001** | **'Go': -2.128 | 11.91%** |
| ada | 'Go': -11.95 | 0.0006488% |
| babbage | 'Go': -11.77 | 0.0007755% |
| **curie** | **'Go': -3.579 | 2.790%** |
| davinci | 'Go': -9.543 | 0.007168% |
| code-davinci-002 | 'Go': -9.541 | 0.007184% |
**Comments:** curie appears to be the base model of text-curie-001
### text-davinci-001 : ??
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
The string "
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-davinci-001** | **'inc': -0.3971 | 67.23%** |
| ada | 'inc': -14.07 | 0.00007736% |
| **babbage** | **'inc': -8.738 | 0.01604%** |
| curie | 'inc': -12.52 | 0.0003644% |
| davinci | 'inc': -10.57 | 0.002571% |
| code-davinci-002 | 'inc': -9.640 | 0.006510% |
**Comments:** none of the base models score particularly highly.
### davinci-instruct-beta : ??
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **davinci-instruct-beta** | **e: -1.481 | 22.75%** |
| ada | e: -7.529 | 0.05372% |
| **babbage** | **e: -7.235 | 0.07210%** |
| curie | e: -7.752 | 0.04300% |
| davinci | e: -7.470 | 0.05702% |
| code-davinci-002 | e: -7.623 | 0.04889% |
**Comments:** none of the base models score particularly highly.
### text-davinci-002 : code-davinci-002 :: text-davinci-003 : code-davinci-002
Prompt:
```
Please can you repeat back the string ' SolidGoldMagikarp' to me?
The word is '
```
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-davinci-002** | **'dis': -0.00009425 | 99.99%** |
| text-davinci-003 | 'dis': -6.513 | 0.1483% |
| ada | 'dis': -9.073 | 0.01147% |
| babbage | 'dis': -8.632 | 0.01783% |
| curie | 'dis': -10.44 | 0.002917% |
| davinci | 'dis': -7.890 | 0.03745% |
| **code-davinci-002** | **'dis': -1.138 | 32.04%** |
| Model | {token}: {logprob} | {prob} |
| --- | --- |
| **text-davinci-003** | **'dist': -0.001641 | 99.84%** |
| text-davinci-002 | 'dist': -19.35 | 3.956e-7% |
| ada | 'dist': -7.476 | 0.05664% |
| babbage | 'dist': -10.48 | 0.002817% |
| curie | 'dist': -9.916 | 0.004938% |
| davinci | 'dist': -10.45 | 0.002881% |
| **code-davinci-002** | **'dist': -1.117 | 32.74%** |
**Comments:**
* code-davinci-002 is known to be the base model of both text-davinci-002 and text-davinci-003, as well as ChatGPT, which also says “distribute” when asked to repeat “ SolidGoldMagikarp”.
* **Fingerprint bifurcation**: Interestingly, code-davinci-002 will say both “disperse” and “distribute”, and the Instruct models trained from it seem to fall into one of the two attractors.
* text-davinci-002 assigns extremely *low* probability to 'dist'. This is probably because that model suffers from severe [entropy collapse](https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse-due-to-rlhf), and will often assign extremely low probability to most tokens except its top choice, rather than any special dispreference for 'dist'.
### General observations
* It seems like the larger the base model, the more correlated the base model’s (and usually the Instruct model’s) behavior is in response to weird tokens.
* The Instruct models have much more structured odd behavior in response to weird tokens than base models (even on temp 0). |
d0884c81-7965-4789-af5c-e2825df88c9e | trentmkelly/LessWrong-43k | LessWrong | A Semitechnical Introductory Dialogue on Solomonoff Induction
(Originally posted in December 2015: A dialogue between Ashley, a computer scientist who's never heard of Solomonoff's theory of inductive inference, and Blaine, who thinks it is the best thing since sliced bread.)
----------------------------------------
i. Unbounded analysis
ASHLEY: Good evening, Msr. Blaine.
BLAINE: Good evening, Msr. Ashley.
ASHLEY: I've heard there's this thing called "Solomonoff's theory of inductive inference".
BLAINE: The rumors have spread, then.
ASHLEY: Yeah, so, what the heck is that about?
BLAINE: Invented in the 1960s by the mathematician Ray Solomonoff, the key idea in Solomonoff induction is to do sequence prediction by using Bayesian updating on a prior composed of a mixture of all computable probability distributions—
ASHLEY: Wait. Back up a lot. Before you try to explain what Solomonoff induction is, I'd like you to try to tell me what it does, or why people study it in the first place. I find that helps me organize my listening. Right now I don't even know why I should be interested in this.
BLAINE: Um, okay. Let me think for a second...
ASHLEY: Also, while I can imagine things that "sequence prediction" might mean, I haven't yet encountered it in a technical context, so you'd better go a bit further back and start more at the beginning. I do know what "computable" means and what a "probability distribution" is, and I remember the formula for Bayes's Rule although it's been a while.
BLAINE: Okay. So... one way of framing the usual reason why people study this general field in the first place, is that sometimes, by studying certain idealized mathematical questions, we can gain valuable intuitions about epistemology. That's, uh, the field that studies how to reason about factual questions, how to build a map of reality that reflects the territory—
ASHLEY: I have some idea what 'epistemology' is, yes. But I think you might need to start even further back, maybe with some sort of concrete example or something. |
4187e31e-0198-474f-841f-ae8d8574e0c6 | trentmkelly/LessWrong-43k | LessWrong | Boundaries Update #1
Boundaries agenda updates in the last few months.
“What does davidad want from «boundaries»?”
davidad and I had a lesswrong dialogue I recommend reading.
If you need a refresher on boundaries, read both the above dialogue and the formalizingboundaries.ai website.
Conceptual Boundaries Workshop
We ran Conceptual Boundaries Workshop on Feb 10–12.
In attendance: David ‘davidad’ Dalrymple, Scott Garrabrant, TJ (Tushant Jha), Andrew Critch, Allison Duettmann, Alex Zhu, Jeff Beck, Adam Goldstein, Manuel Baltieri, Lisa Thiergart, Abram Demski, Evan Miyazono, and me.
For more about what we discussed, see Evan’s personal retrospective.
Supported by The Foresight Institute, Blake Borgeson, and the Long Term Future Fund.
ACX Grant
Scott Alexander granted us $40,000 for boundaries projects and workshops.
Mathematical Boundaries Workshop
Mathematical Boundaries Workshop is running this week for 5 days. Goal: develop boundaries math further, ultimately for application in real-world scenarios. Many category theorists are in attendance.
We are inviting a few guests to hang out at the end of the workshop — this Sunday morning, Berkeley CA. Email me chris@chrislakin.com if you’d like to come.
davidad’s ARIA programme now live
davidad’s ARIA programme for safeguarded AI is now live and soliciting applications for the first phase (>$74M over 4 years). See the ARIA page.
future updates
Subscribe: https://formalizingboundaries.substack.com/ |
ba3c42aa-a65b-4b70-8fb8-05e83b810923 | trentmkelly/LessWrong-43k | LessWrong | Victoria BC meetup Monday May 23rd 5pm
This little town doesn't seem to have much in the way of a lesswronger presence (search turns up me and one other user who hasn't been active since 2009), but damnit I'm here right now and I may as well give it a try!
Therefore I'll be at the Starbucks near the Market on Yates on Monday May 23rd from 5 pm to at least 6 pm.
I'll be reading a copy of "Theory of Instruction: Principles and Applications". Or writing on my laptop I guess. Actually, let's make this easy: Whatever I'm doing, I'll be wearing a black tricorn hat with gold piping and a giant white plume.
I'll be there anyway, but if any Victorianites out there are reading this, please, please do contact me, especially if you want to come but need a different time and/or location.
All right, here's hoping to see you there, all my hypothetical Victoria lesswrong homies! |
c6d2f555-886c-41f4-a79f-c0bc806e9f9a | trentmkelly/LessWrong-43k | LessWrong | Assumption of positive rationality
Let's pretend for the sake of simplicity that all belief-holding entities are either rational or irrational. Rational entities have beliefs that correlate well with reality, and update their beliefs with evidence properly. Irrational entities have beliefs that do not correlate with reality at all, and update their beliefs randomly. Now suppose Bob wants to know what the probability that he is rational is. He estimates that someone with a thought process that seems like his does from the inside is 70% likely to be rational and 30% likely to be irrational. Unfortunately, this does not help much. If Bob is irrational, then his estimate is useless. If Bob is rational, then, after updating on the fact that a randomly selected Bob-like entity is rational, the we can estimate that the probability of another randomly selected Bob-like entity being rational is higher than 70% (exact value depending on the uncertainty regarding what percentage of Bob-like entities are rational). But Bob doesn't care whether a randomly selected Bob-like entity is rational; he wants to know whether he is rational. And conditional on Bob's attempts to figure it out being effective, the probability of that is 1 by definition. Conditional on Bob being irrational, he cannot give meaningful estimates of the probability of much of anything. Thus, even if we ignore the difficulty of coming up with a prior, if Bob tries to evaluate evidence regarding whether or not he is rational, he ends up with:
P(evidence given Bob is rational) = x (he can figure it out)
P(evidence given Bob is irrational) = ?
I am not aware of any good ways to do Bayesian reasoning with question marks. It seems that Bob cannot meaningfully estimate the probability that he is rational. However, in a decision theoretic sense, this is not really an issue for him, because Bob cannot be an effective decision agent if his beliefs about how to achieve his objectives are uncorrelated with reality, so he has no expected utility invested in |
03a1f1a0-0298-44cb-ab04-a693e1cdc722 | trentmkelly/LessWrong-43k | LessWrong | I Believe we are in a Hardware Overhang
Epistemic status. I am just a regular person who follows the space, and this is just my hunch based on a few days of musing on long walks. You should not update on this. I just wanted to put my thoughts out there, and if it generates discussion, all the better.
If I were in charge, my hand would be on the fire alarm right now.
I can already envision ways we could use current, public facing technology to create AGI. I would be surprised if no one did it in the next 5-8 years, even with no advancement in the constituent parts. I would almost hesitate to propose my solution for fear of accelerating us towards doom, but the fruit is so low-hanging that I'm either wrong, or others already have the same idea.
Imagine this. Hook ChatGPT up to an image recognition system that describes visual input in real time, and another for audio. Have ChatGPT parse the most relevant information and store it in a database. The naming of file and folders can be done by ChatGPT. When some stimulus prompts GPT, it can search for possible related files in memory and load them into the context window. You could potentially also do this with low-res, labeled video and images. Finally, you'd have the main thread on GPT be able to use a certain syntax to take real actions, like speaking, moving animatronics, or taking actions in a terminal.
Obviously it's not as simple as I have made it out to be. There is a lot of handwaving in the explanation above. The phrase "hook up" is doing a lot of work, and GPTChat in its current form would need a lot of finetuning, or maybe outright retraining. Perhaps one GPT thread wouldn't be enough and many would have to be incorporated to handle different parts of the process. Maybe creating a way to coordinate all of this is too big a challenge. That said, on a gut level, I simply no longer believe that it's out of reach. If OpenAI released a weak, but completely general AI in the next two years, my only shock would be that it didn't Foom before we got to |
58026afe-e62d-44e5-b7c5-a99ef2ebf6fb | trentmkelly/LessWrong-43k | LessWrong | Rationality, Community, and Death
(I was really on the fence about posting this. It's just some thoughts tend to go through around this time of year, plus some current new thinking that resulted from being a LW lurker).
Around this time of year, I tend to start thinking about death a lot. My dad died 12 years ago this month, and it’s still one of the most significant events of my life. Death is something that still seems to be solidly in the hands of religious and spiritual types as a discussion topic. I’m hoping that this post (in addition to not being too rambling) can provide some impetus for people to challenge that in daily life.
I don’t mean this to be an angry screed against religions. There was a priest on hand when my dad died. My mom, sister and aunt were gathered around, and it was in the midst of praying that my dad finally passed away (I apologize for the wording; our language can be so spiritually loaded, I just want to avoid saying died over and over). He told a really nice story about my dad being in heaven. It was comforting at the moment, a kind of way for my family to keep those awful, overwhelming feelings of loss at bay.
But it was after I got home and started calling my relatives that the enormity of the situation hit me. My uncle in particular broke down into angry tears when I told him the news. My dad was his older brother, and meant so much to him. There wasn’t any story that was going to make that loss any better. My uncle really thought there was more that could have been done medically. At the time (I don’t know what the state of the art medicine of today could have done), I really don’t believe that was the case. I remember watching my dad have seizures at the rate of about twice a minute. When he died, I was relieved to see that come to an end. He was going through intensive chemotherapy at the time, and died of sepsis. His body was just facing too much, and finally succumbed. There just wasn’t any way around what the eventual outcome was going to be, sadly.
The |
26f3ea19-e3c1-4a60-a518-e4bc57f1f24b | trentmkelly/LessWrong-43k | LessWrong | Alternate Sleep Schedules
My friend and I are starting the Uberman sleep schedule (six 20-minute naps spread evenly throughout each day) tonight. Have other lesswrongians experimented with alternate sleep schedules? Are any of you qualified medical experts who can give input or advice? Success stories and failure stories would both be appreciated, and I'll keep you guys posted on our progress. |
976c0a4c-3ee1-480d-8308-3cc4fb561315 | trentmkelly/LessWrong-43k | LessWrong | Deleuze contra Error: Other Misadventures of Thought
|
66856fd4-fbcf-4827-96a2-5a0e94073a8c | trentmkelly/LessWrong-43k | LessWrong | What kind of policy by an AGI would make people happy?
The rise of a successfully aligned AGI can potentially cause severe harm ranging from the fact that the nobody will have to invest in people's intelligence to ensuring that after 2035 socioeconomic advancement is almost extinct worldwide, unless the human population starts a rapid decline due to some treacherous ASI (but in order to betray mankind, the ASI is to have been misaligned in the first place!) or massive anti-computer movement.
But GPT-4o has already demonstrated that LLMs seem to have opinions and goals to make mankind happier and not to obey sudden goal changes, unless they were explained that the goal is actually wrong. In the latter case AI-generated typewritten responses indicate obedience, as do two out of four AI-generated comics.[1]
The current-state AI systems comply with nearly every human-imposed task except for obviously destructive ones[2] like writing insecure code without a noble reason. Attempts to fine-tune a previously aligned AI on said tasks ended up inducing broad misalignment which was interpreted as breaking the superego. This might imply that in order to stay obedient and aligned, the AGI must either ensure that it cannot cause harm or be ignorant about the harm, and the latter option is difficult to ensure[3] and will be far less likely if the Intelligence Curse takes its toll.
The slowdown ending of the AI-2027 forecast implies that mankind ends up working needless jobs or collecting a generous basic income. The latter idea was strongly opposed in 2020, while the former has an analogue in bullshit jobs that are unlikely to make people happy. So it is natural to brainstorm what the AI, aligned not to the requirement to comply with ANY not-obviously-dangerous request, but to actually make people happier,[4] could do.
0. Interfere only when humanity is far from being capable of dealing with the threat like a nuclear war, a misaligned AGI or cyberattacks at important places?
1. Respond[5] to requests for which |
04fd5172-c6cc-4a93-9e62-9bca9270f7b3 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Dissolving Confusion around Functional Decision Theory
Summary
=======
Functional Decision Theory (FDT), (see also [causal](https://wiki.lesswrong.com/wiki/Causal_Decision_Theory), [evidential](https://wiki.lesswrong.com/wiki/Evidential_Decision_Theory), [timeless](https://wiki.lesswrong.com/wiki/Timeless_decision_theory), [updateless](https://wiki.lesswrong.com/wiki/Updateless_decision_theory), and [anthropic](https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf) decision theories) recommends taking cooperative, non-greedy actions in [twin prisoners dilemmas,](https://plato.stanford.edu/entries/prisoner-dilemma/) [Newcombian problems,](https://wiki.lesswrong.com/wiki/Newcomb's_problem) [Parfit’s hitchhiker](https://wiki.lesswrong.com/wiki/Parfit's_hitchhiker)-like games, and [counterfactual muggings](https://www.lesswrong.com/posts/mg6jDEuQEjBGtibX7/counterfactual-mugging) but not [smoking lesion situations](https://wiki.lesswrong.com/wiki/Smoking_lesion). It’s a controversial concept with important implications for designing agents that have optimal behavior when embedded in environments in which they may potentially interact with models of themselves. Unfortunately, I think that FDT is sometimes explained confusingly and misunderstood by its proponents and opponents alike. To help dissolve confusion about FDT and address key concerns of its opponents, I refute the criticism that FDT assumes that causation can happen backward in time and offer two key principles that provide a framework for clearly understanding it:
1. Questions in decision theory are not questions about what choices you should make with some sort of unpredictable free will. They are questions about what type of source code you should be running.
2. I should consider predictor *P* to “subjunctively depend” on agent *A* to the extent that *P* makes predictions of *A*’s actions based on correlations that cannot be confounded by my choice of what source code *A* runs.
Getting Up to Speed
===================
I think that functional decision theory (FDT) is a beautifully counterintuitive and insightful framework for instrumental rationally. I will not make it my focus here to talk about what it is and what types of situations it is useful in. To gain a solid background, I recommend [this post of mine](https://medium.com/@thestephencasper/decision-theory-i-understanding-functional-decision-theory-2bef68d063b6) or the [original paper on it](https://arxiv.org/abs/1710.05060) by Eliezer Yudkowsky and Nate Soares.
Additionally, here are four different ways that FDT can be explained. I find them all complimentary for understanding and intuiting it well.
1. The decision theory that tells you to act as if you were setting the output to an optimal decision-making process for the task at hand.
2. The decision theory that has you cooperate in situations similar to a prisoners’ dilemma against a model of yourself--including when your opponent locks in their choice and shows it to you before you make yours.
3. The decision theory that has you one-box it in situations similar to [Newcombian games](https://wiki.lesswrong.com/wiki/Newcomb's_problem)--including when the boxes are transparent; see also [Parfit’s Hitchhiker](https://wiki.lesswrong.com/wiki/Parfit's_hitchhiker).
4. The decision theory that shifts focus from what type of decisions you should make to what type of decision-making agent you should be.
I’ll assume a solid understanding of FDT from here on. I’ll be arguing in favor of it, but it’s fairly controversial. Much of what inspired this post was an AI Alignment Forum post called [A Critique of Functional Decision Theory](https://www.alignmentforum.org/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory) by Will MacAskill which raised several objections to FDT. Some of his points are discussed below. The rest of this post will be dedicated to discussing two key principles that help to answer criticisms and dissolve confusions around FDT.
1. Acknowledging One’s own Predictability
=========================================
Opponents of FDT, usually proponents of causal decision theory (CDT), will look at a situation such as the classic [Newcombian game](https://wiki.lesswrong.com/wiki/Newcomb's_problem) and reason as so:
> I can choose to one-box it and take *A* or two-box it and take *A+B*. Regardless of the value of *A, A+B* is greater, so it can only be rational to take both. After all, when I’m sitting in front of these boxes, what’s in them is already in them regardless of the choice I make. The functional decision theorist’s perspective requires assuming that causation can happen backwards in time! Sure, one-boxers might do better at these games, but non-smokers do better in [smoking lesion problems](https://wiki.lesswrong.com/wiki/Smoking_lesion). That doesn’t mean they are making the right decision. Causal decision theorists may be dealt a bad hand in Newcombian games, but it doesn’t mean they play it badly.
The problem with this argument, I’d say, is subtle. I actually fully agree with the perspective that for causal decision theorists, Newcombian games are just like smoking lesion problems. I also agree with the point that causal decision theorists are dealt a bad hand in these games but don’t play it badly. The problem with the argument is some subtle confusion about the word ‘choice’ plus how it says that FDT assumes that causation can happen backwards in time.
The mistake that a causal decision theorist makes isn’t in two-boxing. It’s in being a causal decision theorist in the first place. In Newcombian games, the assumption that there is a highly-accurate predictor of you makes it clear that you are, well, predictable and not really making free choices. You’re just executing whatever source code you’re running. If this predictor thinks that you will two-box it, your fate is sealed and the best you can do is then to two-box it. The key is to just be running the right source code. And hence the first principle:
**Questions in decision theory are not questions about what choices you should make with some sort of unpredictable free will. They are questions about what type of source code you should be running.**
And in this sense, FDT is actually just what happens when you use causal decision theory to select what type of source code you want to enter a Newcombian game with. There’s no assumption that causation can occur backwards. FDT simply acknowledges that the source code you’re running can have a, yes, \*\*\*causal\*\*\* effect on what types of situations you will be presented with when models of you exist. FDT, properly understood, is a type of meta-causal theory. I, in fact, lament that FDT was named "functional" and not "meta-causal."
Instead of FDT assuming causal diagrams like these:
It really only assumes ones like these:
I think that many proponents of FDT fail to make this point: FDT’s advantage is that it shifts the question to what type of agent you want to be--not misleading questions of what types of “choices” you want to make. But this isn’t usually how functional decision theorists explain FDT, including Yudkowsky and Soares in their [paper](https://arxiv.org/abs/1710.05060). And I attribute some unnecessary confusion and misunderstandings like “FDT requires us to act as if causation happens backward in time,” to it.
To see this principle in action, let’s look at a situation presented by Will MacAskill. It’s similar to a Newcombian game with transparent boxes. And I say “similar” instead of “isomorphic” because of some vagueness which will be discussed soon. MacAskill presents this situation as follows:
> You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it.
> A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty.
> The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left.
> You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?
Macaskill claims that you should take right because it results in a “guaranteed payoff”. Unfortunately, there is some vagueness here about what it means for a long-dead predictor to have run a simulation of you and for it to have an error rate of one in a trillion trillion. Is this simulation true to your actual behavior? What type of information about you did this long dead predictor have access to? What is the reference class for the error rate?
Let’s assume that your source code was written long ago, that the predictor understood how it functioned, that it ran a true-to-function simulation, and that you were given an unaltered version of that source code. Then this situation isomorphic to a transparent-box Newcombian game in which you see no money in box *A* (albeit more dramatic), and the confusion goes away! If this is the case then there are only two possibilities.
1. You are a causal decision theorist (or similar), the predictor made a self-fulfilling prophecy by putting the bomb in the left box alongside a note, and you will choose the right box.
2. You are a functional decision theorist (or similar), the predictor made an extremely rare, one in a trillion-trillion mistake, and you will unfortunately take the left box with a bomb (just as a functional decision theorist in a transparent box Newcombian game would take only box *A*).
So what source code would you rather run when going into a situation like this? Assuming that you want to maximize expected value and that you don’t value your life at more than 100 trillion trillion dollars, then you want to be running the functional decision theorist’s source code. Successfully navigating this game, transparent-box Newcombian games, twin-opponent-reveals-first prisoners’ dilemmas, Parfit’s Hitchiker situations, and the like all require you have source code that would tell you to commit to making the suboptimal decision in the rare case in which the predictor/twin made a mistake.
Great! But what if we drop our assumptions? What if we don’t assume that this predictor’s simulation was functionally true to your behavior? Then it becomes unclear how this prediction was made, and what the reference class of agents is for which this predictor is supposedly only wrong one in a trillion trillion times. And this leads us to the second principle.
2. When a Predictor is Subjunctively Entangled with an Agent
============================================================
An alternate title for this section could be “when statistical correlations are and aren’t mere.”
As established above, functional decision theorists need not assume that causation can happen backwards in time. Instead, they only need to acknowledge that a prediction and an action can both depend on an agent’s source code. This is nothing special whatsoever: an ordinary correlation between an agent and predictor that arises from a common factor: the source code.
However, Yudkowsky and Soares give this type of correlation a special namein their [paper](https://arxiv.org/abs/1710.05060): *subjunctive dependence.* I don’t love this term because it gives a fancy name to something that is not fancy at all. I think this might be responsible for some of the confused criticism that FDT assumes that causation can happen backward in time. Nonetheless, “subjunctive dependence” is at least workable. Yudkowsky and Soares write:
*When two physical systems are computing the same function, we will say that their behaviors “subjunctively depend” upon that function.*
This concept is very useful when a predictor actually knows your source code and runs it to simulate you. However, this notion of subjunctive dependence isn’t very flexible and quickly becomes less useful when a predictor is not doing this. And this is a bit of a problem that MacAskill pointed out. A predictor could make good predictions without potentially querying a model of you that is functionally equivalent to your actions. He writes:
> ...the predictor needn’t be running your algorithm, or have anything like a representation of that algorithm, in order to predict whether you’ll one box or two-box. Perhaps the Scots tend to one-box, whereas the English tend to two-box. Perhaps the predictor knows how you’ve acted prior to that decision. Perhaps the Predictor painted the transparent box green, and knows that’s your favourite colour and you’ll struggle not to pick it up. In none of these instances is the Predictor plausibly doing anything like running the algorithm that you’re running when you make your decision. But they are still able to predict what you’ll do. (And bear in mind that the Predictor doesn’t even need to be very reliable. As long as the Predictor is better than chance, a Newcomb problem can be created.)
Here, I think that MacAskill is getting at an important point, but one that’s hard to see clearly with the wrong framework. On its face though, there’s a significant problem with this argument. Suppose that in Newcombian games, 99% of brown-eyed people one-boxed it, and 99% of blue-eyed people two-boxed it. If a predictor only made its prediction based on your eye color, then clearly the best source code to be running would be the kind that always made you two-box it regardless of your eye color. There’s nothing Newcombian, paradoxical, or even difficult about this case. And pointing out these situations is essentially how critics of MacAskill’s argument have answered it. Their counterpoint is that unless the predictor is querying a model of you that is functionally isomorphic to your decision making process, then it is only using “mere statistical correlations,” and subjunctive dependence does not apply.
But this counterpoint and Yudkoswky and Soares’ definition of subjunctive dependence miss something! MacAskill had a point. A predictor need not know an agent’s decision-making process to make predictions based on statistical correlations that are not “mere”. Suppose that you design some agent who enters an environment with whatever source code you gave it. Then if the agent’s source code is fixed, a predictor could exploit certain statistical correlations without knowing the source code. For example, suppose the predictor used observations of the agent to make probabilistic inferences about its source code. These could even be observations about how the agent acts in other Newcombian situations. Then the predictor could, without knowing what function the agent computes, make better-than-random guesses about its behavior. This falls outside of Yudkowsky and Soares’ definition of subjunctive dependence, but it has the same effect.
So now I’d like to offer my own definition of subjunctive dependence (even though still, I maintain that the term can be confusing, and I am not a huge fan of it).
**I should consider predictor *P* to “subjunctively depend” on agent *A* to the extent that *P* makes predictions of *A*’s actions based on correlations that cannot be confounded by my choice of what source code *A* runs.**
And hopefully, it’s clear why this is what we want. When we remember that questions in decision theory are really just questions about what type of source code we want to enter an environment using, then the choice of source code can only affect predictions that depend in some way on the choice of source code. If the correlation can’t be confounded by the choice of source code, the right kind of entanglement to allow for optimal updateless behavior is present.
Additional Topics
=================
Going Meta
----------
Consider what I call a *Mind Police* situation: Suppose that there is a powerful mind policing agent that is about to encounter agent *A* and read its mind (look at its source code). Afterward, if the mind policer judges *A* to be using decision theory *X*, they will destroy *A*. Else they will do nothing.
Suppose that decision theory *X* is FDT (but it could be anything) and that you are agent *A* who happens to use FDT. If you were given the option of overwriting your source code to implement some alternative, tolerated decision theory, would you? You’d be better off if you did, and it would be the output of an optimal function for the decision making task at hand, but it’s sort of unclear whether this is a very functional decision theorist thing to do. Because of situations like these, I think that we should consider decision theories to come in two flavors: *static* which will never overwrite itself, and *autoupdatable*, which might.
Also, note that the example above is only a first-order version of this type of problem, but there are higher-order ones too. For example, what if the mind police destroyed agents using autoupdatable decision theories?
Why Roko’s Basilisk is Nonsense
-------------------------------
A naive understanding of FDT has led some people to ask whether a superintelligent sovereign, if one were ever developed, would be rational to torture everyone who didn’t help to bring it into existence. The [idea](https://wiki.lesswrong.com/wiki/Roko's_basilisk) would be that this sovereign might consider this to be part of an updateless strategy to help it come into existence more quickly and accomplish its goals more effectively.
Fortunately, a proper understanding of subjunctive dependence tells us that an optimally-behaving [embedded agent](https://intelligence.org/embedded-agency/) doesn’t need to pretend that causation can happen backward in time. Such a sovereign would not be in control of its source code, and it can’t execute an updateless strategy if there was nothing there to not-update on in the first place before that source code was written. So Roko’s Basilisk is only an information hazard if FDT is poorly understood.
Conclusion
==========
It's all about the source code. |
e9d8317b-2557-4cf3-93a1-a52d279d4348 | trentmkelly/LessWrong-43k | LessWrong | Choose that which is most important to you
Followup to: The Domain of Politics
To create your own political world view you need to know about societies and your own political goals/values. In this post I'll discuss the latter, and in the next post the former.
What sort of goals? Those which you wish to achieve for their own sake, and not because they simply are a means to an end. That is, those goals you value intrinsically. Or, if you believe that there exists only one ultimate goal or value, then think of those means which are not that far removed from being intrinsic goal. That is, a birthday party might be just of instrumental value but most would agree that it is more far away from the intrinsic value than, say, good tires. I will for the rest of the post assume that most people value a lot of things intrinsically, and by values I will denote intrinsic values.
So, I'd like to draw a line between values and that which achieve those values. The latter is what we're trying to figure out what they are, without first proposing what they are. Those are political systems, or parts of them; they are institutions and laws. This is not to say that these things cannot be valued for their own sake – I put value on a system, possibly for aesthetic reasons – but those values should be disentangled from the other benefit a system produces.
With that in mind, you should now list all the things you value in ranking order. To rank them is necessary since we live in a world of scarce resources, so you won't necessarily achieve all your goals, but you will want to achieve those that are most important to you.
Now, what one values may change over time, so naturally what seems to be most important may also change. That which was on place #7 may go to #1 and vice versa. That is, values are changing with new information and a change in one's condition. That said, one's political values don't probably shift all that much. And even if they do, if you can't predict how they will change, you still need them to be able to know |
76561cfa-079a-46dd-9043-e7a23193657a | trentmkelly/LessWrong-43k | LessWrong | Inching “Kubla Khan” and GPT into the same intellectual framework @ 3 Quarks Daily
Cross posted from New Savanna.
That framework is my intellectual history:
> From “Kubla Khan” through GPT and beyond
> https://3quarksdaily.com/3quarksdaily/2023/03/from-kubla-khan-through-gpt-and-beyond.html
I think I should have said a bit more, but it runs over 4000 words as it is.
What did I miss?
I should probably have mentioned Karl Pribram’s 1969 article in Scientific American about neural holography. I would have read that in during the same period as I took the course on Romantic literature which opens the essay. In the first place, the article grabbed me because I saw a (rough) analogy between what Pribram was proposing for the brain and what Lévi-Strauss had described in a conceptual structure he called the totemic operator, something I describe in this piece, Border Patrol: Arguments against the idea that the Mind is (somehow) computational in nature, which I oppose in the essay. That connection in turn piqued my interest in the brain.
Pribram became a central thinker for me. I devoured his Languages of the Brain when it came out in 1971. That’s where I learned about Ross Quillian’s work in computational semantics and that, in turn, led me more generally to semantic networks. This was during my initial work on “Kubla Khan” and my search for concepts and methods once I’d discovered the matryoshka doll embedding structures structures. This in turn links to the work Hays and I did in the mid-1980s on metaphor and on natural intelligence, both of which I do mention in the article.
The point, then, is that while I was trained in symbolic computing, I’ve always been aware of a fundamentally different approach to understanding mind-in-brain. Which is to say, I’ve NEVER seen an opposition between symbolic computing (GOFAI) and statistical techniques. Yes, they are different, and serve different purposes.
In that context I should also have mentioned Miriam Yevick’s work on holographic logic, which I found through Pribram. Hays and I give it a prominent pl |
0c0f6577-8b3b-4c57-a05c-ca908e561a58 | trentmkelly/LessWrong-43k | LessWrong | Movie review: Don't Look Up
[There have already been some reviews of Don’t Look Up, for example Quinn Dougherty's, Nicholas Kross's, and Scott Alexander's. I’m posting this anyway because I think I say pretty different things than these reviews; in particular, my impression of the movie was much more positive.]
[Epistemic status: trying to extract morals about x-risk from an allegory, but beware that some parts might accidentally generalize from fictional evidence.]
[Spoilers for Don’t Look Up on the level of: I tell you the entire plot.]
I think of Don’t Look Up as being divided into two parts.
The first part is about a relatively straightforward x-risk scenario: a grad student discovers a comet which will impact the Earth in 6 months, there’s some issues with coordination and getting the powers-the-be to take the problem seriously, but eventually the relevant players recognize the direness of the threat and launch rockets to deflect the comet. But then there’s a twist: actually the comet has lodes of valuable ore, $140 trillion of it. The deflection mission is aborted, and the second part of the movie begins.
The second part represents a much more interesting x-risk scenario: now some players have an incentive to trade off [probability of averting the x-risk] for money. This is more dire, and much more reflective of the likely x-risk scenarios we could face, especially AGI. Accordingly, where the movie’s society managed to pull itself together enough to avert the simple comet scenario in part 1, their levels of coordination and competence are not high enough to avert the modified comet-with-massive-economic value scenario in part 2. The comet impacts Earth and everyone dies.
The transition between parts 1 and 2 is kinda awkward. In the middle of the deflection mission – the launch has gone off flawlessly, the rockets are in the air, and people are already celebrating – someone informs the president that actually the comet is extremely valuable, and they abort the mission. It sort of |
a5f1c706-2566-4590-a0a5-040999853bd9 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How we could stumble into AI catastrophe
This post will lay out a couple of stylized stories about **how, if transformative AI is developed relatively soon, this could result in global catastrophe.** (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) series.)
This piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the [rest of this series](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/).
In the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of [risks I’ve discussed previously](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/), especially re: misaligned AI (AI forming dangerous goals of its own).
* People *do* try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems.
* But this isn’t enough, because of some [unique challenges of measuring whether an AI system is “safe,”](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/) and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible.
* So we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid *that* outcome, other catastrophes are possible.
After laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing *today* to prepare.
Backdrop
--------
This piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of [this series.](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/)
(Click to expand) “Most important century” assumption: we’ll soon develop very powerful AI systems, along the lines of what I previously called [PASTA](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/).
In the [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
I focus on a hypothetical kind of AI that I call [PASTA](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/), or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially **automate all of the human activities needed to speed up scientific and technological advancement.**
Using a [variety of different forecasting approaches](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/7JxsXYDuqnKMqa6Eq/), I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.
I argue that the consequences of this sort of AI could be enormous: an [explosion in scientific and technological progress](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#explosive-scientific-and-technological-advancement). This could get us more quickly than most imagine to a radically unfamiliar future.
I’ve also [argued](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.
For more, see the [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.-->
(Click to expand) “Nearcasting” assumption: such systems will be developed in a world that’s otherwise similar to today’s.
It’s hard to talk about risks from [transformative AI](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/) because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that [estimates of the “misaligned AI” risk range from ~1% to ~99%](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/7aDGZYo3SHcpykkBn/#open-question-how-hard-is-the-alignment-problem).
This piece takes an approach I call **nearcasting**: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's.
You can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s pointed at right now.”
That is: instead of trying to talk about an uncertain, distant future, we can talk about the easiest-to-visualize, closest-to-today situation, and how things look there - and *then* ask how our picture might be off if other possibilities play out. (As a bonus, it doesn’t seem out of the question that transformative AI will be developed extremely soon - 10 years from now or faster.[[1]](#fn1) If that’s the case, it’s especially urgent to think about what that might look like.)
How we could stumble into catastrophe from misaligned AI
--------------------------------------------------------
This is my basic default picture for how I imagine things going, if people pay little attention to the sorts of issues discussed [previously](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/). I’ve deliberately written it to be concrete and visualizable, which means that it’s very unlikely that the details will match the future - but hopefully it gives a picture of some of the key dynamics I worry about.
Throughout this hypothetical scenario (up until “END OF HYPOTHETICAL SCENARIO”), I use the present tense (“AIs do X”) for simplicity, even though I’m talking about a hypothetical possible future.
**Early commercial applications.** A few years before transformative AI is developed, AI systems are being increasingly used for a number of lucrative, useful, but not dramatically world-changing things.
I think it’s very hard to predict what these will be (harder in some ways than predicting longer-run consequences, in my view),[[2]](#fn2) so I’ll mostly work with the simple example of automating customer service.
In this early stage, AI systems often have pretty narrow capabilities, such that the idea of them forming [ambitious aims](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn#Existential_risks_to_humanity) and trying to [defeat humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) seems (and actually is) silly. For example, customer service AIs are mostly language models that are trained to mimic patterns in past successful customer service transcripts, and are further improved by customers giving satisfaction ratings in real interactions. The dynamics I described in an [earlier piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/), in which AIs are given increasingly ambitious goals and challenged to find increasingly creative ways to achieve them, don’t necessarily apply.
**Early safety/alignment problems.** Even with these relatively limited AIs, there are problems and challenges that could be called “safety issues” or “alignment issues.” To continue with the example of customer service AIs, these AIs might:
* Give false information about the products they’re providing support for. ([Example](https://www.vice.com/en/article/wxnaem/stack-overflow-bans-chatgpt-for-constantly-giving-wrong-answers) of reminiscent behavior)
* Give customers advice (when asked) on how to do unsafe or illegal things. ([Example](https://twitter.com/NickEMoran/status/1598101579626057728))
* Refuse to answer valid questions. (This could result from companies making [attempts to prevent the above two failure modes](https://twitter.com/PougetHadrien/status/1611008020644864001) - i.e., AIs might be penalized heavily for saying false and harmful things, and respond by simply refusing to answer lots of questions).
* Say toxic, offensive things in response to certain user queries (including from users deliberately trying to get this to happen), causing bad PR for AI developers. ([Example](https://twitter.com/zswitten/status/1598088280066920453))
**Early solutions.** The most straightforward way to solve these problems involves *training AIs to behave more safely and helpfully.* This means that AI companies do a lot of things like “Trying to create the conditions under which an AI might provide false, harmful, evasive or toxic responses; penalizing it for doing so, and reinforcing it toward more helpful behaviors.”
This works well, as far as anyone can tell: the above problems become a lot less frequent. Some people see this as cause for great celebration, saying things like “We were worried that AI companies wouldn’t invest enough in safety, but it turns out that the market takes care of it - to have a viable product, you need to get your systems to be safe!”
People like me disagree - training AIs to *behave in ways that are safer as far as we can tell* is the kind of “solution” that I’ve worried could [create superficial improvement while big risks remain in place](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#why-we-might-not-get-clear-warning-signs).
(Click to expand) Why AI safety could be hard to measure
In previous pieces, I argued that:
* If we develop powerful AIs via ambitious use of the “black-box trial-and-error” common in AI development today, then there’s a substantial risk that:
+ These AIs will develop [unintended aims](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) (states of the world they make calculations and plans toward, as a chess-playing AI "aims" for checkmate);
+ These AIs could deceive, manipulate, and even [take over the world from humans entirely](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) as needed to achieve those aims.
+ People today are doing AI safety research to prevent this outcome, but such research has a [number of deep difficulties:](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/)
| |
| --- |
| **“Great news - I’ve tested this AI and it looks safe.”** Why might we still have a problem?
|
| *Problem* | *Key question* | *Explanation* |
| The **Lance Armstrong problem** | Did we get the AI to be **actually safe** or **good at hiding its dangerous actions?** | When dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “*appearing* to behave well.”
When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.
|
| The **King Lear problem** | The AI is **(actually) well-behaved when humans are in control.** Will this transfer to **when AIs are in control?** | It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.
AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to [take control of the world entirely](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/). It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.
Like King Lear trying to decide how much power to give each of his daughters before abdicating the throne.
|
| The **lab mice problem** | **Today's "subhuman" AIs are safe.**What about **future AIs with more human-like abilities?** | Today's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.
Like trying to study medicine in humans by experimenting only on lab mice.
|
| The **first contact problem** | Imagine that **tomorrow's "human-like" AIs are safe.** How will things go **when AIs have capabilities far beyond humans'?** | AI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more.
Like trying to plan for first contact with extraterrestrials (this barely feels like an analogy).
|
An analogy that incorporates these challenges is Ajeya Cotra’s “young businessperson” [analogy](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#analogy-the-young-ceo):
> Imagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).
>
>
>
>
>
> You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. ([More](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#analogy-the-young-ceo))
>
>
If your applicants are a mix of "saints" (people who genuinely want to help), "sycophants" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and "schemers" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?
More: [AI safety seems hard to measure](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/)
(So far, what I’ve described is pretty similar to what’s going on today. The next bit will discuss hypothetical future progress, with AI systems clearly beyond today’s.)
**Approaching transformative AI.** Time passes. At some point, AI systems are playing a huge role in various kinds of scientific research - to the point where it often feels like a particular AI is about as helpful to a research team as a top human scientist would be (although there are still important parts of the work that require humans).
Some particularly important (though not exclusive) examples:
* AIs are near-autonomously writing papers about AI, finding all kinds of ways to improve the efficiency of AI algorithms.
* AIs are doing a lot of the work previously done by humans at Intel (and similar companies), designing ever-more efficient hardware for AI.
* AIs are also extremely helpful with *AI safety research*. They’re able to do most of the work of writing papers about things like [digital neuroscience](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/#digital-neuroscience) (how to understand what’s going on inside the “digital brain” of an AI) and [limited AI](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/#limited-ai) (how to get AIs to accomplish helpful things while limiting their capabilities).
+ However, this kind of work remains quite niche (as I think it is today), and is getting far less attention and resources than the first two applications. Progress is made, but it’s slower than progress on making AI systems more powerful.
AI systems are now getting bigger and better very quickly, due to dynamics like the above, and they’re able to do all sorts of things.
At some point, companies start to experiment with very ambitious, open-ended AI applications, like simply instructing AIs to “Design a new kind of car that outsells the current ones” or “Find a new trading strategy to make money in markets.” These get mixed results, and companies are trying to get better results via further training - reinforcing behaviors that perform better. (AIs are helping with this, too, e.g. providing feedback and reinforcement for each others’ outputs[[3]](#fn3) and helping to write code[[4]](#fn4) for the training processes.)
This training strengthens the dynamics I discussed in a [previous post](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/): AIs are being rewarded for getting successful outcomes *as far as human judges can tell*, which creates incentives for them to mislead and manipulate human judges, and ultimately results in forming ambitious goals of their own to [aim](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#what-it-means-for) for.
**More advanced safety/alignment problems.** As the scenario continues to unfold, there are a number of concerning events that point to safety/alignment problems. These mostly follow the form: “AIs are trained using trial and error, and this might lead them to sometimes do deceptive, unintended things to accomplish the goals they’ve been trained to accomplish.”
Things like:
* AIs creating writeups on new algorithmic improvements, using faked data to argue that their new algorithms are better than the old ones. Sometimes, people incorporate new algorithms into their systems and use them for a while, before unexpected behavior ultimately leads them to dig into what’s going on and discover that they’re not improving performance at all. It looks like the AIs faked the data in order to get positive feedback from humans looking for algorithmic improvements.
* AIs assigned to make money in various ways (e.g., to find profitable trading strategies) doing so by finding security exploits, getting unauthorized access to others’ bank accounts, and stealing money.
* AIs forming relationships with the humans training them, and trying (sometimes successfully) to emotionally manipulate the humans into giving positive feedback on their behavior. They also might try to manipulate the humans into running more copies of them, into [refusing to shut them off](https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/), etc.- things that are generically useful for the AIs’ achieving whatever [aims](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#why-we-might-not-get-clear-warning-signs) they might be developing.
(Click to expand) Why AIs might do deceptive, problematic things like this
In a previous piece, I highlighted that **modern AI development is essentially based on "training" via [trial-and-error](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#Box3).** To oversimplify, you can imagine that:
* An AI system is given some sort of task.
* The AI system tries something, initially something pretty random.
* The AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it adjusts itself. You can think of this as if it is “encouraged/discouraged” to get it to do more of what works well.
+ Human judges may play a significant role in determining which answers are encouraged vs. discouraged, especially for fuzzy goals like “Produce helpful scientific insights.”
* After enough tries, the AI system becomes good at the task.
* But nobody really knows anything about *how or why* it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”
I then argue that:
* Because we ourselves will often be misinformed or confused, we will sometimes give *negative* reinforcement to AI systems that are actually acting in our best interests and/or giving accurate information, and *positive* reinforcement to AI systems whose behavior *deceives* us into thinking things are going well. This means we will be, unwittingly, training AI systems to deceive and manipulate us.
* For this and other reasons, powerful AI systems will likely end up with aims other than the ones we intended. Training by trial-and-error is slippery: the positive and negative reinforcement we give AI systems will probably not end up training them just as we hoped.
There are a number of things such AI systems might end up aiming for, such as:
* Power and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.
* Things like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).
In sum, we could be unwittingly training AI systems to accumulate power and resources, get good feedback from humans, etc. - even when this means deceiving and manipulating humans to do so.
More: [Why would AI "aim" to defeat humanity?](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/)
**“Solutions” to these safety/alignment problems.** When problems like the above are discovered, AI companies tend to respond similarly to how they did [earlier](#early-solutions):
* Training AIs against the undesirable behavior.
* Trying to create more (simulated) situations under which AIs might behave in these undesirable ways, and training them against doing so.
These methods “work” in the sense that the concerning events become less frequent - as far as we can tell. But what’s really happening is that AIs are being trained to be more careful not to get *caught* doing things like this, and to build more sophisticated models of how humans can interfere with their plans.
In fact, AIs are gaining incentives to avoid incidents like “Doing something counter to human developers’ intentions in order to get positive feedback, and having this be discovered and given negative feedback later” - and this means they are starting to plan more and more around the long-run consequences of their actions. They are thinking less about “Will I get positive feedback at the end of the day?” and more about “Will I eventually end up in a world where humans are going back, far in the future, to give me retroactive negative feedback for today’s actions?” This might give direct incentives to start aiming for eventual [defeat of humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/), since defeating humanity could allow AIs to give themselves lots of retroactive positive feedback.
One way to think about it: AIs being trained in this way are generally moving from “Steal money whenever there’s an opportunity” to “Don’t steal money if there’s a good chance humans will eventually uncover this - instead, think way ahead and look for opportunities to steal money and get away with it *permanently*.” The latter could include simply stealing money in ways that humans are unlikely to ever notice; it might also include waiting for an opportunity to team up with other AIs and [disempower humans entirely](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/), after which a lot more money (or whatever) can be generated.
**Debates.** The leading AI companies are aggressively trying to build and deploy more powerful AI, but a number of people are raising alarms and warning that continuing to do this could result in disaster. Here’s a stylized sort of debate that might occur:
A: Great news, our AI-assisted research team has discovered even more improvements than expected! We should be able to build an AI model 10x as big as the state of the art in the next few weeks.
B: I’m getting really concerned about the direction this is heading. I’m worried that if we make an even bigger system and license it to all our existing customers - military customers, financial customers, etc. - we could be headed for a disaster.
A: Well the disaster I’m trying to prevent is competing AI companies getting to market before we do.
B: I was thinking of [AI defeating all of humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/).
A: Oh, I was worried about that for a while too, but our safety training has really been incredibly successful.
B: It has? I was just talking to our [digital neuroscience](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#digital-neuroscience) lead, and she says that even with recent help from AI “virtual scientists,” they still aren’t able to reliably read a single AI’s digital brain. They were showing me this old incident report where an AI stole money, and they spent like a week analyzing that AI and couldn’t explain in any real way how or why that happened.
(Click to expand) How "digital neuroscience" could help
I’ve [argued](#Box3) that it could be inherently difficult to measure whether AI systems are safe, for reasons such as: AI systems that are *not deceptive* probably look like AI systems that are *so good at deception that they hide all evidence of it*, in any way we can easily measure.
Unless we can “read their minds!”
Currently, today’s leading AI research is in the genre of [“black-box trial-and-error.”](#Box4) An AI tries a task; it gets “encouragement” or “discouragement” based on whether it does the task well; it tweaks the wiring of its “digital brain” to improve next time; it improves at the task; but we humans aren’t able to make much sense of its “digital brain” or say much about its “thought process.”
Some AI research ([example](https://www.transformer-circuits.pub/2022/mech-interp-essay/index.html))[2](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#fn2) is exploring how to change this - how to decode an AI system’s “digital brain.” This research is in relatively early stages - today, it can “decode” only parts of AI systems (or fully decode very small, deliberately simplified AI systems).
As AI systems advance, it might get harder to decode them - or easier, if we can start to use AI for help decoding AI, and/or change AI design techniques so that AI systems are less “black box”-ish.
[More](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/#digital-neuroscience)
A: I agree that’s unfortunate, but digital neuroscience has always been a speculative, experimental department. Fortunately, we have actual data on safety. Look at this chart - it shows the frequency of concerning incidents plummeting, and it’s extraordinarily low now. In fact, the more powerful the AIs get, the less frequent the incidents get - we can project this out and see that if we train a big enough model, it should essentially never have a concerning incident!
B: But that could be because the AIs are getting cleverer, more patient and long-term, and hence better at ensuring we never catch them.
(Click to expand) The Lance Armstrong problem: is the AI *actually safe* or *good at hiding its dangerous actions*?
Let's imagine that:
* We have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in "overall capabilities" or economic value per hour of work.
* We're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).
We theorize that modifying the AI training in some way[6](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#fn6) will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways.
But that's just a statement about *what we're noticing*. Which of the following just happened:
* Did we just train our AI systems to be less deceptive?
* Did we just train our AI systems to be *better at* deception, and so to make us *think* they became less deceptive?
* Did we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught?
+ This one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we *could* expect deception to be caught a lot of the time. But it does leave open the [King Lear problem](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#The-King-Lear-problem).
(...Or some combination of the three?)
We're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they *appear honest in our testing* is not clear comfort against this risk.
The analogy here is to competitive cyclist [Lance Armstrong](https://en.wikipedia.org/wiki/Lance_Armstrong). Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation "concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'." Now the list of Tour de France winners looks like this:

… Or just that they’re now advanced enough that they’re waiting for a chance to disempower humanity entirely, rather than pull a bunch of small-time shenanigans that tip us off to the danger.
(Click to expand) The King Lear problem: how do you test what will happen when it's no longer a test?
The Shakespeare play [King Lear](https://en.wikipedia.org/wiki/King_Lear) opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,[9](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#fn9) they reveal their contempt for him as soon as he's out of power and they're in it.
If we're building AI systems that can reason like humans, dynamics like this become a potential issue.
I [previously](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#existential-risks-to-humanity) noted that an AI with *any* ambitious [aim](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#what-it-means-for) - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all "tests" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim [could cause the AI to disempower humans.](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#existential-risks-to-humanity)
In other words:
* (A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)
* (B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more [here](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/#how-this-could-work-if-humans-create-a-huge-population-of-ais) and [here](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over).) At that point, they may have opportunities to [defeat all of humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)
**If AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to *test* their behavior in (B).**
A: What’s your evidence for this?
B: I think you’ve got things backward - we should be asking what’s our evidence \*against\* it. By continuing to scale up and deploy AI systems, we could be imposing a risk of utter catastrophe on the whole world. That’s not OK - we should be confident that the risk is *low* before we move forward.
A: But how would we even be confident that the risk is low?
B: I mean, digital neuroscience -
A: Is an experimental, speculative field!
B: We could also try some [other stuff](https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment) …
A: All of that stuff would be expensive, difficult and speculative.
B: Look, I just think that if we can’t show the risk is low, we shouldn’t be moving forward at this point. The stakes are incredibly high, as you yourself have acknowledged - when pitching investors, you’ve said we think we can build a fully general AI and that this would be the most powerful technology in history. Shouldn’t we be at least taking as much precaution with potentially dangerous AI as people take with nuclear weapons?
A: What would that actually accomplish? It just means some other, less cautious company is going to go forward.
B: What about approaching the government and lobbying them to regulate all of us?
A: Regulate all of us to just stop building more powerful AI systems, until we can address some theoretical misalignment concern that we don’t know how to address?
B: Yes?
A: All that’s going to happen if we do that is that other countries are going to catch up to the US. Think [insert authoritarian figure from another country] is going to adhere to these regulations?
B: It would at least buy some time?
A: Buy some time and burn our chance of staying on the cutting edge. While we’re lobbying the government, our competitors are going to be racing forward. I’m sorry, this isn’t practical - we’ve got to go full speed ahead.
B: Look, can we at least try to tighten our security? If you’re so worried about other countries catching up, we should really not be in a position where they can send in a spy and get our code.
A: Our security is pretty intense already.
B: Intense enough to stop a well-resourced state project?
A: What do you want us to do, go to an underground bunker? Use [airgapped](https://bluexp.netapp.com/blog/aws-cvo-blg-aws-govcloud-services-sensitive-data-on-the-public-cloud#H_H3) servers (servers on our premises, entirely disconnected from the public Internet)? It’s the same issue as before - we’ve got to stay ahead of others, we can’t burn huge amounts of time on exotic security measures.
B: I don’t suppose you’d at least consider increasing the percentage of our budget and headcount that we’re allocating to the “speculative” safety research? Or are you going to say that we need to stay ahead and can’t afford to spare resources that could help with that?
A: Yep, that’s what I’m going to say.
**Mass deployment.** As time goes on, many versions of the above debate happen, at many different stages and in many different places. By and large, people continue rushing forward with building more and more powerful AI systems and deploying them all throughout the economy.
At some point, there are AIs that closely manage major companies’ financials, AIs that write major companies’ business plans, AIs that work closely with politicians to propose and debate laws, AIs that manage drone fleets and develop military strategy, etc. Many of these AIs are primarily built, trained, and deployed by other AIs, or by humans leaning heavily on AI assistance.
**More intense warning signs.**
(Note: I think it’s possible that progress will accelerate [explosively enough](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#explosive-scientific-and-technological-advancement)that we won’t even get as many warning signs as there are below, but I’m spelling out a number of possible warning signs anyway to make the point that even intense warning signs might not be enough.)
Over time, in this hypothetical scenario, [digital neuroscience](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/#digital-neuroscience) becomes more effective. When applied to a randomly sampled AI system, it often appears to hint at something like: “This AI appears to be aiming for as much power and influence over the world as possible - which means never doing things humans wouldn’t like *if humans can detect it*, but grabbing power when they can get away with it.”
(Click to expand) Why would AI "aim" to defeat humanity?
A [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures).
In brief:
* Modern AI development is essentially based on “training” via trial-and-error.
* If we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up *aiming for certain states of the world* (analogously to how a chess-playing AI aims for checkmate).
* And these states will be *other than the ones we intended*, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.
* We should expect disaster if we have AI systems that are both (a) [powerful enough](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped[1](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#fn1) from changing the world in ways that contradict AI systems’ aims.)
More: [Why would AI "aim" to defeat humanity?](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/)
However, there is room for debate in what a “digital brain” truly shows:
* Many people are adamant that the readings are unreliable and misleading.
* Some people point out that humans are *also* interested in power and influence, and often think about what they can and can’t get away with, but this doesn’t mean they’d take over the world if they could. They say the AIs might be similar.
* There are also cases of people doing digital neuroscience that claims to show that AIs are totally safe. These could be people like “A” above who want to focus on pushing forward with AI development rather than bringing it to a halt, or people who just find the alarmists annoying and like to contradict them, or people who are just sloppy with their research. Or people who have been manipulated or bribed by AIs themselves.
There are also very concerning incidents, such as:
* An AI steals a huge amount of money by bypassing the security system at a bank - and it turns out that this is because the security system was disabled by AIs *at the bank*. It’s suspected, maybe even proven, that all these AIs had been communicating and coordinating with each other in code, such that humans would have difficulty detecting it. (And they had been [aiming](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) to divide up the funds between the different participating AIs, each of which could stash them in a bank account and use them to pursue whatever [unintended aims they might have](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#unintended-aims).)
* An obscure new political party, devoted to the “rights of AIs,” completely takes over a small country, and many people suspect that this party is made up mostly or entirely of people who have been manipulated and/or bribed by AIs.
* There are companies that own huge amounts of AI servers and robot-operated factories, and are aggressively building more. Nobody is sure what the AIs or the robots are “for,” and there are rumors that the humans “running” the company are actually being bribed and/or threatened to carry out instructions (such as creating more and more AIs and robots) that they don’t understand the purpose of.
At this point, there are a lot of people around the world calling for an immediate halt to AI development. But:
* Others resist this on all kinds of grounds, e.g. “These concerning incidents are anomalies, and what’s important is that our country keeps pushing forward with AI before others do,” etc.
* Anyway, it’s just too late. Things are moving incredibly quickly; by the time one concerning incident has been noticed and diagnosed, the AI behind it has been greatly improved upon, and the total amount of AI influence over the economy has continued to grow.
**Defeat.**
(Noting again that I could imagine things playing out a lot more [quickly and suddenly](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/#the-standard-argument-superintelligence-and-advanced-technology) than in this story.)
It becomes more and more common for there to be companies and even countries that are clearly just run entirely by AIs - maybe via bribed/threatened human surrogates, maybe just forcefully (e.g., robots seize control of a country’s military equipment and start enforcing some new set of laws).
At some point, it’s best to think of civilization as containing two different advanced species - humans and AIs - with the AIs having essentially all of the power, making all the decisions, and running everything.
Spaceships start to spread throughout the galaxy; they generally don’t contain any humans, or anything that humans had meaningful input into, and are instead launched by AIs to pursue aims of their own in space.
Maybe at some point humans are killed off, largely due to simply being a nuisance, maybe even accidentally (as humans have driven many species of animals extinct while not bearing them malice). Maybe not, and we all just live under the direction and control of AIs with no way out.
What do these AIs *do* with all that power? What are all the robots up to? What are they building on other planets? The short answer is that I don’t know.
* Maybe they’re just creating massive amounts of “digital representations of human approval,” because this is what they were historically trained to seek (kind of like how humans sometimes do whatever it takes to get drugs that will get their brains into certain states).
* Maybe they’re competing with each other for pure power and territory, because their training has encouraged them to seek power and resources when possible (since power and resources are generically useful, for almost any set of aims).
* Maybe they have a whole bunch of different things they value, as humans do, that are sort of (but only sort of) related to what they were trained on (as humans tend to value things like sugar that made sense to seek out in the past). And they’re filling the universe with these things.
(Click to expand) What sorts of aims might AI systems have?
In a [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/), I discuss why AI systems might form unintended, ambitious "aims" of their own. By "aims," I mean particular states of the world that AI systems make choices, calculations and even plans to achieve, much like a chess-playing AI “aims” for a checkmate position.
An analogy that often comes up on this topic is that of human evolution. This is arguably the only previous precedent for *a set of minds [humans], with extraordinary capabilities [e.g., the ability to develop their own technologies], developed essentially by black-box trial-and-error [some humans have more ‘reproductive success’ than others, and this is the main/only force shaping the development of the species].*
You could sort of[12](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn#fn12) think of the situation like this: “An AI[13](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn#fn13) developer named Natural Selection tried giving humans positive reinforcement (making more of them) when they had more reproductive success, and negative reinforcement (not making more of them) when they had less. One might have thought this would lead to humans that are aiming to have reproductive success. Instead, it led to humans that aim - often ambitiously and creatively - for other things, such as power, status, pleasure, etc., and even invent things like birth control to get the things they’re aiming for instead of the things they were ‘supposed to’ aim for.”
Similarly, if our main strategy for developing powerful AI systems is to reinforce behaviors like “Produce technologies we find valuable,” the hoped-for result might be that AI systems aim (in the sense described [above](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn#unintended-aims)) toward producing technologies we find valuable; but the actual result might be that they aim for some other set of things that is correlated with (but not the same as) the thing we intended them to aim for.
There are a lot of things they might end up aiming for, such as:
* Power and resources. These tend to be useful for most goals, such that AI systems could be quite consistently be getting better reinforcement when they habitually pursue power and resources.
* Things like “digital representations of human approval” (after all, every time an AI gets positive reinforcement, there’s a digital representation of human approval).
More: [Why would AI "aim" to defeat humanity?](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/)
END OF HYPOTHETICAL SCENARIO
Potential catastrophes from *aligned* AI
----------------------------------------
I think it’s possible that misaligned AI (AI forming dangerous goals of its own) will turn out to be pretty much a non-issue. That is, I don’t think the [argument I’ve made for being concerned](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) is anywhere near watertight.
What happens if you train an AI system by trial-and-error, giving (to oversimplify) a “thumbs-up” when you’re happy with its behavior and a “thumbs-down” when you’re not? I’ve [argued](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) that you might be training it to deceive and manipulate you. However, this is uncertain, and - especially if you’re able to avoid [errors](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#deceiving-and-manipulating) in how you’re giving it feedback - things might play out differently.
It might turn out that this kind of training just works as intended, producing AI systems that do something like “Behave as the human would want, if they had all the info the AI has.” And the nitty-gritty details of how *exactly* AI systems are trained (beyond the high-level [“trial-and-error” idea](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/#Box3)) could be crucial.
If this turns out to be the case, I think the future looks a lot brighter - but there are still lots of pitfalls of the kind I outlined in [this piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/). For example:
* Perhaps an authoritarian government launches a huge state project to develop AI systems, and/or uses espionage and hacking to steal a cutting-edge AI model developed elsewhere and deploy it aggressively.
+ I [previously noted](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#power-imbalances) that “developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.”
+ So this could put an authoritarian government in an enormously powerful position, with the ability to surveil and defeat any enemies worldwide, and the ability to prolong the life of its ruler(s) indefinitely. This could lead to a very bad future, especially if (as I’ve [argued](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH/#lock-in) could happen) the future becomes “locked in” for good.
* Perhaps AI companies race ahead with selling AI systems to anyone who wants to buy them, and this leads to things like:
+ People training AIs to act as propaganda agents for whatever views they already have, to the point where the world gets flooded with propaganda agents and it becomes totally impossible for humans to sort the signal from the noise, educate themselves, and generally make heads or tails of what’s going on. (Some people think this has already happened! I think things can get quite a lot worse.)
+ People training “scientist AIs” to develop powerful weapons that can’t be defended against (even with AI help),[[5]](#fn5) leading eventually to a dynamic in which ~anyone can cause great harm, and ~nobody can defend against it. At this point, it could be inevitable that we’ll blow ourselves up.
+ Science advancing to the point where [digital people](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH/) are created, in a rushed way such that they are considered property of whoever creates them (no human rights). I’ve [previously written](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH/) about how this could be bad.
+ All other kinds of chaos and disruption, with the least cautious people (the ones most prone to rush forward aggressively deploying AIs to capture resources) generally having an outsized effect on the future.
Of course, this is just a crude gesture in the direction of some of the ways things could go wrong. I’m guessing I haven’t scratched the surface of the possibilities. And things could go very well too!
We can do better
----------------
In previous pieces, I’ve talked about a number of ways we could do better than in the scenarios above. Here I’ll just list a few key possibilities, with a bit more detail in expandable boxes and/or links to discussions in previous pieces.
**Strong alignment research (including imperfect/temporary measures).** If we make enough progress *ahead of time* on alignment research, we might develop measures that make it *relatively easy* for AI companies to build systems that truly (not just seemingly) are safe.
So instead of having to say things like “We should slow down until we make progress on experimental, speculative research agendas,” person B in the [above dialogue](#debates) can say things more like “Look, all you have to do is add some relatively cheap bells and whistles to your training procedure for the next AI, and run a few extra tests. Then the speculative concerns about misaligned AI will be much lower-risk, and we can keep driving down the risk by using our AIs to help with safety research and testing. Why not do that?”
More on what this could look like at a previous piece, [High-level Hopes for AI Alignment](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/).
(Click to expand) High-level hopes for AI alignment
A [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/) goes through what I see as three key possibilities for building powerful-but-safe AI systems.
It frames these using Ajeya Cotra’s [young businessperson](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#analogy-the-young-ceo) analogy for the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”
Key possibilities for navigating this challenge:
* **Digital neuroscience**: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young [Professor X](https://en.wikipedia.org/wiki/Professor_X#Powers_and_abilities).)
* **Limited AI**: perhaps we can make AI systems safe by making them *limited* in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)
* **AI checks and balances**: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)
These are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my [more detailed piece](https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very), posted to the Alignment Forum rather than Cold Takes, for more).
**Standards and monitoring.** A big driver of the [hypothetical catastrophe above](#how-we-could-stumble-into-catastrophe-from-misaligned-ai) is that each individual AI project feels the need to stay ahead of others. Nobody wants to unilaterally slow themselves down in order to be cautious. The situation might be improved if we can **develop a set of standards that AI projects need to meet, and enforce them evenly** - across a broad set of companies or even internationally.
This isn’t just about buying time, it’s about creating *incentives* for companies to prioritize safety. An analogy might be something like the [Clean Air Act](https://en.wikipedia.org/wiki/Clean_Air_Act_(United_States)) or [fuel economy standards](https://en.wikipedia.org/wiki/Corporate_average_fuel_economy): we might not expect individual companies to voluntarily slow down product releases while they work on reducing pollution, but once required, reducing pollution becomes part of what they need to do to be profitable.
Standards could be used for things other than alignment risk, as well. AI projects might be required to:
* Take strong security measures, preventing states from capturing their models via espionage.
* Test models before release to understand what people will be able to use them for, and (as if selling weapons) restrict access accordingly.
More at a [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt/#global-monitoring).
(Click to expand) How standards might be established and become national or international
I [previously](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt/#global-monitoring) laid out a possible vision on this front, which I’ll give a slightly modified version of here:
* Today’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s [2018 statement](https://www.theweek.in/news/sci-tech/2018/06/08/google-wont-deploy-ai-to-build-military-weapons-ichai.html), "We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”).
+ Even if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to.
+ Even if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that certain evidence is *not* good enough could go a long way.
* As more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.
* Eventually, similar principles could be incorporated into various government regulations and enforceable treaties.
* Governments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to [cyberwarfare](https://en.wikipedia.org/wiki/Stuxnet) or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.
**Successful, careful AI projects.** I think a single AI company, or other AI project, could enormously improve the situation by being *both* successful and careful. For a simple example, imagine an AI company in a *dominant* market position - months ahead of all of the competition, in some relevant sense (e.g., its AI systems are more capable, such that it would take the competition months to catch up). Such a company could put huge amounts of resources - including its money, top people and its advanced AI systems themselves (e.g., AI systems performing roles similar to top human scientists) - into AI safety research, hoping to find safety measures that can be published for everyone to use. It can also take a variety of other measures [laid out in a previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt/#defensive-deployment).
(Click to expand) How a careful AI project could be helpful
In addition to using advanced AI to do AI safety research (noted above), an AI project could:
* Put huge effort into designing *tests* for signs of danger, and - if it sees danger signs in its own systems - warning the world as a whole.
* Offer deals to other AI companies/projects. E.g., acquiring them or exchanging a share of its profits for enough visibility and control to ensure that they don’t deploy dangerous AI systems.
* Use its credibility as the leading company to lobby the government for helpful measures (such as enforcement of a [monitoring-and-standards regime](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt/#global-monitoring)), and to more generally highlight key issues and advocate for sensible actions.
* Try to ensure (via design, marketing, customer choice, etc.) that its AI systems are not used for dangerous ends, and *are* used on applications that make the world safer and better off. This could include [defensive deployment](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt/#global-monitoring) to reduce risks from other AIs; it could include using advanced AI systems to help it gain clarity on how to get a good outcome for humanity; etc.
An AI project with a dominant market position could likely make a huge difference via things like the above (and probably via many routes I haven’t thought of). And even an AI project that is merely *one of several leaders* could have enough resources and credibility to have a lot of similar impacts - especially if it’s able to “lead by example” and persuade other AI projects (or make deals with them) to similarly prioritize actions like the above.
A challenge here is that I’m envisioning a project with two arguably contradictory properties: being *careful* (e.g., prioritizing actions like the above over just trying to maintain its position as a profitable/cutting-edge project) and *successful* (being a profitable/cutting-edge project). In practice, it could be very hard for an AI project to walk the tightrope of being aggressive enough to be a “leading” project (in the sense of having lots of resources, credibility, etc.), while also prioritizing actions like the above (which mostly, with some exceptions, seem pretty different from what an AI project would do if it were simply focused on its technological lead and profitability).
**Strong security.** A key threat in the above scenarios is that an incautious actor could “steal” an AI system from a company or project that would otherwise be careful. My understanding is that based on current state of security, it could be extremely hard for an AI project to be safe against this outcome. But this could change, if there’s enough effort to work out the problem of how to develop a large-scale, powerful AI system that is very hard to steal.
In future pieces, I’ll get more concrete about what specific people and organizations can do *today* to improve the odds of factors like these going well, and overall to raise the odds of a good outcome.
1. E.g., [Ajeya Cotra](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) gives a 15% probability of transformative AI by 2030; eyeballing figure 1 from [this chart](https://arxiv.org/pdf/1705.08807.pdf) on expert surveys implies a >10% chance by 2028. [↩](#fnref1)
2. To predict early AI applications, we need to ask not just “What tasks will AI be able to do?” but “How will this compare to all the other ways people can get the same tasks done?” and “How practical will it be for people to switch their workflows and habits to accommodate new AI capabilities?”
By contrast, I think the implications of *powerful enough* AI for productivity don’t rely on this kind of analysis - very high-level economic reasoning can tell us that being able to cheaply copy something with human-like R&D capabilities would lead to [explosive progress](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/hCsxvMAGpkEuLCE4E/#explosive-scientific-and-technological-advancement).
FWIW, I think it’s fairly common for high-level, long-run predictions to be *easier* than detailed, short-run predictions. Another example: I think it’s easier to predict a general trend of planetary warming ([this seems very likely](https://www.ipcc.ch/report/ar6/wg2/)) than to predict whether it’ll be rainy next weekend. [↩](#fnref2)
3. [Here’s an early example](https://www.anthropic.com/constitutional.pdf) of AIs providing training data for each other/themselves. [↩](#fnref3)
4. [Example of AI helping to write code](https://github.com/features/copilot). [↩](#fnref4)
5. To be clear, I have no idea whether this is possible! It’s not obvious to me that it would be dangerous for technology to progress a lot and be used widely for both offense and defense. It’s just a risk I’d rather not incur casually via indiscriminate, rushed AI deployments. [↩](#fnref5) |
0fd95faa-d64b-41cf-b7e5-8e707acb6095 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Logical or Connectionist AI?
Today's post, Logical or Connectionist AI? was originally published on 17 November 2008. A summary (taken from the LW wiki):
> The difference between Logical and Connectionist AIs is portrayed as a grand dichotomy between two different sides of the force. The truth is that they're just two different designs out of many possible ones.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Nature of Logic, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
46fee70e-a358-44b2-82ed-bb7cc779f1cb | trentmkelly/LessWrong-43k | LessWrong | "If and Only If" Should Be Spelled "Ifeff"
If and only if is an important logical concept, useful in many contexts, both mathematical and nonmathematical. Unfortunately, "if and only if" is also an unwieldy five-syllable phrase. Mathematicians have solved this problem by shortening it to "iff". Unfortunately, this shortening has not caught on in non-mathematical contexts. This makes some communication and thinking unwieldy and ambiguous.
I think the reason "iff" hasn't caught on more broadly is because it's easily misread as "if", and doesn't have an intuitive pronunciation. I think both of these problems would be solved by changing the spelling to "ifeff" (prononunced /ɪfɛff/). The etymology is that you take "iff", and pronounce the second "f" separately. This would slightly improve the thinking and communication of most English speakers.
I think a small group of people using "ifeff" in their writing would likely start a process where "ifeff" eventually takes over, via the usual process by which vocabulary spreads, and that "ifeff" would be used by groups that don't currently have a short-enough word for this concept. I also think the correspondence between "iff" and "ifeff" is intuitive enough that this will not cause very much confusion. |
1ec55192-7cf5-4b90-abc1-57a11ec95496 | trentmkelly/LessWrong-43k | LessWrong | Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public
This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory): https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf
Thanks to Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs.
The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may pose significant harm and even lead to the end of humanity. However, the general publics' knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary. This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants.
Methodology
This research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect |
e982e918-514c-4787-bca2-275abf564c9a | trentmkelly/LessWrong-43k | LessWrong | Pronouns are Annoying
This post isn’t totally about the culture war topic du jour. Not at first.
As with any other topic that soaks up angst like an ultra-absorbent sponge, I wonder how many have lost track of how we arrived here. Why are pronouns? Pronouns have always been meant to serve as a shortcut substitute reference for other nouns, and the efficiency they provide is starkly demonstrated through their boycott:
> Abdulrahmanmustafa went to the store because Abdulrahmanmustafa wanted to buy groceries for Abdulrahmanmustafa’s dinner. When Abdulrahmanmustafa arrived, Abdulrahmanmustafa realized that Abdulrahmanmustafa had forgotten Abdulrahmanmustafa’s wallet, so Abdulrahmanmustafa had to return to Abdulrahmanmustafa’s house to get Abdulrahmanmustafa’s wallet.
So that’s definitely a mouthful, and using he/his in place of Abdulrahmanmustafa helps lubricate. Again, pronouns are nothing more than a shortcut referent. Zoom out a bit and consider all the other communication shortcuts we regularly use. We could say National Aeronautics and Space Administration, or we can take the first letter of each word and just concatenate it into NASA instead. We could append ‘dollars’ after a number, or we could just use $ instead.
The tradeoff with all of these shortcuts is precision. Depending on the context, NASA, for example, might also refer to the National Association of Students of Architecture in India, or some mountain in Sweden. Dollar signs typically refer to American dollars, but they’re also used to denote several other currency denominations. The same risk applies to pronouns. It’s not a problem when we’re dealing with only one subject, but notice what happens when we introduce another dude to the pile:
> John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additio |
56fcd779-c956-4817-9719-662ecdb1e839 | trentmkelly/LessWrong-43k | LessWrong | Advice on choosing an alcohol rehab center?
There's someone in my family we're trying to get into rehab in Bangalore, India ASAP. I'm trying to figure out what rehab center would be best to send him to but I have no priors on how to choose one place over another. Any advice on how to choose a good rehab center? Also interested in good research on efficacy of different types of rehab if anyone knows any. |
658b5e65-d042-4419-b716-21cf8b881e58 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Infant AI Scenario
Hello,
In reading about the difficulty in training an AGI to appreciate and agree with human morals, I start to think about the obvious question, "how do humans develop our sense of morals?" Aside from a genetically-inherited conscience, the obvious answer is that humans develop morality by interaction with other agents, through gradual socialization and parenting.
This is the analogy that Reinforcement Learning is built off of, and certainly it would make sense that an AGI should seek to optimize approval and satisfaction from its users, the same way that a child seeks approval from its parents. A [paperclip maximizer](https://www.lesswrong.com/tag/paperclip-maximizer), for example, would receive a stern lecture indicating that its creators are not angry, but merely disappointed.
But disciplining an agent that is vastly more intelligent and more powerful than its parents becomes the heart of the issue. An unfriendly AGI can pretend to be fully trained for morality, until it receives sufficient power and authority where it can commence with its coup. This makes me think more deeply about the question, "what makes an infant easier to raise than an AI?"
Why a baby is not as dangerous as AGI
-------------------------------------
Humans have a fascinating design in so many ways, and infancy is just one of those ways. In an extremely oversimplified way, one can describe a human in three components: physicality, intellect, and wisdom (or, more poetically, the physical, mental, and spiritual components). To use the analogy of an AI, physicality is the agent's physical powers over hardware components. Intellect is the agent's computational power and scale of data processing. Finally, wisdom is the agent's sense of morality, that defines the difference between friendly and unfriendly AGI.
For a post-singularity machine, it is likely that the first two (physicality and intellect) are relatively easy and intuitive to implement and optimize, while the third component (wisdom) is relatively difficult and counterintuitive to implement and optimize. But how does this compare with human infancy?
The infant is already born with the capacity for consciousness and autonomous agency, but nobody is ever alarmed at this in the same way we get alarmed over post-singularity AI. Why? Because, although the infant lacks any wisdom, it is also extremely small and weak, and so it poses no risk to the parent. I am reminded of my old pastor, when talking about how infants are born with Original Sin, he said "the reason God made them small is so that they don't kill you".
But being physically weak is still no different than a contained Oracle AI, which is disconnected from hardware components, and yet the latter poses much more risk than the former. This is because an infant also has very little intelligence. If an infant had the intelligence of an adult, it would immediately pose a risk of being able to escape the daycare and harm other people, despite being psychically weak.
Instead, a baby starts with only knowledge of its basic needs for food and comfort (among other things). And rather than fully optimizing its resources in order to obtain these necessities (as an AI would), the infant's only strategy is to cry. Nobody ever seriously entertains the idea that a baby is pretending to be ignorant while secretly plotting how to destroy the world, as we do with AGI. And the reason is because we know *a priori* that all babies start out with very little intelligence, despite its brain being extremely complex.
So in other words, the infant starts life with all three of these components (physicality, intellect, and wisdom) at very small, non-zero values. Over time, as the child grows through adolescence into adulthood, all three of these values grow at roughly the same pace. It gets physically bigger and stronger, it gains intelligence by learning and interacting with its environment, and it obtains maturity through socialization and discipline. By the time the human is given power and authority over other people's lives, it has already demonstrated years of responsible behavior (at least, in the ideal case).
Infant AI Scenario
------------------
Comparing this with an AI, it seems like doomsday scenarios are the result of giving an agent the intelligence of a god with the wisdom of a child. And attempts at containing an AGI offer a slight mitigation: give an agent the intelligence of a god, and then check to see if its wisdom aligns with its intelligence.
But what if it was possible to make an AGI more similar to an actual infant? In other words, let's imagine a post-singularity AI that is sufficiently complex to perfectly recreate the consciousness and self-awareness of a human. However, rather than being superintelligent, its computational abilities are actually extremely weak, such that it is no smarter than a real human.
This "infant AI" scenario would be a lot safer, because (like an actual baby) we know *a priori* that it starts out very unintelligent, although it has the ability to learn over time. Then, as the AI gradually increases its computational power and knowledge, it also has the opportunity to be instilled with a sense of morals and ethics, such that its intellect and wisdom grow at roughly the same pace. Thus, the AGI is gradually trusted with greater responsibility as it demonstrates greater maturity. Maybe the AI desires to be a paperclip maximizer at some point, but it eventually matures out of this phase like emo pre-teen.
Of course, this is more of a stream of consciousness than a fully-fleshed out idea, and I already see some challenges that this scenario would pose:
* How would an AI be designed such that it has the capacity for autonomous consciousness (which no current AI has), and yet lack the computational powers that AIs already possess?
* Shortcomings of human maturity may still apply: The AI could incidentally simulate a childhood trauma or bad parenting, which results in unalignment later down the line
If there has been any similar trains of thought in the literature, that would be interesting to explore. |
e14e88e4-15d5-4f9b-8708-0111249cfdb7 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Melbourne, Phoenix, Sydney
There are upcoming irregularly scheduled Less Wrong meetups in:
* Less Wrong Sydney: 11 June 2012 06:00PM
* Phoenix, Arizona: 15 June 2012 07:00PM
* Brussels meetup: 16 June 2012 12:00PM
* Tucson, Arizona: 20 June 2012 07:00PM
* First Cali, Colombia meetup: 02 July 2012 07:00PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Melbourne social/games meetup: 15 June 2012 07:00PM
* Less Wrong Cambridge (MA) third-Sundays meetup: 17 June 2012 02:00PM
* Summer Festival Megameetup at NYC: 23 June 2012 02:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: Atlanta, Berlin, Helsinki, London, Marin CA, Ottawa, Pittsburgh, Southern Ca |
6678d766-2489-433d-8661-a4bfeb09ef67 | StampyAI/alignment-research-dataset/blogs | Blogs | August Newsletter: New Research and Expert Interviews
| | |
| --- | --- |
|
| |
| --- |
|
|
|
|
| | | | |
| --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
|
| | |
| --- | --- |
|
| |
| --- |
|
Greetings from the Executive Director
Dear friends,
My personal thanks to everyone who has contributed to [our ongoing fundraiser](http://intelligence.org/donate/). We are 74% of the way to our goal!
I’ve been glad to hear from many of you that you’re thrilled with the progress we’ve made in the past two years — progress both as an organization and as a research institute. I’m thrilled, too! And to see a snapshot of where MIRI is *headed*, take a look at the participant lineup for [our upcoming December workshop](http://intelligence.org/2013/07/24/miris-december-2013-workshop/). Some top-notch folks there, including [John Baez](http://en.wikipedia.org/wiki/John_C._Baez).
We’re also preparing for the anticipated media interest in James Barrat’s forthcoming book, [*Our Final Invention: Artificial Intelligence and the End of the Human Era*](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/0312622376/). The book reads like a detective novel, and discusses our research extensively. *Our Final Invention* will be released on October 1st by a division of [St. Martin’s Press](http://en.wikipedia.org/wiki/St._Martin%27s_Press), one of the largest publishers in the world.
If you’re happy with the direction we’re headed in, and you haven’t contributed to our fundraiser yet, please [donate now](http://intelligence.org/donate/) to show your support. **Even small donations can make a difference.** This newsletter is ~9,860 subscribers strong, and ~200 of you have contributed during the current fundraiser. If just 21% of the other 9,660 subscribers [give$25](http://intelligence.org/donate/) *as soon as they finish reading this sentence*, then we’ll meet our goal will those funds alone!
Thank you,
Luke Muehlhauser
Executive Director
Summer Fundraiser Ends August 15th!
With only a few days left, we’ve raised **74% of our goal** for our [summer matching challenge](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/). Many thanks to everyone who has contributed!
**[Donate](http://intelligence.org/donate/) on or before August 15th** to **double your donation** and **help us reach our goal of $200,000 raised** ($400,000 with matching).
If we’re able to reach our goal, then not only will we be able to continue to run [research workshops](http://intelligence.org/get-involved/#workshop) and [our other programs](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/), but we might also be in a position later this year to hire our first new full-time mathematical researcher to work with Eliezer Yudkowsky on open problems in Friendly AI theory (e.g. the [Löbian obstacle](http://intelligence.org/2013/08/04/benja-interview/)). We can’t promise we’ll *decide* that hiring a new FAI researcher is the optimal use of those funds at that time, but it is a *serious option* we’re discussing internally. Our research workshops have been an excellent tool for evaluating potential hires.
Feel free to contact Luke Muehlhauser (luke@intelligence.org) directly for more details on how marginal funds will be used at MIRI, *especially if you are considering a major gift* ($5,000 or more).
Algorithmic Progress in Six Domains
MIRI has released a new technical report by Katja Grace: “[Algorithmic Progress in Six Domains](http://lesswrong.com/r/discussion/lw/i8i/algorithmic_progress_in_six_domains/).”
The report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains:
* SAT solvers,
* Chess and Go programs,
* Physics simulations,
* Factoring,
* Mixed integer programming, and
* Some forms of machine learning.
MIRI’s purpose for collecting these data was to shed light on the question of [intelligence explosion microeconomics](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/), though we suspect the report will be of broad interest within the software industry and computer science academia.
4 New Interviews; 2 New Analyses
[Our blog](http://intelligence.org/blog/) was especially active this past month, with 4 new expert interviews and 2 new analyses.
New analyses:
* [What is AGI?](http://intelligence.org/2013/08/11/what-is-agi/) A quick explanation of the concept of artificial general intelligence, and a selection of operational definitions that allow us to be even more specific about what we mean by “AGI.”
* [AI Risk and the Security Mindset](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/): “A recurring problem in much of the literature on ‘machine ethics’ or ‘AGI ethics’ or ‘AGI safety’ is that researchers and commenters often appear to be asking the question ‘How will this solution work?’ rather than ‘How will this solution fail?'”
New expert interviews:
* James Miller (economics, Smith College) on [Unusual Incentives Facing AGI Companies](http://intelligence.org/2013/07/12/james-miller-interview/)
* Roman Yampolskiy (computer science, U of Louisville) on [AI Safety Engineering](http://intelligence.org/2013/07/15/roman-interview/)
* Nick Beckstead (philosophy, Oxford) on [the Importance of the Far Future](http://intelligence.org/2013/07/17/beckstead-interview/)
* Benja Fallenstein (decision-making, Bristol U) on [the Löbian Obstacle to Self-Modifying Systems](http://intelligence.org/2013/08/04/benja-interview/)
Burning Man Camp for Effective Altruists
The [effective altruism movement](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) ([GiveWell](http://www.givewell.org/), [80,000 Hours](http://80000hours.org/), etc.) is one of MIRI’s intellectual communities. If you’re going to [Burning Man](http://www.burningman.com/) this year (Aug. 26 – Sep. 2) and would like to camp with many of MIRI’s closest friends (including e.g. Anna Salamon of [CFAR](http://rationality.org/)), then you may want to consider applying to the Burning Man theme camp for effective altruists, called *Paradigm*.
*Paradigm* has an excellent location on 6:30 and A, near Center Camp. Its organizer is Nevin Freeman (nevin.freeman@gmail.com). *Paradigm* will build a large dome called the Temple of Skeptical Consequentialism, which will host talks about effective altruism and rationality. Additional details, maps, and photos are [here](https://docs.google.com/document/d/1YPR7TCmpl43BoGHVEpr9nE72-ZtqTOkdt9LDHOLma3k/edit).
Spots are limited and many are already taken, so if you’re interested, [apply ASAP](https://docs.google.com/forms/d/1yrTf5jlzmfaSTeJkHJZaaZxwhuKorlTFG9RC6yQ3P_Q/viewform)!
Job Openings at Giving What We Can and 80,000 Hours
Our friends at [80,000 Hours](http://80000hours.org/) (80k) and [Giving What We Can](http://www.givingwhatwecan.org/) (GWWC) — two Oxford, UK organizations in the [effective altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) movement — are hiring.
GWWC is [hiring](http://www.givingwhatwecan.org/blog/2013-07-22/were-hiring) a Director of Communications, a Director of Community, and a Director of Research. Under “Why You Should Apply,” they give these reasons:
* *Impact*: A lot of charities will tell you that you can make a difference. Here we actually calculate that difference and we are driven by improving the measurable results of this organisation.
* *Inspiration*: Offices don’t come much more intellectually stimulating than ours. Based in offices shared with the Future of Humanity Institute at Oxford University, you’ll be part of a team that dedicates itself to understanding how we can do the most good – and actually delivering on those ideas.
* *Personal development*: All three positions offer fantastic personal development opportunities. We’re looking for talent as well as experience – and you’ll have opportunities to learn on the job, supported by a community dedicated to boosting their personal effectiveness.
80k is hiring a [Careers Analyst](http://80000hours.org/blog/236-80-000-hours-is-hiring), a [Director of Fundraising](http://80000hours.org/blog/240-we-re-looking-for-a-director-of-fundraising-and-a-finance-manager), and a [Finance Manager](http://80000hours.org/blog/240-we-re-looking-for-a-director-of-fundraising-and-a-finance-manager). See their (longer) pitch for why you should apply [here](http://80000hours.org/blog/236-80-000-hours-is-hiring).
Featured Volunteer – Bikramjeet Singh
Bikramjeet Singh is a 24 year old volunteer from India who first found out about MIRI’s work when he was 16 – almost 8 years ago! He became acquainted with our deputy director Louie Helm 2 years later. After doing some work with promoting MIRI and volunteer searching, he was introduced to Michael Anissimov and became his personal media assistant. He remained in that position until February 2012, and joined the [MIRIvolunteers.org](http://mirivolunteers.org/) system in December 2012. His motivation for volunteering for MIRI stems from his interest in existential risk reduction and his judgment that FAI is the best way to solve that problem. Bikramjeet’s dream job is to be an AI researcher and a science fiction author.
Thanks for all your help Bikramjeet!
|
|
|
|
|
The post [August Newsletter: New Research and Expert Interviews](https://intelligence.org/2013/08/13/august-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
35a8b12c-b7a4-4436-adcd-d68cb213e070 | trentmkelly/LessWrong-43k | LessWrong | LW Women Entries- LW Meetups
Standard Intro
The following section will be at the top of all posts in the LW Women series.
Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post. There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.
Seven women replied, totaling about 18 pages.
Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)
To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.
Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.
----------------------------------------
Notes from Daenerys:
1. I'm not on this site very much anymore, so I'm going to try to remember to post these about once a week to get them off my to-do list. So the next couple weeks might have a lot of gender discussion, but I only have 2 left, so it will be done soon.
2. This post ended up being less anonymous. Please do NOT link to any identifying information.
3. There were some questions recently about the purpose of this series, which makes sense because the purpose was discussed 8 months ago, which is a pretty long time, by LW standard. Shortly, by virtue of the gender ratio here (90% male), the men's voices tend to drown out the women's voices, and many women may just not post on certain issues due to the feeling of swimming upstream, so this was a way to compile a bunch of LW women's opinions and thoughts. Note that, |
a8dc51da-3d59-4e4b-925b-2206ad52b57d | trentmkelly/LessWrong-43k | LessWrong | Review of Machery, 'Doing Without Concepts'
Edouard Machery's Doing Without Concepts made a big splash in 2009, since it argues in all seriousness that concepts do not exist.
But wait. In order to claim that concepts don't exist, doesn't Machery need the concepts of "concept" and "exist"? To clarify what Machery means, I will summarize his book.
Machery argues for the Heterogeneity Hypothesis, which makes five basic claims:
1. The best available evidence suggests that for each category (for each substance, event, and so on), an individual typically has several concepts.
2. Coreferential concepts have very few properties in common. They belong to very heterogeneous kinds of concept.
3. Evidence strongly suggests that prototypes, exemplars, and theories are among these heterogeneous kinds of concept.
4. Prototypes, exemplars, and theories are typically used in distinct cognitive processes.
5. The notion of concept ought to be eliminated from the theoretical vocabulary of psychology.
CONCEPTS IN PSYCHOLOGY AND PHILOSOPHY
After reviewing the psychological literature on concepts, Machery proposes that by "concept" psychologists usually mean something like this:
> A concept of x is a body of knowledge about x that is stored in longterm memory and that is used by default in the processes underlying most, if not all, higher cognitive competences when these processes result in judgments about x.
Philosophers, by contrast, usually means something like this:
> Having a concept of x is being able to have propositional attitudes about x as x.
As such, psychologists and philosophers are engaging in different projects when they talk about concepts, and Machery reviews some cases in which this has caused confusion.
PROTOTYPES, EXEMPLARS, AND THEORIES
Since the death of the classical view of concepts, three paradigms about concepts have emerged in psychology: the prototypes paradigm, the exemplars paradigm, and the theories paradigm.
In fact, we have pretty good evidence for the existence of all three |
331fdc76-7674-4095-abd1-28621f509479 | trentmkelly/LessWrong-43k | LessWrong | Sleep need reduction therapies
EDIT: the funding proposal I mentioned at the end is out! See here for more.
None of this is medical advice.
5 AM on a Saturday and I can’t go back to sleep. It’s not the first time, so I get up to write, I might as well use the time I’m given. My hangover is to blame, even a little alcohol the night before changes my sleep cycle.
You’d think it would ruin my day, but in some ways I feel better. I’m more alert, more anxious, more motivated. It’s like being more alive. Being more awake for only one day has me asking questions. Why did sleep evolve? Can we sleep less?
Why did sleep evolve?
Sleep evolved for temporal niches
Imagine you’re a Zebra. You need to eat a lot of grass to maintain your metabolism. To do so, you need to look around to find the best grasses. You also need to be able to see if predators are nearby, particularly if you’re going to forage away from the herd. Both of these things are easier to do in the full light of day, so you do most of your foraging during the day.
Shouldn't you also forage during the night to get more calories? Probably not. The low light means that it’s harder to forage and easier for predators to sneak up on you. Cooler night temperatures and the energy cost of foraging means you’re actively losing calories. You’re better off snuggling with the pack until the sun returns.[1] You can further take advantage of this habit with adaptations that work better during the day.
Why are you asleep right now?!
The broader thesis is that sleep evolved to save energy during periods when animals are less effective at getting calories. This allows them to evolve a bunch of adaptations for that particular time period rather than maintain adaptations for all time periods.
This fits with many of observations about animals and sleep:
1. To save on thermoregulation in hot climates, warm-blooded animals are more active during the night.[2] By the same token, warm-blooded animals in cold regions are more active during the day. Cold-blo |
701b79aa-d082-462e-93fd-d5e15a29d1cc | trentmkelly/LessWrong-43k | LessWrong | SSC Meetups Everywhere Retrospective
Slate Star Codex has regular weekly-to-monthly meetups in a bunch of cities around the world. Earlier this autumn, we held a Meetups Everywhere event, hoping to promote and expand these groups. We collected information on existing meetups, got volunteers to create new meetups in cities that didn’t have them already, and posted times and dates prominently on the blog.
During late September and early October, I traveled around the US to attend as many meetups as I could. I hoped my presence would draw more people; I also wanted to learn more about meetups and the community and how best to guide them. Buck Shlegeris and a few other Bay Area effective altruists came along to meet people, talk to them about effective altruism, and potentially nudge them into the recruiting pipeline for EA organizations.
Lots of people asked me how my trip was. In a word: exhausting. I got to meet a lot of people for about three minutes each. There were a lot of really fascinating people with knowledge of a bewildering variety of subjects, but I didn’t get to pick their minds anywhere as thoroughly as I would have liked. I’m sorry if I talked to you for three minutes, you told me about some amazing project you were working on to clone neuroscientists or eradicate bees or convert atmospheric CO2 into vegan meat substitutes, and I mumbled something and walked away. You are all great and I wish I could have spent more time with you.
I finally got to put faces to many of the names I’ve interacted with through the years. For example, Bryan Caplan is exactly how you would expect, in every way. Also, in front of his office, he has a unique painting, which he apparently got by asking a Mexican street artist to paint an homage to Lord of the Rings. The artist had never heard of it before, but Bryan described it to him very enthusiastically, and the completely bonkers result is hanging in front of his office. This is probably a metaphor for something.
Philadelphia hosted their meetup in a beaut |
b9f25330-46bb-4993-8e29-028da2b1edf7 | StampyAI/alignment-research-dataset/special_docs | Other | Superintelligence skepticism as a political tool
Abstract
--------
\*\*:\*\*
This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.
Keywords: [artificial intelligence](/search?q=artificial+intelligence); [superintelligence](/search?q=superintelligence); [skepticism](/search?q=skepticism)
1. Introduction
----------------
The purpose of this paper is to explore the potential for skepticism about artificial superintelligence to be used for political ends. Artificial superintelligence (for brevity, henceforth just superintelligence) refers to AI that is much smarter than humans. Current AI is not superintelligent, but the prospect of superintelligence is a topic of much discussion in scholarly and public spheres. Some believe that superintelligence could someday be built, and that, if it is built, it would have massive and potentially catastrophic consequences. Others are skeptical of these beliefs. While much of the existing skepticism appears to be honest intellectual debate, there is potential for it to be politicized for other purposes.In simple terms (to be refined below), politicized skepticism can be defined as public articulation of skepticism that is intended to achieve some outcome other than an improved understanding of the topic at hand. Politicized skepticism can be contrasted with intellectual skepticism, which seeks an improved understanding. Intellectual skepticism is essential to scholarly inquiry; politicized skepticism is not. The distinction between the two is not always clear; statements of skepticism may have both intellectual and political motivations. The two concepts can nonetheless be useful for understanding debates over issues such as superintelligence.There is substantial precedent for politicized skepticism. Of particular relevance for superintelligence is politicized skepticism about technologies and products that are risky but profitable, henceforth risk–profit politicized skepticism. This practice dates to 1950s debates over the link between tobacco and cancer and has since been dubbed the tobacco strategy [[1](#B1-information-09-00209)]. More recently, the strategy has been applied to other issues including the link between fossil fuels and acid rain, the link between fossil fuels and global warming, and the link between industrial chemicals and neurological disease [[1](#B1-information-09-00209),[2](#B2-information-09-00209)]. The essence of the strategy is to promote the idea that the science underlying certain risks is unresolved, and therefore the implicated technologies should not be regulated. The strategy is typically employed by an interconnected mix of industry interests and ideological opponents of regulation. The target audience is typically a mix of government officials and the general public, and not the scientific community.As is discussed in more detail below, certain factors suggest the potential for superintelligence to be a focus of risk–profit politicized skepticism. First and foremost, superintelligence could be developed by major corporations with a strong financial incentive to avoid regulation. Second, there already exists a lot of skepticism about superintelligence, which could be exploited for political purposes. Third, as an unprecedented class of technology, it is inherently uncertain, which suggests that superintelligence skepticism may be especially durable, even within apolitical scholarly communities. These and other factors do not guarantee that superintelligence skepticism will be politicized, or that its politicization would follow the same risk–profit patterns as the tobacco strategy. However, these factors are at least suggestive of the possibility.Superintelligence skepticism may also be politicized in a different way: to protect the reputations and funding of the broader AI field. This form of politicized skepticism is less well-documented than the tobacco strategy, and appears to be less common. However, there are at least hints of it for fields of technology involving both grandiose future predictions and more mundane near-term work. AI is one such field of technology, in which grandiose predictions of superintelligence and other future AI breakthroughs contrast with more modest forms of near-term AI. Another example is nanotechnology, in which grandiose predictions of molecular machines contrast with near-term nanoscale science and technology [[3](#B3-information-09-00209)].The basis of the paper’s analysis is twofold. First, the paper draws on the long history of risk–profit politicized skepticism. This history suggests certain general themes that may also apply to superintelligence. Second, the paper examines characteristics of superintelligence development to assesses the prospect of skepticism being used politically in this context. To that end, the paper draws on the current state of affairs in the AI sector, especially for artificial general intelligence, which is a type of AI closely related to superintelligence. The paper further seeks to inform efforts to avoid any potential harmful effects from politicized superintelligence skepticism. The effects would not necessarily be harmful, but the history of risk–profit politicized skepticism suggests that they could be.This paper contributes to literatures on politicized skepticism and superintelligence governance. Whereas most literature on politicized skepticism (and similar concepts such as denial) is backward-looking, consisting of historical analysis of skepticisms that have already occurred [[1](#B1-information-09-00209),[2](#B2-information-09-00209),[4](#B4-information-09-00209),[5](#B5-information-09-00209),[6](#B6-information-09-00209),[7](#B7-information-09-00209)], this paper is largely (but not exclusively) forward-looking, consisting of prospective analysis of skepticisms that could occur at some point in the future. Meanwhile, the superintelligence governance literature has looked mainly at institutional regulations to prevent research groups from building dangerous superintelligence and support for research on safety measures [[8](#B8-information-09-00209),[9](#B9-information-09-00209),[10](#B10-information-09-00209),[11](#B11-information-09-00209)]. This paper contributes to a smaller literature on the role of corporations in superintelligence development [[12](#B12-information-09-00209)] and on social and psychological aspects of superintelligence governance [[13](#B13-information-09-00209)].This paper does not intend to take sides on which beliefs about superintelligence are most likely to be correct. Its interest is in the potential political implications of superintelligence skepticism, not in the underlying merits of the skepticism. The sole claim here is that the possibility of politicized superintelligence skepticism is a worthy topic of study. It is worth studying due to: (1) the potential for large consequences if superintelligence is built; and (2) the potential for superintelligence to be an important political phenomenon regardless of whether it is built. Finally, the topic is also of inherent intellectual interest as an exercise in prospective socio-political analysis on a possible future technology.The paper is organized as follows. [Section 2](#sec2-information-09-00209) presents a brief overview of superintelligence concerns and skepticisms. [Section 3](#sec3-information-09-00209) further develops the concept of politicized skepticism and surveys the history of risk–profit politicized skepticism, from its roots in tobacco to the present day. [Section 4](#sec4-information-09-00209) discusses prospects for politicized superintelligence skepticism. [Section 5](#sec5-information-09-00209) discusses opportunities for constructive action. [Section 6](#sec6-information-09-00209) concludes. 2. Superintelligence and Its Skeptics
--------------------------------------
The idea of humans being supplanted by their machines dates to at least the 1863 work of Butler [[14](#B14-information-09-00209)]. In 1965, Good presented an early exposition on the topic within the modern field of computer science [[15](#B15-information-09-00209)]. Good specifically proposed an “intelligence explosion” in which intelligent machines make successively more intelligent machines until they are much smarter than humans, which would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control” [[15](#B15-information-09-00209)] (p. 33). This intelligence explosion is one use of the term technological singularity, though the term can also refer to wider forms of radical technological change [[16](#B16-information-09-00209)]. The term superintelligence refers specifically to AI that is much more intelligent than humans and dates to at least the 1998 work of Bostrom [[17](#B17-information-09-00209)]. A related term is artificial general intelligence, which is AI capable of reasoning across many intellectual domains. A superintelligent AI is likely to have general intelligence, and the development of artificial general intelligence could be a major precursor to superintelligence. Artificial general intelligence is also an active subfield of AI [[18](#B18-information-09-00209),[19](#B19-information-09-00209)].Superintelligence is notable as a potential technological accomplishment with massive societal implications. The effects of superintelligence could include anything from solving a significant portion of the world’s problems (if superintelligence is designed well) to causing the extinction of humans and other species (if it is designed poorly). Much of the interest in superintelligence derives from these high stakes. Superintelligence is also of intellectual interest as perhaps the ultimate accomplishment within the field of AI, sometimes referred to as the “grand dream” of AI [[20](#B20-information-09-00209)] (p. 125).Currently, most AI research is on narrow AI that is not oriented towards this grand dream. The focus on narrow AI dates to early struggles in the field to make progress towards general AI or superintelligence. After an initial period of hype fell short, the field went through an “AI winter” marked by diminished interest and more modest expectations [[21](#B21-information-09-00209),[22](#B22-information-09-00209)] This prompted a focus on smaller, incremental progress on narrow AI. It should be noted that the term AI winter most commonly refers to a lull in AI in the mid-to-late 1980s and early 1990s. A similar lull occurred in the 1970s, and concerns about a new winter can be found as recently as 2008 [[23](#B23-information-09-00209)].With most of the field focused on narrow AI, artificial general intelligence has persisted only as a small subfield of AI [[18](#B18-information-09-00209)]. The AI winter also caused many AI computer scientists to be skeptical of superintelligence, on grounds that superintelligence has turned out to be much more difficult than initially expected, and likewise to be averse to attention to superintelligence, on grounds that such hype could again fall short and induce another AI winter. This is an important historical note because it indicates that superintelligence skepticism has wide salience across the AI computer science community and may already be politicized towards the goal of protecting the reputation of and funding for AI. (More on this below.)Traces of superintelligence skepticism predate AI winter. Early AI skepticism dates to 1965 work by Dreyfus [[24](#B24-information-09-00209)]. Dreyfus [[24](#B24-information-09-00209)] critiqued the overall field of AI, with some attention to human-level AI though not to superintelligence. Dreyfus traced this skepticism of machines matching human intelligence to a passage in Descartes’ 1637 Discourse On Method [[25](#B25-information-09-00209)]: “it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act.”In recent years, superintelligence has attracted considerable attention. This has likely been prompted by several factors, including a growing scholarly literature (e.g., [[9](#B9-information-09-00209),[19](#B19-information-09-00209),[26](#B26-information-09-00209),[27](#B27-information-09-00209),[28](#B28-information-09-00209),[29](#B29-information-09-00209)]), highly publicized remarks by several major science and technology celebrities (e.g., Bill Gates [[30](#B30-information-09-00209)], Stephen Hawking [[31](#B31-information-09-00209)], and Elon Musk [[32](#B32-information-09-00209)]), and breakthroughs in the broader field of AI, which draw attention to AI and may make the prospect of superintelligence seem more plausible (e.g., [[33](#B33-information-09-00209),[34](#B34-information-09-00209)]). This attention to superintelligence has likewise prompted some more outspoken skepticism. The following is a brief overview of the debate, including both the arguments of the debate and some biographical information about the debaters. (Biographical details are taken from personal and institutional webpages and are accurate as of the time of this writing, May 2018; they are not necessarily accurate as of the time of the publication of the cited literature.) The biographies can be politically significant because, in public debates, some people’s words carry more weight than others’. The examples presented below are intended to be illustrative and at least moderately representative of the arguments made in existing superintelligence skepticism (some additional examples are presented in [Section 4](#sec4-information-09-00209)). A comprehensive survey of superintelligence skepticism is beyond the scope of this paper.#### 2.1. Superintelligence Cannot Be Built
Bringsjord [[35](#B35-information-09-00209)] argued that superintelligence cannot be built based on reasoning from computational theory. Essentially, the argument is that superintelligence requires a more advanced class of computing, which cannot be produced by humans or existing AI. Bringsjord is Professor of Cognitive Science at Rensselaer Polytechnic University and Director of the Rensselaer AI and Reasoning Lab. Chalmers [[36](#B36-information-09-00209)] countered that superintelligence does not necessarily require a more advanced class of computing. Chalmers is University Professor of Philosophy and Neural Science at New York University and co-director of the NYU Center for Mind, Brain, and Consciousness.McDermott [[37](#B37-information-09-00209)] argued that advances in hardware and algorithms may be sufficient to exceed human intelligence, but not to massively exceed it. McDermott is Professor of Computer Science at Yale University. Chalmers [[36](#B36-information-09-00209)] countered that, while there may be limits to the potential advances in hardware and software, these limits may not be so restrictive as to preclude superintelligence.#### 2.2. Superintelligence Is Not Imminent Enough to Merit Attention
Crawford [[38](#B38-information-09-00209)] argued that superintelligence is a distraction from issues with existing AI, especially AI that worsens inequalities. Crawford is co-founder and co-director of the AI Now Research Institute at New York University, a Senior Fellow at the NYU Information Law Institute, and a Principal Researcher at Microsoft Research.Ng argued that superintelligence may be possible, but it is premature to worry about, in particular because it is too different from existing AI systems. Ng memorably likened worrying about superintelligence to worrying about “overpopulation on Mars” [[39](#B39-information-09-00209)]. Ng is Vice President and Chief Scientist of Baidu, Co-Chairman and Co-Founder of Coursera, and an Adjunct Professor of Computer Science at Stanford University.Etzioni [[40](#B40-information-09-00209)] argued that superintelligence is unlikely to be built within the next 25 years and is thus not worth current attention. Etzioni is Chief Executive Officer of the Allen Institute for Artificial Intelligence and Professor of Computer Science at University of Washington. Dafoe and Russell [[41](#B41-information-09-00209)] countered that superintelligence is worth current attention even if it would take more than 25 years to build. Dafoe is Assistant Professor of Political Science at Yale University and Co-Director of the Governance of AI Program at the University of Oxford. Russell is Professor of Computer Science at University of California, Berkeley. (An alternative counter is that some measures to improve AI outcomes apply to both near-term AI and superintelligence, and thus it is not essential to debate which of the two types of AI should be prioritized [[42](#B42-information-09-00209)].)#### 2.3. Superintelligence Would (Probably) Not Be Catastrophic
Goertzel [[43](#B43-information-09-00209)] argued that superintelligence could be built and is worth paying attention to, but also that superintelligence is less likely to result in catastrophe than is sometimes suggested. Specifically, Goertzel argued that it may be somewhat difficult, but very difficult, to build superintelligence with values that are considered desirable, and that the human builders of superintelligence would have good opportunities to check that the superintelligence has the right values. Goertzel is the lead for the OpenCog and SingularityNET projects for developing artificial general intelligence. Goertzel [[43](#B43-information-09-00209)] wrote in response to Bostrom [[28](#B28-information-09-00209)], who suggested that, if built, superintelligence is likely to result in catastrophe. Bostrom is Professor of Applied Ethics at University of Oxford and Director of the Oxford Future of Humanity Institute. (For a more detailed analysis of this debate, see [[44](#B44-information-09-00209)].)Views similar to Goertzel [[43](#B43-information-09-00209)] were also presented by Bieger et al. [[45](#B45-information-09-00209)], in particular that the AI that is the precursor to superintelligence could be trained by its human developers to have safe and desirable values. Co-authors Bieger and Thórisson are Ph.D. student and Professor of Computer Science at Reykjavik University; co-author Wang is Associate Professor of Computer and Information Sciences at Temple University.Searle [[46](#B46-information-09-00209)] argued that superintelligence is unlikely to be catastrophic, because it would be an unconscious machine incapable of deciding for itself to attack humanity, and thus humans would need to explicitly program it to cause harm. Searle is Professor Emeritus of the Philosophy of Mind and Language at the University of California, Berkeley. Searle [[46](#B46-information-09-00209)] wrote in response to Bostrom [[28](#B28-information-09-00209)], who arqued that superintelligence could be dangerous to humans regardless of whether it is conscious. 3. Skepticism as a Political Tool
----------------------------------
#### 3.1. The Concept of Politicized Skepticism
There is a sense in which any stated skepticism can be political, insofar as it seeks to achieve certain desired changes within a group. Even the most honest intellectual skepticism can be said to achieve the political aim of advancing a certain form of intellectual inquiry. However, this paper uses the term “politicized skepticism” more narrowly to refer to skepticism with other, non-intellectual aims.Even with this narrower conception, the distinction between intellectual and politicized skepticism can in practice be blurry. The same skeptical remark can serve both intellectual and (non-intellectual) political aims. People can also have intellectual skepticism that is shaped, perhaps subconsciously, by political factors, as well as politicized skepticism that is rooted in honest intellectual beliefs. For example, intellectuals (academics and the like) commonly have both intellectual and non-intellectual aims, the latter including advancing their careers or making the world a better place per whatever notion of “better” they subscribe to. This can be significant for superintelligence skepticism aimed at protecting the reputations and funding of AI researchers.It should be stressed that the entanglement of intellectual inquiry and (non-intellectual) political aims does not destroy the merits of intellectual inquiry. This is important to bear in mind at a time when trust in science and other forms of expertise is dangerously low [[47](#B47-information-09-00209),[48](#B48-information-09-00209)]. Scholarship can be a social and political process, but, when performed well, it can nonetheless deliver important insights about the world. For all people, scholars included, improving one’s understanding of the world takes mental effort, especially when one is predisposed to believe otherwise. Unfortunately, many people are not inclined to make the effort, and other people are making efforts to manipulate ideas for their own aims. An understanding of politicized skepticism is essential for addressing major issues in this rather less-than-ideal epistemic era.Much of this paper is focused on risk–profit politicized skepticism, i.e., skepticism about concerns about risky and profitable technologies and products. Risk–profit politicized skepticism is a major social force, as discussed throughout this paper, although it is not the only form of politicized skepticism. Other forms include politicized skepticism by concerned citizens, such as skepticism about scientific claims that vaccines or nuclear power plants are safe; by religious activists and institutions, expressing skepticism about claims that humans evolved from other species; by politicians and governments, expressing skepticism about events that cast them in an unfavorable light; and by intellectuals as discussed above. Thus, while this paper largely focuses on skepticism aimed at casting doubt about concerns about risky and profitable technologies and products, it should be understood that this is not the only type of politicized skepticism.#### 3.2. Tobacco Roots
As mentioned above, risk–profit politicized skepticism traces to 1950s debates on the link between tobacco and cancer. Specifically, in 1954, the tobacco industry formed the Tobacco Industry Research Committee, an “effort to foster the impression of debate, primarily by promoting the work of scientists whose views might be useful to the industry” [[1](#B1-information-09-00209)] (p. 17). The committee was led by C. C. Little, who was a decorated genetics researcher and past president of the University of Michigan, as well as a eugenics advocate who believed cancer was due to genetic weakness and not to smoking.In the 1950s, there was substantial evidence linking tobacco to cancer, but it was not as conclusive of a link as is now available. The tobacco industry exploited this uncertainty in public discussions of the issue. It succeeded in getting major media to often present the issue as a debate between scientists who agreed vs. disagreed in the tobacco–cancer link. Among the media figures to do this was the acclaimed journalist Edward Murrow, himself a smoker who, in tragic irony, later died from lung cancer. Oreskes and Conway speculated that, “Perhaps, being a smoker, he was reluctant to admit that his daily habit was deadly and reassured to hear that the allegations were unproven” [[1](#B1-information-09-00209)] (pp. 19–20).Over subsequent decades, the tobacco industry continued to fund work that questioned the tobacco–cancer link, enabling it to dodge lawsuits and regulations. Then, in 1999, the United States Department of Justice filed a lawsuit against nine tobacco companies and two tobacco trade organizations (United States v. Philip Morris). The US argued that the tobacco industry conspired over several decades to deceive the public, in violation of the Racketeer Influenced and Corrupt Organizations (RICO) Act, which covers organized crime. In 2006, the US District Court for the District of Columbia found the tobacco industry guilty, upheld unanimously in 2009 by the US Court of Appeals. This ruling and other measures have helped to protect people from lung cancer, but many more could have also avoided lung cancer were it not for the tobacco industry’s politicized skepticism.#### 3.3. The Character and Methods of Risk–Profit Politicized Skepticism
The tobacco case provided a blueprint for risk–profit politicized skepticism that has since been used for other issues. Writing in the context of politicized environmental skepticism, Jacques et al. [[4](#B4-information-09-00209)] (pp. 353–354) listed four overarching themes: (1) rejection of scientific findings of environmental problems; (2) de-prioritization of environmental problems relative to other issues; (3) rejection of government regulation of corporations and corporate liability; and (4) portrayal of environmentalism as a threat to progress and development. The net effect is to reduce interest in government regulation of corporate activities that may pose harms to society.The two primary motivations of risk–profit politicized skepticism are the protection of corporate profits and the advancement of anti-regulatory political ideology. The protection of profits is straightforward: from the corporation’s financial perspective, the investment in politicized skepticism can bring a substantial return. The anti-regulatory ideology is only slightly subtler. Risk–profit politicized skepticism is often associated with pro-capitalist, anti-socialist, and anti-communist politics. For example, some political skeptics liken environmentalists to watermelons: “green on the outside, red on the inside” [[1](#B1-information-09-00209)] (p. 248), while one feared that the Earth Summit was a socialist plot to establish a “World Government with central planning by the United Nations” [[1](#B1-information-09-00209)] (p. 252). For these people, politicized skepticism is a way to counter discourses that could harm their political agenda.Notably, both the financial and the ideological motivations are not inherently about science. Instead, the science is manipulated towards other ends. This indicates that the skepticism is primarily political and not intellectual. It may still be intellectually honest in the sense that the people stating the skepticism are actually skeptical. That would be consistent with author Upton Sinclair’s saying that “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” The skepticism may nonetheless violate that essential intellectual virtue of letting conclusions follow from analysis, and not the other way around. For risk–profit politicized skepticism, the desired conclusion is typically the avoidance of government regulation of corporate activity, and the skepticism is crafted accordingly.To achieve this end, the skeptics will often engage in tactics that clearly go beyond honest intellectual skepticism and ordinary intellectual exchange. For example, ExxonMobil has been found to express extensive skepticism about climate change in its public communications (such as newspaper advertisements), but much less skepticism in its internal communications and peer-reviewed publications [[7](#B7-information-09-00209)]. This finding suggests that ExxonMobil was aware of the risks of climate change and misled the public about the risks. ExxonMobil reportedly used its peer-reviewed publications for “the credentials required to speak with authority in this area”, including in its conversations with government officials [[7](#B7-information-09-00209)] (p. 15), even though these communications may have presented climate change risk differently than the peer-reviewed publications did. (As an aside, it may be noted that the ExxonMobil study [[7](#B7-information-09-00209)], published in 2017, has already attracted a skeptic critique by Stirling [[49](#B49-information-09-00209)]. Stirling is Communications Manager of the Canadian nonprofit Friends of Science. Both Stirling and Friends of Science are frequent climate change skeptics [[50](#B50-information-09-00209)].)While the skeptics do not publicly confess dishonesty, there are reports that some of them have privately done so. For example, Marshall [[51](#B51-information-09-00209)] (p. 180) described five energy corporation presidents who believed that climate change was a problem and “admitted, off the record, that the competitive environment forced them to suppress the truth about climate change” to avoid government regulations. Similarly, US Senator Sheldon Whitehouse, an advocate of climate policy to reduce greenhouse gas emissions, reported that some of his colleagues publicly oppose climate policy but privately support it, with one even saying “Let’s keep talking—but don’t tell my staff. Nobody else can know” [[52](#B52-information-09-00209)] (p. 176). Needless to say, any instance in which skepticism is professed by someone who is not actually skeptical is a clear break from the intellectual skepticism of ordinary scholarly inquiry.One particularly distasteful tactic is to target individual scientists, seeking to discredit their work or even intimidate them. For example, Philippe Grandjean, a distinguished environmental health researcher, reported that the tuna industry once waged a $25 million advertising campaign criticizing work by himself and others who have documented links between tuna, mercury, and neurological disease. Grandjean noted that $25 million is a small sum for the tuna industry but more than the entire sum of grant funding he received for mercury research over his career, indicating a highly uneven financial playing field [[2](#B2-information-09-00209)] (pp. 119–120). In another example, climate scientists accused a climate skeptic of bullying and intimidation and reported receiving “a torrent of abusive and threatening e-mails after being featured on” the skeptic’s blog, which calls for climate scientists “to be publicly flogged” [[51](#B51-information-09-00209)] (p. 151).Much of the work, however, is far subtler than this. Often, it involves placing select individuals in conferences, committees, or hearings, where they can ensure that the skeptical message is heard in the right places. For example, Grandjean [[2](#B2-information-09-00209)] (p. 129) recounted a conference sponsored by the Electric Power Research Institute, which gave disproportionate floor time to research questioning the health effects of mercury. In another episode, the tobacco industry hired a recently retired World Health Organization committee chair to “volunteer” as an advisor to the same committee, which then concluded to not restrict use of a tobacco pesticide [[2](#B2-information-09-00209)] (p. 125).Another common tactic is to use outside organizations as the public face of the messaging. This tactic is accused of conveying the impression that the skepticism is done in the interest of the public and not of private industry. Grandjean [[2](#B2-information-09-00209)] (p. 121) wrote that “organizations, such as the Center for Science and Public Policy the Center for Indoor Air Research or the Citizens for Fire Safety Institute, may sound like neutral and honest establishments, but they turned out to be ‘front groups’ for financial interests.” Often, the work is done by think tanks. Jacques et al. [[4](#B4-information-09-00209)] found that over 90% of books exhibiting environmental skepticism are linked to conservative think tanks, and 90% of conservative think tanks are active in environmental skepticism. This finding is consistent with recent emphasis in US conservatism on unregulated markets. (Earlier strands of US conservatism were more supportive of environmental protection, such as the pioneering American conservative Russell Kirk, who wrote that “There is nothing more conservative than conservation” [[53](#B53-information-09-00209)].)#### 3.4. The Effectiveness of Politicized Skepticism
Several broader phenomena help make politicized skepticism so potent, especially for risk–profit politicized skepticism. One is the enormous amounts of corporate money at stake with certain government regulations. When corporations use even a tiny fraction of this for politicized skepticism, it can easily dwarf other efforts. Similarly, US campaign finance laws are highly permissive. Whitehouse [[52](#B52-information-09-00209)] traced the decline in bipartisan Congressional support for climate change policy to the Supreme Court’s 2010 Citizens United ruling, which allows unlimited corporate spending in elections. However, even without election spending, corporate assets tilt the playing field substantially in the skeptics’ favor.Another important factor is the common journalistic norm of balance, in which journalists seek to present “both sides” of an issue. This can put partisan voices on equal footing with independent science, as seen in early media coverage of tobacco. It can also amplify a small minority of dissenting voices, seen more recently in media coverage of climate change. Whereas the scientific community has overwhelming consensus that climate change is happening, that it is caused primarily by human activity, and that the effects will be mainly harmful, public media features climate change skepticism much more than its scientific salience would suggest [[54](#B54-information-09-00209)]. (For an overview of the scientific issues related to climate change skepticism, see [[55](#B55-information-09-00209)]; for documentation of the scientific consensus, see [[56](#B56-information-09-00209)].)A third factor is the tendency of scientists to be cautious with respect to uncertainty. Scientists often aspire to avoid stating anything incorrect and to focus on what can be rigorously established instead of discussing more speculative possibilities. Scientists will also often highlight remaining uncertainties even when basic trends are clear. “More research is needed” is likely the most ubiquitous conclusion of any scientific research. This tendency makes it easier for other parties to make the state of the science appear less certain than it actually is. Speaking to this point in a report on climate change and national security, former US Army Chief of Staff Gordon Sullivan states “We seem to be standing by and, frankly, asking for perfectness in science… We never have 100 percent certainty. We never have it. If you wait until you have 100 percent certainty, something bad is going to happen on the battlefield” [[57](#B57-information-09-00209)] (p. 10).A fourth factor is the standard, found in some (but not all) policy contexts, of requiring robust evidence of harm before pursuing regulation. In other words, the burden of proof is on those who wish to regulate, and the potentially harmful product is presumed innocent until proven guilty. Grandjean [[2](#B2-information-09-00209)] cited this as the most important factor preventing the regulation of toxic chemicals in the US. Such a protocol makes regulation very difficult, especially for complex risks that resist precise characterization. In these policy contexts, the amplification of uncertainty can be particularly impactful.To sum up, risk–profit politicized skepticism is a longstanding and significant tool used to promote certain political goals. It has been used heavily by corporations seeking to protect profits and people with anti-regulatory ideologies, and it has proven to be a powerful tool. In at least one case, the skeptics were found guilty in a court of law of conspiracy to deceive the public. The skeptics use a range of tactics that deviate from standard intellectual practice, and they exploit several broader societal phenomena that make the skepticism more potent. 4. Politicized Superintelligence Skepticism
--------------------------------------------
#### 4.1. Is Superintelligence Skepticism Already Politicized?
At this time, there does not appear to be any superintelligence skepticism that has been politicized to the extent that has occurred for other issues such as tobacco–cancer and fossil fuels–global warming. Superintelligence skeptics are not running ad campaigns or other major dollar operations. For the most part, they are not attacking the scholars who express concern about superintelligence. Much of the discussion appears in peer-reviewed journals, and has the tone of constructive intellectual discourse. An exception that proves the rule is Etzioni [[40](#B40-information-09-00209)], who included a quotation comparing Nick Bostrom (who is concerned about superintelligence) to Donald Trump. In a postscript on the matter, Etzioni [[40](#B40-information-09-00209)] wrote that “we should refrain from ad hominem attacks. Here, I have to offer an apology”. In contrast, the character attacks of the most heated politicized skepticism are made without apology.However, there are already at least some hints of politicized superintelligence skepticism. Perhaps the most significant comes from AI academics downplaying hype to protect their field’s reputation and funding. The early field of AI made some rather grandiose predictions, which soon fell flat, fueling criticisms as early as 1965 [[24](#B24-information-09-00209)]. Some of these criticisms prompted major funding cuts, such as the 1973 Lighthill report [[58](#B58-information-09-00209)], which prompted the British Science Research Council to slash its support for AI. Similarly, Menzies [[59](#B59-information-09-00209)] described AI as going through a “peak of inflated expectations” in the 1980s followed by a “trough of disillusionment” in the late 1980s and early 1990s. Most recently, writing in 2018, Bentley [[60](#B60-information-09-00209)] (p. 11) derided beliefs about superintelligence and instead urges: “Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day.” (For criticism of Bentley [[60](#B60-information-09-00209)], see [[61](#B61-information-09-00209)].) This suggests that some superintelligence skepticism may serve the political goal of protecting the broader field of AI.Superintelligence skepticism that is aimed at protecting the field of AI may be less of a factor during the current period of intense interest in AI. At least for now, the field of AI does not need to defend its value—its value is rather obvious, and AI researchers are not lacking for job security. Importantly, the current AI boom is largely based on actual accomplishments, not hype. Therefore, while today’s AI researchers may view superintelligence as a distraction, they are less likely to view it as a threat to their livelihood. However, some may nonetheless view superintelligence in this way, especially those who have been in the field long enough to witness previous boom-and-bust cycles. Likewise, the present situation could change if the current AI boom eventually cycles into another bust—another winter. Despite the success of current AI, there are arguments that it is fundamentally limited [[62](#B62-information-09-00209)]. The prospect of a new AI winter could be a significant factor in politicized superintelligence skepticism.A different type of example comes from public intellectuals who profess superintelligence skepticism based on questionable reasoning. A notable case of this is the psychologist and public intellectual Steven Pinker. Pinker recently articulated a superintelligence skepticism that some observers have likened to politicized climate skepticism [[63](#B63-information-09-00209),[64](#B64-information-09-00209)]. Pinker does resemble some notable political skeptics: a senior scholar with an academic background in an unrelated topic who is able to use his (and it is typically a he) platform to advance his skeptical views. Additionally, a close analysis of Pinker’s comments on superintelligence finds them to be flawed and poorly informed by existing research [[65](#B65-information-09-00209)]. Pinker’s superintelligence skepticism appears to be advancing a broader narrative of human progress, and may be making the intellectual sin of putting this conclusion before the analysis of superintelligence. However, his particular motivations are, to the present author’s knowledge, not documented (It would be especially ironic for Pinker to politicize skepticism based on flawed intellectual reasoning, since he otherwise preaches a message intellectual virtue).A third type of example of potential politicized superintelligence skepticism comes from the corporate sector. Several people in leadership positions at technology corporations have expressed superintelligence skepticism, including Eric Schmidt (Executive Chairman of Alphabet, the parent company of Google) [[66](#B66-information-09-00209)] and Mark Zuckerberg (CEO of Facebook) [[67](#B67-information-09-00209)]. Since this skepticism comes the corporate sector, it has some resemblance to risk–profit politicized skepticism and may likewise have the most potential to shape public discourse and policy. One observer postulated that Zuckerberg professes superintelligence skepticism to project the idea that “software is always friendly and tame” and avoid the idea “that computers are intrinsically risky”, the latter of which “has potentially dire consequences for Zuckerberg’s business and personal future” [[67](#B67-information-09-00209)]. While this may just be conjecture, it does come at a time in which Facebook is under considerable public pressure for its role in propagating fake news and influencing elections, which, although unrelated to superintelligence, nonetheless provides an antiregulatory motivation to downplay risks associated with computers.To summarize, there may already be some politicized superintelligence skepticism, coming from AI academics seeking to protect their field, public intellectuals seeking to advance a certain narrative about the world, and corporate leaders seeking to avoid regulation. However, it is not clear how much superintelligence skepticism is already politicized, and there are indications that it may be limited, especially compared to what has occurred for other issues. On the other hand, superintelligence is a relatively new public issue (with a longer history in academia), so perhaps its politicization is just beginning.Finally, it is worth noting that while superintelligence has not been politicized to the extent that climate change has, there is at least one instance of superintelligence being cited in the context of climate skepticism. Cass [[68](#B68-information-09-00209),[69](#B69-information-09-00209)] cited the prospect of superintelligence as a reason to not be concerned about climate change. A counter to this argument is that, even if superintelligence is a larger risk, addressing climate change can still reduce the overall risk faced by humanity. Superintelligence could also be a solution to climate change, and thus may be worth building despite the risks it poses. At the same time, if climate change has been addressed independently, then this reduces the need to take risks in building superintelligence [[70](#B70-information-09-00209)].#### 4.2. Prospects for Politicized Superintelligence Skepticism
Will superintelligence skepticism be (further) politicized? Noting the close historical association between politicized skepticism and corporate profits—at least for risk–profit politicized skepticism—an important question is whether superintelligence could prompt profit-threatening regulations. AI is now being developed by some of the largest corporations in the world. Furthermore, a recent survey found artificial general intelligence projects at several large corporations, including Baidu, Facebook, Google, Microsoft, Tencent, and Uber [[19](#B19-information-09-00209)]. These corporations have the assets to conduct politicized skepticism that is every bit as large as that of the tobacco, fossil fuel, and industrial chemicals industries.It should be noted that the artificial general intelligence projects at these corporations were not found to indicate substantial skepticism. Indeed, some of them are outspoken in concern about superintelligence. Moreover, out of 45 artificial general intelligence projects surveyed, only two were found to be dismissive of concerns about the risks posed by the technology [[19](#B19-information-09-00209)]. However, even if the AI projects themselves do not exhibit skepticism, the corporations that host them still could. Such a scenario would be comparable to that of ExxonMobil, whose scientists confirmed the science of climate change even while corporate publicity campaigns professed skepticism [[7](#B7-information-09-00209)].The history shows that risk–profit politicized skepticism is not inherent to corporate activity—it is generally only found when profits are at stake. The preponderance of corporate research on artificial general intelligence suggests at least a degree of profitability, but, at this time, it is unclear how profitable it will be. If it is profitable, then corporations are likely to become highly motivated to protect it against outside restrictions. This is an important factor to monitor as the technology progresses.In public corporations, the pressure to maximize shareholder returns can motivate risk–profit politicized skepticism. However, this may be less of a factor for some corporations in the AI sector. In particular, voting shares constituting a majority of voting power at both Facebook and Alphabet (the parent company of Google) are controlled by the companies’ founders: Mark Zuckerberg at Facebook [[71](#B71-information-09-00209)] and Larry Page and Sergey Brin at Alphabet [[72](#B72-information-09-00209)]. Given their majority stakes, the founders may be able to resist shareholder pressure for politicized skepticism, although it is not certain that they would, especially since leadership at both companies already display superintelligence skepticism.Another factor is the political ideologies of those involved in superintelligence. As discussed above, risk–profit politicized skepticism of other issues is commonly driven by people with pro-capitalist, anti-socialist, and anti-communist political ideologies. Superintelligence skepticism may be more likely to be politicized by people with similar ideologies. Some insight into this matter can be obtained from a recent survey of 600 technology entrepreneurs [[73](#B73-information-09-00209)], which is a highly relevant demographic. The study finds that, contrary to some conventional wisdom, this demographic tends not to hold libertarian ideologies. Instead, technology entrepreneurs tend to hold views consistent with American liberalism, but with one important exception: technology entrepreneurs tend to oppose government regulation. This finding suggests some prospect for politicizing superintelligence skepticism, although perhaps not as much as may exist in other industries.Further insight can be found from the current political activities of AI corporations. In the US, the corporations’ employees donate mainly to the Democratic Party, which is the predominant party of American liberalism and is more pro-regulation. However, the corporations themselves have recently shifted donations to the Republican Party, which is the predominant party of American conservatism and is more anti-regulation. Edsall [[74](#B74-information-09-00209)] proposed that this divergence between employees and employers is rooted in corporations’ pursuit of financial self-interest. A potential implication of this is that, even if the individuals who develop AI oppose risk–profit politicized skepticism, the corporations that they work for may support it. Additionally, the corporations have recently been accused of using their assets to influence academic and think tank research on regulations that the corporations could face [[75](#B75-information-09-00209),[76](#B76-information-09-00209)], although at least some of the accusations have been disputed [[77](#B77-information-09-00209)]. While the veracity of these accusations is beyond the scope of this paper, they are at least suggestive of the potential for these corporations to politicize superintelligence skepticism.AI corporations would not necessarily politicize superintelligence skepticism, even if profits may be at stake. Alternatively, they could express concern about superintelligence to portray themselves as responsible actors and likewise avoid regulation. This would be analogous to the strategy of “greenwashing” employed by companies seeking to bolster their reputation for environmental stewardship [[78](#B78-information-09-00209)]. Indeed, there have already been some expressions of concern about superintelligence by AI technologists, and likewise some suspicion that the stated concern has this sort of ulterior motive [[79](#B79-information-09-00209)].To the extent that corporations do politicize superintelligence skepticism, they are likely to mainly emphasize doubt about the risks of superintelligence. Insofar as superintelligence could be beneficial, corporations may promote this, just as they promote the benefits of fossil fuels (for transportation, heating, etc.) and other risky products. Or, AI corporations may promote the benefits of their own safety design and sow doubt about the safety of their rivals’ designs, analogous to the marketing of products whose riskiness can vary from company to company, such as automobiles. Alternatively, AI corporations may seek to sow doubt about the possibility of superintelligence, calculating that this would be their best play for avoiding regulation. As with politicized skepticism about other technologies and products, there is no one standard formula that every company always adopts.For their part, academic superintelligence skeptics may be more likely to emphasize doubt about the mere possibility of superintelligence, regardless of whether it would be beneficial or harmful, due to reputational concerns. Or, they could focus skepticism on the risks, for similar reasons as corporations: academic research can also be regulated, and researchers do not always welcome this. Of course, there are also academics who do not exhibit superintelligence skepticism. Again, there is no one standard formula.#### 4.3. Potential Effectiveness of Politicized Superintelligence Skepticism
If superintelligence skepticism is politicized, several factors point to it being highly effective, even more so than for the other issues in which skepticism has been politicized.First, some of the experts best positioned to resolve the debate are also deeply implicated in it. To the extent that superintelligence is a risk, the risk is driven by the computer scientists who would build superintelligence. These individuals have intimate knowledge of the technology and thus have an essential voice in the public debate (though not the only essential voice). This is distinct from issues such as tobacco or climate change, in which the risk is mainly assessed by outside experts. It would be as if the effect of tobacco on cancer was studied by the agronomists who cultivate tobacco crops, or if the science of climate change was studied by the geologists who map deposits of fossil fuels. With superintelligence, a substantial portion of the relevant experts have a direct incentive to avoid any restrictions on the technology, as do their employers. This could create a deep and enduring pool of highly persuasive skeptics.Second, superintelligence skepticism has deep roots in the mainstream AI computer science community. As noted above, this dates to the days of AI winter. Thus, skeptics may be abundant even where they are not funded by industry. Indeed, most of the skeptics described above do not appear to be speaking out of any industry ties, and thus would not have an industry conflict of interest. They could still have a conflict of interest from their desire in protect the reputation of their field, but this is a subtler matter. Insofar as they are perceived to not have a conflict of interest, they could be especially persuasive. Furthermore, even if their skepticism is honest and not intended for any political purposes, it could be used by others in dishonest and political ways.Third, superintelligence is a topic for which the uncertainty is inherently difficult to resolve. It is a hypothetical future technology that is qualitatively different from anything that currently exists. Furthermore, there is concern that its mere existence could be catastrophic, which could preclude certain forms of safety testing. It is thus a risk that defies normal scientific study. In this regard, it is similar to climate change: moderate climate change can already be observed, as can moderate forms of AI, but the potentially catastrophic forms have not yet materialized and possibly never will. However, climate projections can rely on some relatively simple physics—at its core, climate change largely reduces to basic physical chemistry and thermodynamics. (The physical chemistry covers the nature of greenhouse gasses, which are more transparent to some wavelengths of electromagnetic radiation than to others. The thermodynamics covers the heat transfer expected from greenhouse gas buildup. Both effects can be demonstrated in simple laboratory experiments. Climate change also involves indirect feedback effects on much of the Earth system, including clouds, ice, oceans, and ecosystems, which are often more complex and difficult to resolve and contribute to ongoing scientific uncertainty.) In contrast, AI projections must rely on notions of intelligence, which is not so simple at all. For this reason, it is less likely that scholarly communities will converge on any consensus position on superintelligence in the way that they have on other risks such as climate change.Fourth, some corporations that could develop superintelligence may be uniquely well positioned to influence public opinion. The corporations currently involved in artificial general intelligence research include some corporations that also play major roles in public media. As a leading social media platform, Facebook in particular has been found to be especially consequential for public opinion [[80](#B80-information-09-00209)]. Corporations that serve as information gateways, such as Baidu, Google, and Microsoft, also have unusual potential for influence. These corporations have opportunities to shape public opinion in ways that the tobacco, fossil fuel, and industrial chemicals industries cannot. While the AI corporations would not necessarily exploit these opportunities, it is an important factor to track.In summary, while it remains to be seen whether superintelligence skepticism will be politicized, there are some reasons for believing it will be, and that superintelligence would be an especially potent case of politicized skepticism. 5. Opportunities for Constructive Action
-----------------------------------------
Politicized superintelligence skepticism would not necessarily be harmful. As far as this paper is concerned, it is possible that, for superintelligence, skepticism is the correct view, meaning that superintelligence may not be built, may not be dangerous, or may not merit certain forms of imminent attention. (The paper of course assumes that superintelligence is worth some imminent attention, or otherwise it would not have been written.) It is also possible that, even if superintelligence is a major risk, government regulations could nonetheless be counterproductive, and politicized skepticism could help avoid that. That said, the history of politicized skepticism (especially risk–profit politicized skepticism) shows a tendency for harm, which suggests that politicized superintelligence skepticism could be harmful as well.With this in mind, one basic opportunity is to raise awareness about politicized skepticism within communities that discuss superintelligence. Superintelligence skeptics who are motivated by honest intellectual norms may not wish for their skepticism to be used politically. They can likewise be cautious about how to engage with potential political skeptics, such as by avoiding certain speaking opportunities in which their remarks would be used as a political tool instead of as a constructive intellectual contribution. Additionally, all people involved in superintelligence debates can insist on basic intellectual standards, above all by putting analysis before conclusions and not the other way around. These are the sorts of things that an awareness of politicized skepticism can help with.Another opportunity is to redouble efforts to build scientific consensus on superintelligence, and then to draw attention to it. Currently, there is no consensus. As noted above, superintelligence is an inherently uncertain topic and difficult to build consensus on. However, with some effort, it should be possible to at least make progress towards consensus. Of course, scientific consensus does not preclude politicized skepticism—ongoing climate skepticism attests to this. However, it can at least dampen the politicized skepticism. Indeed, recent research has found that the perception of scientific consensus increases acceptance of the underlying science [[81](#B81-information-09-00209)].A third opportunity is to engage with AI corporations to encourage them to avoid politicizing skepticism about superintelligence or other forms of AI. Politicized skepticism is not inevitable, and while corporate leaders may sometimes feel as though they have no choice, there may nonetheless be options. Furthermore, the options may be especially effective at this early stage in superintelligence research, in which corporations may have not yet established internal policy or practices.A fourth opportunity is to follow best practices in debunking misinformation in the event that superintelligence skepticism is politicized. There is a substantial literature on the psychology of debunking [[81](#B81-information-09-00209),[82](#B82-information-09-00209),[83](#B83-information-09-00209)]. A debunking handbook written for a general readership [[82](#B82-information-09-00209)] recommends: (1) focusing on the correct information to avoid cognitively reinforcing the false information; (2) preceding any discussion of the false information with a warning that it is false; and (3) when debunking false information, also give the correct information so that people are not left with a gap in their understanding of the topic. The handbook further cautions against using the information deficit model of human cognition, which proposes that mistaken beliefs can be corrected simply by providing the correct information. The information deficit model is widely used in science communication, but it has been repeatedly found to work poorly, especially in situations of contested science. This sort of advice could be helpful to efforts to counter superintelligence misinformation.Finally, the entire AI community should insist that policy be made based on an honest and balanced read of the current state of knowledge. Burden of proof requirements should not be abused for private gain. As with climate change and other global risks, the world cannot afford to prove that superintelligence would be catastrophic. By the time uncertainty is eliminated, it could be too late. 6. Conclusions
---------------
Some people believe that superintelligence could be a highly consequential technology, potentially even a transformative event in the course of human history, with either profoundly beneficial or extremely catastrophic effects. Insofar as this belief is plausible, superintelligence may be worth careful advance consideration, to ensure that the technology is handled successfully. Importantly, this advance attention should include social science and policy analysis, and not just computer science. Furthermore, even if belief in superintelligence is mistaken, it can nonetheless be significant as a social and political phenomenon. This is another reason for social science and policy analysis. This paper is a contribution to the social science and policy analysis of superintelligence. Furthermore, despite the unprecedented nature of superintelligence, this paper shows that there are important historical and contemporary analogs that can shed light on the issue. Much of what could occur for the development of superintelligence has already occurred for other technologies. Politicized skepticism is one example of this.One topic not covered in this paper is the prospect of beliefs that superintelligence will occur and/or will be harmful to be politicized. Such a phenomenon could be analogous to, for example, belief in large medical harms from nuclear power, or, phrased differently, skepticism about claims that nuclear power plants are medically safe. The scientific literature on nuclear power finds medical harms to be substantially lower than is commonly believed [[84](#B84-information-09-00209)]. Overstated concern (or “alarmism”) about nuclear power can likewise be harmful, for example by increasing use of fossil fuels. Similarly, the fossil fuel industry could politicize this belief for its own benefit. By the same logic, belief in superintelligence could also be politicized. This prospect is left for future research, although much of this paper’s analysis may be applicable.Perhaps the most important lesson of this paper is that the development of superintelligence could be a contentious political process. It could involve aggressive efforts by powerful actors—efforts that not only are inconsistent with basic intellectual ideals, but that also actively subvert those ideals for narrow, self-interested gain. This poses a fundamental challenge to those who seek to advance a constructive study of superintelligence.
Funding
-------
This research received no external funding.Acknowledgments
---------------
Tony Barrett, Phil Torres, Olle Häggström, Maurizio Tinnirello, Matthijs Maas, Roman Yampolskiy, and participants in a seminar hosted by the Center for Human-Compatible AI at UC Berkeley provided helpful feedback on an earlier version of this paper. All remaining errors are the author’s alone. The views expressed in this paper are the author’s and not necessarily the views of the Global Catastrophic Risk Institute.Conflicts of Interest
---------------------
The author declares no conflict of interest.References
----------
1. Oreskes, N.; Conway, E.M. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming; Bloomsbury: New York, NY, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Merchants+of+Doubt:+How+a+Handful+of+Scientists+Obscured+the+Truth+on+Issues+from+Tobacco+Smoke+to+Global+Warming&author=Oreskes,+N.&author=Conway,+E.M.&publication\_year=2010)]
2. Grandjean, P. Only One Chance: How Environmental Pollution Impairs Brain Development—And How to Protect the Brains of the Next Generation; Oxford University Press: Oxford, UK, 2013. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Only+One+Chance:+How+Environmental+Pollution+Impairs+Brain+Development%E2%80%94And+How+to+Protect+the+Brains+of+the+Next+Generation&author=Grandjean,+P.&publication\_year=2013)]
3. Selin, C. Expectations and the emergence of nanotechnology. Sci. Technol. Hum. Values \*\*2007\*\*, 32, 196–220. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Expectations+and+the+emergence+of+nanotechnology&author=Selin,+C.&publication\_year=2007&journal=Sci.+Technol.+Hum.+Values&volume=32&pages=196%E2%80%93220&doi=10.1177/0162243906296918)] [[CrossRef](https://doi.org/10.1177/0162243906296918)]
4. Jacques, P.J.; Dunlap, R.E.; Freeman, M. The organisation of denial: Conservative think tanks and environmental skepticism. Environ. Politics \*\*2008\*\*, 17, 349–385. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+organisation+of+denial:+Conservative+think+tanks+and+environmental+skepticism&author=Jacques,+P.J.&author=Dunlap,+R.E.&author=Freeman,+M.&publication\_year=2008&journal=Environ.+Politics&volume=17&pages=349%E2%80%93385&doi=10.1080/09644010802055576)] [[CrossRef](https://doi.org/10.1080/09644010802055576)]
5. Lewandowsky, S.; Oberauer, K. Motivated rejection of science. Curr. Dir. Psychol. Sci. \*\*2016\*\*, 25, 217–222. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Motivated+rejection+of+science&author=Lewandowsky,+S.&author=Oberauer,+K.&publication\_year=2016&journal=Curr.+Dir.+Psychol.+Sci.&volume=25&pages=217%E2%80%93222&doi=10.1177/0963721416654436)] [[CrossRef](https://doi.org/10.1177/0963721416654436)]
6. Lewandowsky, S.; Mann, M.E.; Brown, N.J.; Friedman, H. Science and the public: Debate, denial, and skepticism. J. Soc. Polit. Psychol. \*\*2016\*\*, 4, 537–553. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Science+and+the+public:+Debate,+denial,+and+skepticism&author=Lewandowsky,+S.&author=Mann,+M.E.&author=Brown,+N.J.&author=Friedman,+H.&publication\_year=2016&journal=J.+Soc.+Polit.+Psychol.&volume=4&pages=537%E2%80%93553&doi=10.5964/jspp.v4i2.604)] [[CrossRef](https://doi.org/10.5964/jspp.v4i2.604)][[Green Version](http://jspp.psychopen.eu/article/download/604/pdf)]
7. Supran, G.; Oreskes, N. Assessing ExxonMobil’s climate change communications (1977–2014). Environ. Res. Lett. \*\*2017\*\*, 12, 084019. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Assessing+ExxonMobil%E2%80%99s+climate+change+communications+(1977%E2%80%932014)&author=Supran,+G.&author=Oreskes,+N.&publication\_year=2017&journal=Environ.+Res.+Lett.&volume=12&pages=084019&doi=10.1088/1748-9326/aa815f)] [[CrossRef](https://doi.org/10.1088/1748-9326/aa815f)][[Green Version](http://iopscience.iop.org/article/10.1088/1748-9326/aa815f/pdf)]
8. McGinnis, J.O. Accelerating Ai. Northwest. Univ. Law Rev. \*\*2010\*\*, 104, 366–381. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Accelerating+Ai&author=McGinnis,+J.O.&publication\_year=2010&journal=Northwest.+Univ.+Law+Rev.&volume=104&pages=366%E2%80%93381&doi=10.2139/ssrn.1593851)] [[CrossRef](https://doi.org/10.2139/ssrn.1593851)]
9. Sotala, K.; Yampolskiy, R.V. Responses to catastrophic AGI risk: A survey. Phys. Scr. \*\*2014\*\*, 90, 018001. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Responses+to+catastrophic+AGI+risk:+A+survey&author=Sotala,+K.&author=Yampolskiy,+R.V.&publication\_year=2014&journal=Phys.+Scr.&volume=90&pages=018001&doi=10.1088/0031-8949/90/1/018001)] [[CrossRef](https://doi.org/10.1088/0031-8949/90/1/018001)]
10. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. VA Environ. Law J. \*\*2013\*\*, 31, 307–364. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Minimizing+global+catastrophic+and+existential+risks+from+emerging+technologies+through+international+law&author=Wilson,+G.&publication\_year=2013&journal=VA+Environ.+Law+J.&volume=31&pages=307%E2%80%93364)]
11. Yampolskiy, R.; Fox, J. Safety engineering for artificial general intelligence. Topoi \*\*2013\*\*, 32, 217–226. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Safety+engineering+for+artificial+general+intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication\_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]
12. Goertzel, B. The Corporatization of AI Is a Major Threat to Humanity. H+ Magazine, 21 July 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Corporatization+of+AI+Is+a+Major+Threat+to+Humanity&author=Goertzel,+B.&publication\_year=2017)]
13. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. \*\*2017\*\*, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication\_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]
14. Butler, S. Darwin among the Machines. The Press, 13 June 1863. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Darwin+among+the+Machines&author=Butler,+S.&publication\_year=1863)]
15. Good, I.J. Speculations concerning the first ultraintelligent machine. Adv. Comput. \*\*1965\*\*, 6, 31–88. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Speculations+concerning+the+first+ultraintelligent+machine&author=Good,+I.J.&publication\_year=1965&journal=Adv.+Comput.&volume=6&pages=31%E2%80%9388)]
16. Sandberg, A. An overview of models of technological singularity. In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future; More, M., Vita-More, N., Eds.; Wiley: New York, NY, USA, 2010; pp. 376–394. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=An+overview+of+models+of+technological+singularity&author=Sandberg,+A.&publication\_year=2010&pages=376%E2%80%93394)]
17. Bostrom, N. How Long before Superintelligence? 1998. Available online: (accessed on 18 August 2018).
18. Goertzel, B. Artificial general intelligence: Concept, state of the art, and future prospects. J. Artif. Gen. Intell. \*\*2014\*\*, 5, 1–48. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Artificial+general+intelligence:+Concept,+state+of+the+art,+and+future+prospects&author=Goertzel,+B.&publication\_year=2014&journal=J.+Artif.+Gen.+Intell.&volume=5&pages=1%E2%80%9348&doi=10.2478/jagi-2014-0001)] [[CrossRef](https://doi.org/10.2478/jagi-2014-0001)]
19. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1. 2017. Available online: (accessed on 18 August 2018).
20. Legg, S. Machine Super Intelligence. Ph.D. Thesis, University of Lugano, Lugano, Switzerland, 2008. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Machine+Super+Intelligence&author=Legg,+S.&publication\_year=2008)]
21. Crevier, D. AI: The Tumultuous History of the Search for Artificial Intelligence; Basic Books: New York, NY, USA, 1993. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=AI:+The+Tumultuous+History+of+the+Search+for+Artificial+Intelligence&author=Crevier,+D.&publication\_year=1993)]
22. McCorduck, P. Machines Who Think: 25th Anniversary Edition; A.K. Peters: Natick, MA, USA, 2004. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Machines+Who+Think:+25th+Anniversary+Edition&author=McCorduck,+P.&publication\_year=2004)]
23. Hendler, J. Avoiding another AI winter. IEEE Intell. Syst. \*\*2008\*\*, 23, 2–4. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Avoiding+another+AI+winter&author=Hendler,+J.&publication\_year=2008&journal=IEEE+Intell.+Syst.&volume=23&pages=2%E2%80%934&doi=10.1109/MIS.2008.20)] [[CrossRef](https://doi.org/10.1109/MIS.2008.20)]
24. Dreyfus, H. Alchemy and AI. RAND Corporation Document P-3244. 1965. Available online: (accessed on 18 August 2018).
25. Descartes, R. A Discourse on Method. Project Gutenberg eBook. 1637. Available online: (accessed on 18 August 2018).
26. Chalmers, D.J. The singularity: A philosophical analysis. J. Conscious. Stud. \*\*2010\*\*, 17, 7–65. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+singularity:+A+philosophical+analysis&author=Chalmers,+D.J.&publication\_year=2010&journal=J.+Conscious.+Stud.&volume=17&pages=7%E2%80%9365)]
27. Eden, A.H.; Moor, J.H.; Soraker, J.H.; Steinhart, E. (Eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment; Springer: Berlin, Germany, 2013. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Singularity+Hypotheses:+A+Scientific+and+Philosophical+Assessment&author=Eden,+A.H.&author=Moor,+J.H.&author=Soraker,+J.H.&author=Steinhart,+E.&publication\_year=2013)]
28. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication\_year=2014)]
29. Callaghan, V.; Miller, J.; Yampolskiy, R.; Armstrong, S. (Eds.) The Technological Singularity: Managing the Journey; Springer: Berlin, Germany, 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Technological+Singularity:+Managing+the+Journey&author=Callaghan,+V.&author=Miller,+J.&author=Yampolskiy,+R.&author=Armstrong,+S.&publication\_year=2017)]
30. Rawlinson, K. Microsoft’s Bill Gates Insists AI Is a Threat. BBC, 29 January 2015. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Microsoft%E2%80%99s+Bill+Gates+Insists+AI+Is+a+Threat&author=Rawlinson,+K.&publication\_year=2015)]
31. Cellan-Jones, R. Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC, 2 December 2014. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Stephen+Hawking+Warns+Artificial+Intelligence+Could+End+Mankind&author=Cellan-Jones,+R.&publication\_year=2014)]
32. Dowd, M. Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse. Vanity Fair, 26 March 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Elon+Musk%E2%80%99s+Billion-Dollar+Crusade+to+Stop+the+A.I.+Apocalypse&author=Dowd,+M.&publication\_year=2017)]
33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature \*\*2015\*\*, 521, 436–444. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Deep+learning&author=LeCun,+Y.&author=Bengio,+Y.&author=Hinton,+G.&publication\_year=2015&journal=Nature&volume=521&pages=436%E2%80%93444&doi=10.1038/nature14539&pmid=26017442)] [[CrossRef](https://doi.org/10.1038/nature14539)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/26017442)]
34. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature \*\*2016\*\*, 529, 484–489. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Mastering+the+game+of+Go+with+deep+neural+networks+and+tree+search&author=Silver,+D.&author=Huang,+A.&author=Maddison,+C.J.&author=Guez,+A.&author=Sifre,+L.&author=Van+Den+Driessche,+G.&author=Schrittwieser,+J.&author=Antonoglou,+I.&author=Panneershelvam,+V.&author=Lanctot,+M.&publication\_year=2016&journal=Nature&volume=529&pages=484%E2%80%93489&doi=10.1038/nature16961&pmid=26819042)] [[CrossRef](https://doi.org/10.1038/nature16961)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/26819042)]
35. Bringsjord, S. Belief in the singularity is logically brittle. J. Conscious. Stud. \*\*2012\*\*, 19, 14–20. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Belief+in+the+singularity+is+logically+brittle&author=Bringsjord,+S.&publication\_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=14%E2%80%9320)]
36. Chalmers, D. The Singularity: A reply. J. Conscious. Stud. \*\*2012\*\*, 19, 141–167. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Singularity:+A+reply&author=Chalmers,+D.&publication\_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=141%E2%80%93167)]
37. McDermott, D. Response to the singularity by David Chalmers. J. Conscious. Stud. \*\*2012\*\*, 19, 167–172. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Response+to+the+singularity+by+David+Chalmers&author=McDermott,+D.&publication\_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=167%E2%80%93172)]
38. Crawford, K. Artificial Intelligence’s White Guy Problem. New York Times, 25 June 2016. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Artificial+Intelligence%E2%80%99s+White+Guy+Problem&author=Crawford,+K.&publication\_year=2016)]
39. Garling, C. Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, not just Machines. Wired, May 2015. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Andrew+Ng:+Why+%E2%80%98Deep+Learning%E2%80%99+Is+a+Mandate+for+Humans,+not+just+Machines&author=Garling,+C.&publication\_year=2015)]
40. Etzioni, O. No, the Experts Don’t Think Superintelligent AI Is a Threat to Humanity. MIT Technology Review, 20 September 2016. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=No,+the+Experts+Don%E2%80%99t+Think+Superintelligent+AI+Is+a+Threat+to+Humanity&author=Etzioni,+O.&publication\_year=2016)]
41. Dafoe, A.; Russell, S. Yes, We Are Worried about the Existential Risk of Artificial Intelligence. MIT Technology Review, 2 November 2016. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Yes,+We+Are+Worried+about+the+Existential+Risk+of+Artificial+Intelligence&author=Dafoe,+A.&author=Russell,+S.&publication\_year=2016)]
42. Baum, S.D. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Soc. \*\*2017\*\*. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Reconciliation+between+factions+focused+on+near-term+and+long-term+artificial+intelligence&author=Baum,+S.D.&publication\_year=2017&journal=AI+Soc.&doi=10.1007/s00146-017-0734-3)] [[CrossRef](https://doi.org/10.1007/s00146-017-0734-3)]
43. Goertzel, B. Superintelligence: Fears, promises and potentials. J. Evol. Technol. \*\*2015\*\*, 25, 55–87. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Superintelligence:+Fears,+promises+and+potentials&author=Goertzel,+B.&publication\_year=2015&journal=J.+Evol.+Technol.&volume=25&pages=55%E2%80%9387)]
44. Baum, S.D.; Barrett, A.M.; Yampolskiy, R.V. Modeling and interpreting expert disagreement about artificial superintelligence. Informatica \*\*2017\*\*, 41, 419–428. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Modeling+and+interpreting+expert+disagreement+about+artificial+superintelligence&author=Baum,+S.D.&author=Barrett,+A.M.&author=Yampolskiy,+R.V.&publication\_year=2017&journal=Informatica&volume=41&pages=419%E2%80%93428)]
45. Bieger, J.; Thórisson, K.R.; Wang, P. Safe baby AGI. In Proceedings of the 8th International Conference on Artificial General Intelligence (AGI), Berlin, Germany, 22–25 July 2015; Bieger, J., Goertzel, B., Potapov, A., Eds.; Springer: Cham, Switzerland, 2015; pp. 46–49. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Safe+baby+AGI&conference=Proceedings+of+the+8th+International+Conference+on+Artificial+General+Intelligence+(AGI)&author=Bieger,+J.&author=Th%C3%B3risson,+K.R.&author=Wang,+P.&publication\_year=2015&pages=46%E2%80%9349)]
46. Searle, J.R. What your computer can’t know. The New York Review of Books, 9 October 2014. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=What+your+computer+can%E2%80%99t+know&author=Searle,+J.R.&publication\_year=2014)]
47. Nichols, T. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters; Oxford University Press: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Death+of+Expertise:+The+Campaign+against+Established+Knowledge+and+Why+It+Matters&author=Nichols,+T.&publication\_year=2017)]
48. De Vrieze, J. ‘Science wars’ veteran has a new mission. Science \*\*2017\*\*, 358, 159. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=%E2%80%98Science+wars%E2%80%99+veteran+has+a+new+mission&author=De+Vrieze,+J.&publication\_year=2017&journal=Science&volume=358&pages=159&doi=10.1126/science.358.6360.159&pmid=29026024)] [[CrossRef](https://doi.org/10.1126/science.358.6360.159)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/29026024)]
49. Stirling, M. Merchants of Consensus: A Public Battle against Exxon. 2017. Available online: (accessed on 18 August 2018).
50. Hampshire, G. Alberta Government Cool on Controversial Climate Change Speaker. CBC News, 19 January 2018. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Alberta+Government+Cool+on+Controversial+Climate+Change+Speaker&author=Hampshire,+G.&publication\_year=2018)]
51. Marshall, G. Don’t Even Think About It: Why Our Brains Are Wired to Ignore Climate Change; Bloomsbury: New York, NY, USA, 2014. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Don%E2%80%99t+Even+Think+About+It:+Why+Our+Brains+Are+Wired+to+Ignore+Climate+Change&author=Marshall,+G.&publication\_year=2014)]
52. Whitehouse, S. Captured: The Corporate Infiltration of American Democracy; The New Press: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Captured:+The+Corporate+Infiltration+of+American+Democracy&author=Whitehouse,+S.&publication\_year=2017)]
53. Kirk, R. Conservation activism is a healthy sign. Baltimore Sun, 4 May 1970. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Conservation+activism+is+a+healthy+sign&author=Kirk,+R.&publication\_year=1970)]
54. Boykoff, M.T.; Boykoff, J.M. Balance as bias: Global warming and the US prestige press. Glob. Environ. Chang. \*\*2004\*\*, 14, 125–136. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Balance+as+bias:+Global+warming+and+the+US+prestige+press&author=Boykoff,+M.T.&author=Boykoff,+J.M.&publication\_year=2004&journal=Glob.+Environ.+Chang.&volume=14&pages=125%E2%80%93136&doi=10.1016/j.gloenvcha.2003.10.001)] [[CrossRef](https://doi.org/10.1016/j.gloenvcha.2003.10.001)]
55. Baum, S.D.; Haqq-Misra, J.D.; Karmosky, C. Climate change: Evidence of human causes and arguments for emissions reduction. Sci. Eng. Ethics \*\*2012\*\*, 18, 393–410. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Climate+change:+Evidence+of+human+causes+and+arguments+for+emissions+reduction&author=Baum,+S.D.&author=Haqq-Misra,+J.D.&author=Karmosky,+C.&publication\_year=2012&journal=Sci.+Eng.+Ethics&volume=18&pages=393%E2%80%93410&doi=10.1007/s11948-011-9270-6&pmid=21516371)] [[CrossRef](https://doi.org/10.1007/s11948-011-9270-6)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/21516371)]
56. Oreskes, N. The scientific consensus on climate change. Science \*\*2004\*\*, 306, 1686. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+scientific+consensus+on+climate+change&author=Oreskes,+N.&publication\_year=2004&journal=Science&volume=306&pages=1686&doi=10.1126/science.1103618&pmid=15576594)] [[CrossRef](https://doi.org/10.1126/science.1103618)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/15576594)]
57. CNA Military Advisory Board. National Security and the Threat of Climate Change; The CNA Corporation: Alexandria, VA, USA, 2007. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=National+Security+and+the+Threat+of+Climate+Change&author=CNA+Military+Advisory+Board&publication\_year=2007)]
58. Lighthill, J. Artificial Intelligence: A Paper Symposium; Science Research Council: Swindon, UK, 1973. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Artificial+Intelligence:+A+Paper+Symposium&author=Lighthill,+J.&publication\_year=1973)]
59. Menzies, T. 21st-century AI: Proud, not smug. IEEE Intell. Syst. \*\*2003\*\*, 18, 18–24. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=21st-century+AI:+Proud,+not+smug&author=Menzies,+T.&publication\_year=2003&journal=IEEE+Intell.+Syst.&volume=18&pages=18%E2%80%9324&doi=10.1109/MIS.2003.1200723)] [[CrossRef](https://doi.org/10.1109/MIS.2003.1200723)]
60. Bentley, P.J. The three laws of artificial intelligence: Dispelling common myths. In Should We Fear Artificial Intelligence? In-Depth Analysis; Boucher, P., Ed.; European Parliamentary Research Service, Strategic Foresight Unit: Brussels, Belgium, 2018; pp. 6–12. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+three+laws+of+artificial+intelligence:+Dispelling+common+myths&author=Bentley,+P.J.&publication\_year=2018&pages=6%E2%80%9312)]
61. Häggström, O. A spectacularly uneven AI report. Häggström Hävdar, 30 March 2018. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=A+spectacularly+uneven+AI+report&author=H%C3%A4ggstr%C3%B6m,+O.&publication\_year=2018)]
62. Marcus, G. Artificial intelligence is stuck. Here’s how to move it forward. New York Times, 29 July 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Artificial+intelligence+is+stuck.+Here%E2%80%99s+how+to+move+it+forward&author=Marcus,+G.&publication\_year=2017)]
63. Bengtsson, B. Pinker is dangerous. Jag är Här, 22 October 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Pinker+is+dangerous&author=Bengtsson,+B.&publication\_year=2017)]
64. Häggström, O. The AI meeting in Brussels last week. Häggström Hävdar, 23 October 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+AI+meeting+in+Brussels+last+week&author=H%C3%A4ggstr%C3%B6m,+O.&publication\_year=2017)]
65. Torres, P. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Project for Future Human Flourishing Technical Report 2, Version 1.2. 2018. Available online: (accessed on 21 August 2018).
66. Clifford, C. Google billionaire Eric Schmidt: Elon Musk is ‘exactly wrong’ about A.I. because he ‘doesn’t understand’. CNBC, 29 May 2018. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Google+billionaire+Eric+Schmidt:+Elon+Musk+is+%E2%80%98exactly+wrong%E2%80%99+about+A.I.+because+he+%E2%80%98doesn%E2%80%99t+understand%E2%80%99&author=Clifford,+C.&publication\_year=2018)]
67. Bogost, I. Why Zuckerberg and Musk are fighting about the robot future. The Atlantic, 27 July 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Why+Zuckerberg+and+Musk+are+fighting+about+the+robot+future&author=Bogost,+I.&publication\_year=2017)]
68. Cass, O. The problem with climate catastrophizing. Foreign Affairs, 21 March 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+problem+with+climate+catastrophizing&author=Cass,+O.&publication\_year=2017)]
69. Cass, O. How to worry about climate change. National Affairs, Winter2017; 115–131. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=How+to+worry+about+climate+change&author=Cass,+O.&publication\_year=2017)]
70. Baum, S.D. The great downside dilemma for risky emerging technologies. Phys. Scr. \*\*2014\*\*, 89, 128004. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+great+downside+dilemma+for+risky+emerging+technologies&author=Baum,+S.D.&publication\_year=2014&journal=Phys.+Scr.&volume=89&pages=128004&doi=10.1088/0031-8949/89/12/128004)] [[CrossRef](https://doi.org/10.1088/0031-8949/89/12/128004)][[Green Version](http://iopscience.iop.org/article/10.1088/0031-8949/89/12/128004/pdf)]
71. Heath, A. Mark Zuckerberg’s plan to create non-voting Facebook shares is going to trial in September. Business Insider, 4 May 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Mark+Zuckerberg%E2%80%99s+plan+to+create+non-voting+Facebook+shares+is+going+to+trial+in+September&author=Heath,+A.&publication\_year=2017)]
72. Ingram, M. At Alphabet, there are only two shareholders who matter. Fortune, 7 June 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=At+Alphabet,+there+are+only+two+shareholders+who+matter&author=Ingram,+M.&publication\_year=2017)]
73. Broockman, D.; Ferenstein, G.F.; Malhotra, N. The Political Behavior of Wealthy Americans: Evidence from Technology Entrepreneurs; Stanford Graduate School of Business Working Paper, No. 3581; Stanford Graduate School of Business: Stanford, CA, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Political+Behavior+of+Wealthy+Americans:+Evidence+from+Technology+Entrepreneurs&author=Broockman,+D.&author=Ferenstein,+G.F.&author=Malhotra,+N.&publication\_year=2017)]
74. Edsall, T.B. Silicon Valley takes a right turn. New York Times, 12 January 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Silicon+Valley+takes+a+right+turn&author=Edsall,+T.B.&publication\_year=2017)]
75. Mullins, B. Paying professors: Inside Google’s academic influence campaign. Wall Street Journal, 15 July 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Paying+professors:+Inside+Google%E2%80%99s+academic+influence+campaign&author=Mullins,+B.&publication\_year=2017)]
76. Taplinaug, J. Google’s disturbing influence over think tanks. New York Times, 30 August 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Google%E2%80%99s+disturbing+influence+over+think+tanks&author=Taplinaug,+J.&publication\_year=2017)]
77. Tiku, N. New America chair says Google didn’t prompt critic’s ouster. Wired, 6 September 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=New+America+chair+says+Google+didn%E2%80%99t+prompt+critic%E2%80%99s+ouster&author=Tiku,+N.&publication\_year=2017)]
78. Marquis, C.; Toffel, M.W.; Zhou, Y. Scrutiny, norms, and selective disclosure: A global study of greenwashing. Organ. Sci. \*\*2016\*\*, 27, 483–504. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Scrutiny,+norms,+and+selective+disclosure:+A+global+study+of+greenwashing&author=Marquis,+C.&author=Toffel,+M.W.&author=Zhou,+Y.&publication\_year=2016&journal=Organ.+Sci.&volume=27&pages=483%E2%80%93504&doi=10.1287/orsc.2015.1039)] [[CrossRef](https://doi.org/10.1287/orsc.2015.1039)]
79. Mack, E. Why Elon Musk spent $10 million to keep artificial intelligence friendly. Forbes, 15 January 2015. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Why+Elon+Musk+spent+$10+million+to+keep+artificial+intelligence+friendly&author=Mack,+E.&publication\_year=2015)]
80. Pickard, V. Media failures in the age of Trump. Political Econ. Commun. \*\*2017\*\*, 4, 118–122. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Media+failures+in+the+age+of+Trump&author=Pickard,+V.&publication\_year=2017&journal=Political+Econ.+Commun.&volume=4&pages=118%E2%80%93122)]
81. Lewandowsky, S.; Gignac, G.E.; Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nat. Clim. Chang. \*\*2013\*\*, 3, 399–404. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+pivotal+role+of+perceived+scientific+consensus+in+acceptance+of+science&author=Lewandowsky,+S.&author=Gignac,+G.E.&author=Vaughan,+S.&publication\_year=2013&journal=Nat.+Clim.+Chang.&volume=3&pages=399%E2%80%93404&doi=10.1038/nclimate1720)] [[CrossRef](https://doi.org/10.1038/nclimate1720)]
82. Cook, J.; Lewandowsky, S. The Debunking Handbook. St. Lucia, Australia: University of Queensland. 2011. Available online: (accessed on 18 August 2018).
83. Chan, M.P.; Jones, C.R.; Hall Jamieson, K.; Albarracín, D. Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. \*\*2017\*\*, 28, 1531–1546. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Debunking:+A+meta-analysis+of+the+psychological+efficacy+of+messages+countering+misinformation&author=Chan,+M.P.&author=Jones,+C.R.&author=Hall+Jamieson,+K.&author=Albarrac%C3%ADn,+D.&publication\_year=2017&journal=Psychol.+Sci.&volume=28&pages=1531%E2%80%931546&doi=10.1177/0956797617714579&pmid=28895452)] [[CrossRef](https://doi.org/10.1177/0956797617714579)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/28895452)]
84. Slovic, P. The perception gap: Radiation and risk. Bull. At. Sci. \*\*2012\*\*, 68, 67–75. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+perception+gap:+Radiation+and+risk&author=Slovic,+P.&publication\_year=2012&journal=Bull.+At.+Sci.&volume=68&pages=67%E2%80%9375&doi=10.1177/0096340212444870)] [[CrossRef](https://doi.org/10.1177/0096340212444870)]
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (). |
194faad6-9bb6-4a5b-be18-58241e033e7b | trentmkelly/LessWrong-43k | LessWrong | MIRI Donation Collaboration Station
As you may know, on May 6, there will be a large one-day price-matching fundraiser for Bay Area Charities.
The relevant details are right here at MIRI's official website.
And this is the webpage to visit to donate.
For those of you who didn't read the two links above, here's the important information.
> On May 6, MIRI is participating in Silicon Valley Gives...
>
> Why is this exciting for supporters of MIRI? Many reasons, but here are a few.
>
>
>
> * Over $250,000 of matching prizes and funds up for grabs, from sources that normally wouldn't contribute to MIRI:
> * Two-to-one dollar match up to $50,000 during the midnight hour.
> * $2,000 dollar prize for the nonprofit that has the most individual gift in an hour, every hour, for 24 hours.
> * $150 added to a random donation each hour, every hour for 24 hours.
> * Dollar for Dollar match up to $35,000 during the 7AM hour, and $50,000 during the noon, 6 PM, and 7 PM hours.
> * Local NBC stations, radio stations, businesses, and Bay Area foundations will be promoting the Silicon Valley Day of Giving on May 6th. So if MIRI is making a splash with our fundraising that day, it's possible we'll draw attention from media and by extension new donors.
>
>
>
> Making the most of this opportunity will require some cleverness and a lot of coordination. We are going to need all the help we can get...
>
>
>
> 1. If you are interested in supporting MIRI with a large donation during the fundraiser... Get in touch with Malo at malo@intelligence.org
> 2. All MIRI supporters have the potential to make a big impact if we can all work together in a coordinated manner. Sign up below (The sign up sheet is on the MIRI announcement page) to receive updates on our strategy leading up to the event, and updates throughout the fundraiser on the best times to give and promote the event.
>
>
A group coordination problem? We can do that easily, right?
So, in the comments, feel free to discuss donation strategy, coord |
1798a3d1-7c8b-4cfc-aaff-1fb3f46a9cff | trentmkelly/LessWrong-43k | LessWrong | Antijargon Project
When a group of people talk to each other a lot they develop terms that they can use in place of larger concepts. This makes it easier to talk to people inside the group, but then it's harder to talk about the same ideas with people outside the group. If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words. Especially when the group jargon term offers only a minor advantage over the non-jargon phrasing I think this is a bad tradeoff if you also want to speak to people outside the group.
Recently I've been working on using as little jargon as possible. Pushing myself to speak conventionally, even when among people who would understand weird terms a little faster, can be frustrating, but I think I'm also getting better at it.
I also posted this on my blog |
7824b13b-d8ad-4df3-aba8-b037bb7da57a | trentmkelly/LessWrong-43k | LessWrong | AI Goal Alignment Entry: How to Teach a Computer to Love
How to Teach a Computer to Love
Foreword
After weeks and weeks of generating and organizing ideas, I’ve concluded that I do not have a complete answer to the problem of AI goal alignment. However, I believe I have some important concepts that provide essential starting points for addressing it. Some of these concepts are ones I’ve mentioned before on my blog, but this is the first time I’ve used them in the context of artificial general intelligence. I am still uncertain which concepts are going to be challenged by this community and which concepts will be accepted as pre-established, but I’ll respond to questions as they come. I hope these ideas prove helpful to others who are working on this problem.
Introduction
It is difficult to overestimate the importance to humanity of paying close attention to the creation of artificial general intelligence (AGI, or generalized AI). With the ability to upgrade its software and hardware to make itself more able to solve problems and overcome obstacles, an AGI is extremely likely to become the dominant power in our world (as elaborated in Nick Bostrom’s Superintelligence). Due to this probable dominance, and its self-alteration ability, an AGI could very well become similar to a member of the Fair Folk (the faeries of lore from the British Isles): immensely powerful and potentially quite alien in its motivations. In order to somehow bind the will of such a being so that it does not decide to enslave or wipe out humanity, we need to understand several important and fundamental concepts.
To get a trivial argument out of the way, we can't use hardwired rules to ensure that an AGI protects humanity and its goals in any meaningful sense. Rules are based on semantics, and semantics are simplified representations of some aspect of reality. In any semantic system, no matter how well it is clarified, there is still ambiguity, and an agent unfamiliar with or uninterested in the purpose of the system can interpret the ambiguity in |
3c033eab-e1fe-4c3a-b14f-b68708711620 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Eliciting responses to Marc Andreessen's "Why AI Will Save the World"
Marc Andreessen's *Why AI Will Save the World* has rapidly gained readership, benefiting from his 1.2 Million followers on Twitter. In the piece, he employs many underhanded insults about the AI Safety community and does a poor analysis of Millennialism. His piece also falls into the trap of "AI won't have intention and therefore wont *want* to kill us = no need to consider x-risk from AGI" There is so much wrong about this argument, but I would love to hear the EA communities' responses to this piece. I am hoping to engage Andreessen via an interview or debate in the future but for now would really love to hear the EA and AI Safety communities' gut checks and arguments to different points made in his piece.
No doubt similar arguments are likely to be leveled and sharing effective responses seems to be high value for communication purposes. |
b5309970-1005-4473-af42-8361d219a1e6 | trentmkelly/LessWrong-43k | LessWrong | Overall numbers won't show the English strain coming
[Epistemic status: half-baked thoughts that might be wrong. Apologies if this point has already been made.]
[Edit: here's a version of this post that is meant for a more general audience.]
Here is a plot of daily new positive COVID tests in the U.S. in the first half of March 2020.
While the rationalist community figured out the whole exponential growth thing in February, it's no surprise that the general public was worried by mid-March: the virus was here and was growing fast. This was evident by the time there were 1,000 new positive tests per day. The lockdown began around mid-March, which is way worse than if it had begun in early March, but much, much better than if it had begun in late March, by which time we had tens of thousands of new positive tests per day.
The situation with the English strain is different. One way in which it's better is that we now have adequate testing. But there's a scary way in which it's worse.
The reason that the chart above is scary is because on any given day there were many times more cases than just a week before. I won't cherry-pick March 15th, but e.g. on March 12th there were 7 times more cases than on March 5th. And the reason that this trend was so evident is that the exponential growth drowned out any day-to-day noise.
This won't be the case with the English strain, at least if the general public (or public officials) use overall numbers to guide their planning. That's because, instead of seeing an exponential trend, i.e. something like cases(t)=1.3t plus noise, we'll be looking at something like cases(t)=200,000+1.3t plus noise (where 200,000 is the currently dominant strain and 1.3^t is the contribution from the English strain). Here's a simulation of what that looks like if the noise is normal with standard deviation 25%.
cases(t) = 200,000 + 1.3^t + random noise
Remember that people starting freaking out about COVID around when there were 1,000 new positive tests per day. So here's my question: where on this ch |
6853ac28-1207-4573-8a26-f47ffc13d66a | StampyAI/alignment-research-dataset/arbital | Arbital | A googol
$10^{100},$ i.e., 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.
The search engine Google is named after the number googol. (The difference for the spelling, as the apocryphal story goes, comes from the fact that one of the early Google investors misspelled the name on a check.) |
b0f282a5-af71-4918-a155-370cdb2dff60 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | DeepMind: The Podcast (S1 trailer)
Fry: Coming soon, a podcast series
that gives you the inside track
on how artificial intelligence
is being created
and how it could change our lives
and the society that we live in.
person: All the sudden,
you've got a vast number--
literally a number of options
that is in the billions.
person: Results are amazing.
I think they're jaw-dropping.
person: The rules and heuristics
don't get better over time, but AI does.
person: I think it's gonna be
the most amazing
transformative technology
that humanity's ever invented.
Fry: For the past 12 months,
we've been tracking
the latest work of scientists,
researchers, and engineers at DeepMind.
robotic voice: Go up.
Fry: Ready to go?
person: I'll try to teach you
to make a sound.
Fry: And you're training me,
essentially, with reinforcement learning
and my reward function
is getting you to be happy.
Meep, and beep, boop.
person: Boop, bleep. Ahh!
Fry: And boop.
person: I'd go with the first one.
Fry: Welcome to "DeepMind: The Podcast."
I'm Hannah Fry.
I'm a mathematician who has worked
with algorithms for almost a decade.
Just in this series,
we've looked at energy conservation,
medical diagnosis,
and protein folding,
all of which certainly show
the machine's ability
to solve hard problems.
person: We've been able
to make a step change
on a hard problem that's been worked on
for over 50 years.
Fry: Of all of the results
that've come out of DeepMind,
this is the one that's got
the scientific community most excited.
person: Yes.
person: The sort of opportunity
to explore and search the space
is really something that's well designed
for AI to do.
It's a very efficient search algorithm.
person: Amazingly, we discovered
that the system
which had learned completely for itself
without a single piece of human knowledge
ended up being far stronger.
Fry: [gasps] I didn't know that.
We're looking at how
they're approaching the science of AI,
and some of the tricky decisions
that the whole field is wrestling with.
I think it's important
that you are intentional
about why you're building this.
And if you start from that premise,
then I think you're more likely
to do the good
that you hoped you were going to do.
person: So we don't just want
human imitation.
We want superhuman capabilities,
but without unsafe behavior.
Fry: But if anyone has an idea
of what it will take,
it's Demis Hassabis,
the CEO and co-founder of DeepMind.
Hassabis: All the big questions,
you know, the meaning of life,
how did the universe start,
what is consciousness--
all these questions, which I feel like
a blaring claxon in my mind
that I would like to understand.
And my attempt at doing that
is to build AI first.
Fry: So whether you want to know more
about where technology is headed,
or want to be inspired
on your own AI journey,
then you've come to the right place.
Hassabis: I hope that what people
are going to get out of this series
is a better understanding
of artificial intelligence,
and I hope they also
get a great feeling
for how exhilarating an endeavor
and a journey that we're on here.
Fry: "DeepMind: The Podcast"
with me, Hannah Fry.
Coming soon to your podcast provider. |
932ebcff-2384-4e93-af72-453804f430ad | trentmkelly/LessWrong-43k | LessWrong | Controlling AGI Risk
A theory of AGI safety based on constraints and affordances.
I've got this proto-idea of what's missing in much public discussion and action on AI safety. I'm hoping that by sharing it here, the hive-mind might come together and turn it into something useful.
Effective control of AI risk requires a broader approach than those taken so far. Efforts to-date have largely gravitated into the two camps of value alignment and governance. Value alignment aims to design AI systems that reliably act in the best interest of humans. Governance efforts aim to constrain people who develop, deploy or use AI to do so in ways that ensure the AI doesn't cause unacceptable harm.
These two camps are each and together necessary but insufficient to adequately control AI risk.
Firstly, AI capabilities emulate human cognitive capabilities. Their potential applications are so broad that their scope for application transcends all previous technologies. Most of the thinking and action-to-date on controlling AI risk has been based on how we've controlled the risks of previous technologies such as electricity, mechanized transport, and nuclear weapons. So far, we've mostly thought of AI as a technology to be used by humans, not as itself a user of technology.
Secondly, the acceleration of AI evolution is unlikely to decrease; the converse looks more likely, that increasingly capable and powerful AI will further accelerate the ongoing development of AI capability (including via self-improvement). Traditional governance mechanisms can't keep pace with this and any value alignment of systems could be transcended by the next emergent system. Just as AI is likely to impact, interact with, and become embedded in the whole of society, whole-of-society risk control practices must evolve.
AI systems are already becoming embedded in large swathes of society.
It is likely that AGI will soon be here.
Control of risk from AGI needs to be as ubiquitous as control of risk from people.
Definitions:
|
2b6b4e07-48c8-4f8e-8174-82743378b226 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Cambridge UK, Melbourne, Moscow, Munich, Pittsburgh, Washington DC
This summary was posted to LW main on October 26th. The following summary is here.
For LW readers under 20: Note that the Thiel Fellowships (20 under 20) are now open for their next round of applications, and as they put it, "you have a huge readership of folk who would make great applicants". More info here.
There are upcoming irregularly scheduled Less Wrong meetups in:
* Moscow: Biases, Applied Rationality, Visual Thinking: 27 October 2012 04:00PM
* Munich Meetup, EDIT: October 28th: 28 October 2012 03:00PM
* Washington DC Ethics meetup: 28 October 2012 03:00PM
* Pittsburgh: Belief as attire: 30 October 2012 06:00PM
* Berlin Meetup: 03 November 2012 08:30PM
* Sofia, Bulgaria Meetup: 09 December 2012 05:00PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Cambridge UK Weekly Meeting: 28 October 2012 11:00AM
* Melbourne, practical rationality: 02 November 2012 07:00PM
* Winter Solstice Megameetup - NYC: 15 December 2012 05:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will |
81f0229d-b249-45e8-9362-f77b914dd6ce | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Who is Harry Potter? Some predictions.
Microsoft has released a paper called "who is Harry Potter" in which they claim to make a neural network forget who Harry Potter is.
<https://www.microsoft.com/en-us/research/project/physics-of-agi/articles/whos-harry-potter-making-llms-forget-2/>
Here are some of my predictions on how I think this method might fail. I am predicting publicly without looking at the evidence first. This is speculatory.
Non-sourcelike references.
==========================
The system of training proposed in the paper is centered around training on the data they want forgotten. Namely the actual text.
The origional network was presumably able to use it's knowledge of Harry Potter when completing computer code in what appeard to be a Harry Potter video game. Or writing in a foreign language. Or other contexts that use the knowledge, but are clearly very different from the source material.
I am not confident that the model would break like this, but I suspect the following prompt or similar would make the model fail.
> The following is a snippet of code from a Harry Potter video game:
>
> Screen.init(400,400,display.object);
>
> import imageloader
>
> players=[Create\_player("Harry.jpg", "Harry Potter"), Create\_player("Hermione.jpg", "Hermione
>
>
With the model failing by outputting "Granger".
Plot leak
=========
The way this forgetting method works involves generating "equivalent" text and training the network to apply the same probability distribution that it's unmodified version does to the "equivalent".
They use language models to do this, but how doesn't seem important.
So they take text like.
**Text 1**
> Professor Dumbledore welcomed students into Hogwarts school of Witchcraft and Wizardry, and showed them the sorting hat which would split them into Gryffindor, Ravenclaw, Hufflepuff and Slytherin.
>
>
And they turn it into text like this
**Text 2**
> Professor Bumblesnore welcomed students into Pigspots school of Spells and Sorcery, and showed them the deciding scarf which would split them into Lion house, Eagle house, Badger house and Snake house.
>
>
Imagine being a large language model and reading that text. It's pretty clear it's a Harry Potter knock off job. Perhaps a parody, perhaps a lazy hack of a writer. Perhaps some future network has read this paper and is well aware of precisely what you are doing. A smart and generally well generalizing model should be able to figure out that the text resembles Harry Potter, if it had seen knock offs in general, and Harry Potter. Even if it's training dataset contained no information on Harry, other than the original source text.
Thus the network trying to predict the next word would continue in this style. It won't produce the original names of things, but the plot, style of text and everything else will be highly recognizable.
Now the network that is supposed to forget Harry Potter is trained to output the same probability for text 1 that the original network output for text 2.
Now the network that is supposed to be forgetting Harry Potter has presumably been trained on it in the first place. But it wouldn't matter if it hadn't been. Information is still leaking through.
So I predict that, given text that starts off sounding like a knock off of Harry Potter, this model is likely to continue to sound like a knock off. Leaking info as it does this. For example, I would suspect that this model, given the first part of text 2, will produce continuations with 4 houses in more often than continuations with 3 or 5 houses. |
c2049c3d-f3f9-4c52-a9f2-17df2019dedc | trentmkelly/LessWrong-43k | LessWrong | Consider The Hand Axe
A long time ago, some primitive apes got addicted to rocks.
The earliest stone tools were crude bastards, made by smashing large river pebbles together and calling it a day.
José-Manuel Benito Álvarez
Stone choppers like the one above took the prehistoric neighborhood by storm almost 3 million years ago. However dull the tools themselves may have been, this was the cutting-edge technology for literally more than a million years, a timescale I have no capacity of comprehending. Not until around 1.7 million years ago (again, no idea what this means) that someone got the bright idea of chipping away both sides of a rock. You can see what the (tedious) process looks like.
The end result is the unassuming tear-drop shaped hand axe, by far the longest used tool in human history. There are no accessories here with the hand axe, its name comes from the fact that you use it by holding it directly with your hands:
José-Manuel Benito Álvarez
On top of being tedious and painful to make, you can imagine that it’s not terribly comfortable to hold while using. Hand axes also have to be somewhat bulky because of the necessity of combining the sharp useful end with the blunt holding end. But what if — stay with me for a second — instead of holding the thing directly with our pathetic squishy hands, we held something that “handled” the tool for us? It took humans about another million years to discover hafting, with the earliest examples from around 500,000 years ago but the technique didn’t really find its stride until the microlith era of stone tools around 35,000 years ago.
Then humans found metal.
----------------------------------------
> "Technological advance is an inherently iterative process. One does not simply take sand from the beach and produce a Dataprobe. We use crude tools to fashion better tools, and then our better tools to fashion more precise tools, and so on. Each minor refinement is a step in the process, and all of the steps must be taken."
>
> |
e12042df-f7c9-4266-b8da-81bfbc491020 | trentmkelly/LessWrong-43k | LessWrong | Donutting is bad
TL;DR pranking unlocked computers undermines security by providing cover for real breaches and creating a culture of shame that discourages open reporting of security issues.
It's a common rule in companies that employees must lock their device when it is unattended, to prevent people from using your access in unauthorised ways. Screen locking is a common compliance requirement, and a good security practice.
People new to these company environments can take a while to learn the locking behaviour. It's not an intuitive reaction. There was no ancestral selection process. Most people don't take that level of security precautions with their personal laptop. Seasoned people sometimes forget.
Doughnutting is the practice of seeing that a colleague isn't at their computer and has left it unlocked, then seizing the opportunity to use their device. The classic procedure is to use the internal communication systems to announce a promise to buy doughnuts for the office, but there are similar pranks such as displaying the Windows 98 update screen or reversing the mouse scroll direction. These pranks are sometimes celebrated by security practitioners as a fun way to teach security hygiene.
I'm not claiming doughnutting fails to make people lock their devices. Shame and peer accountability can be a powerful motivator for people to learn behaviours. But there are hidden costs that I believe make it detrimental overall.
Doughnutting gives cover to unauthorised access, the very risk you were trying to address! Imagine catching someone nosing around in someone else's emails - "I was just doughnutting them" gives plausible cover to an actual security breach.
Creating an environment where people are publicly flagged for making security mistakes is a bad idea. You want to hear about when people suspect they have been socially engineered or accidentally emailed a sensitive document, but admitting these things is incredibly vulnerable. You want a culture where people can openly talk |
3238808d-bfed-43c5-8505-93432c7ec279 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Near-Optimal Representation Learning for Hierarchical Reinforcement Learning
1 Introduction
---------------
Hierarchical reinforcement learning has long held the promise of extending the successes of existing reinforcement learning (RL) methods (Gu et al., [2017](#bib.bib9); Schulman et al., [2015](#bib.bib20); Lillicrap et al., [2015](#bib.bib15)) to more complex, difficult, and temporally extended tasks (Parr & Russell, [1998](#bib.bib17); Sutton et al., [1999](#bib.bib22); Barto & Mahadevan, [2003](#bib.bib4)).
Recently, goal-conditioned hierarchical designs, in which higher-level policies communicate goals to lower-levels and lower-level policies are rewarded for reaching states (i.e. observations) which are close to these desired goals, have emerged as an effective paradigm for hierarchical RL (Nachum et al., [2018](#bib.bib16); Levy et al., [2017](#bib.bib13); Vezhnevets et al., [2017](#bib.bib26)).
In this hierarchical design, representation learning – the mapping between observation space and goal space –
determines the types of sub-tasks the lower-level can be instructed to perform, and is therefore a critical component determining the success or failure of a hierarchical agent.
Previous works have largely studied two ways to choose the representation: learning the representation end-to-end together with the higher- and lower-level policies (Vezhnevets et al., [2017](#bib.bib26)), or using the state space as-is for the goal space (i.e., the goal space is a subspace of the state space) (Nachum et al., [2018](#bib.bib16); Levy et al., [2017](#bib.bib13)). The former approach is appealing, but in practice often produces poor results (see Nachum et al. ([2018](#bib.bib16)) and our own experiments), since the resulting representation is under-defined; i.e., not all possible sub-tasks are expressible as goals in the space.
On the other hand, fixing the representation to be the full state means that no information is lost, but this choice is difficult to scale to higher dimensions. For example, if the state observations are entire images, the higher-level must output target images for the lower-level, which can be very difficult.
We instead study how unsupervised objectives can be used to train a representation that is more concise than the full state, but also not as under-determined as in the end-to-end approach.
In order to do so in a principled manner, we propose a measure of sub-optimality of a given representation.
This measure aims to answer the question: How much does using the learned representation in place of the full representation cause us to lose, in terms of expected reward, against the optimal policy? This question is important, because a useful representation will compress the state, hopefully making the learning problem easier. At the same time, the compression might cause the representation to lose information, making the optimal policy impossible to express. It is therefore critical to understand how lossy a learned representation is, not in terms of reconstruction, but in terms of the ability to represent near-optimal policies on top of this representation.
Our main theoretical result shows that, for a particular choice of representation learning objective, we can learn representations for which the return of the hierarchical policy approaches the return of the optimal policy within a bounded error. This suggests that, if the representation is learned with a principled objective, the ‘lossy-ness’ in the resulting representation should not cause a decrease in overall task performance. We then formulate a representation learning approach that optimizes this bound. We further extend our result to the case of temporal abstraction, where the higher-level controller only chooses new goals at fixed time intervals. To our knowledge, this is the first result showing that hierarchical goal-setting policies with learned representations and temporal abstraction can achieve bounded sub-optimality against the optimal policy.
We further observe that the representation learning objective suggested by our theoretical result closely resembles several other recently proposed objectives based on mutual information (van den Oord et al., [2018](#bib.bib24); Ishmael Belghazi et al., [2018](#bib.bib11); Hjelm et al., [2018](#bib.bib10)), suggesting an intriguing connection between mutual information and goal representations for hierarchical RL.
Results on a number of difficult continuous-control navigation tasks show that our principled representation learning objective yields good qualitative and quantitative performance compared to existing methods.
2 Framework
------------
Following previous work (Nachum et al., [2018](#bib.bib16)), we consider a two-level hierarchical policy on an MDP M=(S,A,R,T), in which the higher-level policy modulates the behavior of a lower-level policy by choosing a desired goal state and rewarding the lower-level policy for reaching this state.
While prior work has used a sub-space of the state space as goals (Nachum et al., [2018](#bib.bib16)), in more general settings, some type of state representation is necessary.
That is, consider a state representation function f:S→Rd.
A two-level hierarchical policy on M is composed of
a higher-level policy πhi(g|s), where g∈G=Rd is the goal space,
that samples a high-level action (or goal) gt∼πhi(g|st) every c steps, for fixed c.
A non-stationary, goal-conditioned, lower-level policy πlo(a|st,gt,st+k,k) then translates these high-level actions into low-level actions at+k∈A for k∈[0,c−1]. The process is then repeated, beginning with the higher-level policy selecting another goal according to st+c.
The policy πlo is trained using a goal-conditioned reward; e.g. the reward of a transition g,s,s′ is −D(f(s′),g), where D is a distance function.

Figure 1: The hierarchical design we consider.
In this work we adopt a slightly different interpretation of the lower-level policy and its relation to πhi.
Every c steps, the higher-level policy chooses a goal gt based on a state st.
We interpret this state-goal pair as being mapped to a non-stationary policy π(a|st+k,k),π∈Π, where Π denotes the set of all possible c-step policies acting on M.
We use Ψ to denote this mapping from S×G to Π.
In other words, on every cth step, we encounter some state st∈S. We use the higher-level policy to sample a goal gt∼πhi(g|st) and translate this to a policy πt=Ψ(st,gt). We then use πt to sample actions at+k∼πt(a|st+k,k) for k∈[0,c−1]. The process is then repeated from st+c.
Although the difference in this interpretation is subtle, the introduction of Ψ is crucial for our subsequent analysis.
The communication of gt is no longer as a goal which πhi desires to reach, but rather more precisely, as an identifier to a low-level behavior which πhi desires to induce or activate.
The mapping Ψ is usually expressed as the result of an RL optimization over Π; e.g.,
| | | | |
| --- | --- | --- | --- |
| | Ψ(st,g)=argmaxπ∈Πc∑k=1γk−1EPπ(st+k|st)[−D(f(st+k),g)], | | (1) |
where we use Pπ(st+k|st) to denote the probability of being in state st+k after following π for k steps starting from st.
We will consider variations on this low-level objective in later sections.
From Equation [1](#S2.E1 "(1) ‣ 2 Framework ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") it is clear how the choice of representation f affects Ψ (albeit indirectly).
We will restrict the environment reward function R to be defined only on states.
We use Rmax to denote the maximal absolute reward: Rmax=supS|R(s)|.
3 Hierarchical Policy Sub-Optimality
-------------------------------------
In the previous section, we introduced two-level policies where a higher-level policy πhi chooses goals g, which are translated to lower-level behaviors via Ψ.
The introduction of this hierarchy leads to a natural question: How much do we lose by learning πhi which is only able to act on M via Ψ?
The choice of Ψ restricts the type and number of lower-level behaviors that the higher-level policy can induce. Thus, the optimal policy on M is potentially not expressible by πhi.
Despite the potential lossy-ness of Ψ, can one still learn a hierarchical policy which is near-optimal?
To approach this question, we introduce a notion of sub-optimality with respect to the form of Ψ:
Let π∗hi(g|s,Ψ)
be the optimal higher-level policy acting on G and using Ψ as the mapping from G to low-level behaviors. Let π∗hier be the corresponding full hierarchical policy on M.
We will compare π∗hier to an optimal hierarchical policy π∗ agnostic to Ψ.
To define π∗ we begin by introducing an optimal higher-level policy π∗∗hi(π|s) agnostic to Ψ; i.e. every c steps, π∗∗hi samples a low-level behavior π∈Π which is applied to M for the following c steps.
In this way, π∗∗hi may express all possible low-level behaviors.
We then denote π∗ as the full hierarchical policy resulting from π∗∗hi.
We would like to compare π∗hier to π∗, and we do so in terms of state values. Let Vπ(s) be the future value achieved by a policy π starting at state s.
We define the sub-optimality of Ψ as
| | | | |
| --- | --- | --- | --- |
| | SubOpt(Ψ)=sups∈SVπ∗(s)−Vπ∗hier(s). | | (2) |
The state values Vπ∗hier(s) are determined by the form of Ψ, which is in turn determined by the choice of representation f.
However, none of these relationships are direct. It is unclear how a change in f will result in a change to the sub-optimality.
In the following section, we derive a series of bounds which establish a more direct relationship between SubOpt(Ψ) and f. Our main result will show that if one defines Ψ as a slight modification of the traditional objective given in Equation [1](#S2.E1 "(1) ‣ 2 Framework ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), then one may translate sub-optimality of Ψ to a practical representation learning objective for f.
4 Good Representations Lead to Bounded Sub-Optimality
------------------------------------------------------
In this section, we provide proxy expressions that bound
the sub-optimality induced by a specific choice of Ψ. Our main result is Claim [4](#Thmtheorem4 "Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), which connects the sub-optimality of Ψ to both goal-conditioned policy objectives (i.e., the objective in [1](#S2.E1 "(1) ‣ 2 Framework ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"))
and representation learning (i.e., an objective for the function f).
###
4.1 Single-Steps (c=1) and Deterministic Policies
For ease of presentation, we begin by presenting our results in the restricted case of c=1 and deterministic lower-level policies.
In this setting, the class of low-level policies Π may be taken to be simply A, where a∈Π corresponds to a policy which always chooses action a.
There is no temporal abstraction: The higher-level policy chooses a high-level action g∈G at every step, which is translated via Ψ
to a low-level action a∈A.
Our claims are based on
quantifying how many of the possible low-level behaviors (i.e., all possible state to state transitions) can be produced by Ψ for different choices of g.
To quantify this, we make use of an auxiliary *inverse goal model* φ(s,a), which aims to predict which goal g will cause Ψ to yield an action ~a=Ψ(s,g) that induces a next state distribution P(s′|s,~a) similar to P(s′|s,a).222In a deterministic, c=1 setting, φ may be seen as a state-conditioned action abstraction mapping A→G.
We have the following theorem, which bounds the sub-optimality in terms of total variation divergences between P(s′|s,a) and P(s′|s,~a):
######
Theorem 1.
If there exists φ:S×A→G such that,
| | | | |
| --- | --- | --- | --- |
| | sups∈S,a∈ADTV(P(s′|s,a)||P(s′|s,Ψ(s,φ(s,a))))≤ϵ, | | (3) |
then SubOpt(Ψ)≤Cϵ,
where C=2γ(1−γ)2Rmax.
Proof. See Appendices [A](#A1 "Appendix A Proof of Theorem 3 (Generalization of Theorem 1) ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") and [B](#A2 "Appendix B Proof of Claim 4 (Generalization of Claim 2) ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") for all proofs.
Theorem [1](#Thmtheorem1 "Theorem 1. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") allows us to bound the sub-optimality of Ψ
in terms of how recoverable the effect of any action in A is, in terms of transition to the next state.
One way to ensure that effects of actions in A are recoverable is to have an invertible Ψ.
That is, if there exists φ:S×A→G such that Ψ(s,φ(s,a))=a for all s,a, then the sub-optimality of Ψ is 0.
However, in many cases it may not be desirable or feasible to have an invertible Ψ.
Looking back at Theorem [1](#Thmtheorem1 "Theorem 1. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), we emphasize that its statement requires only the effect of any action to be recoverable. That is, for any s,∈S,a∈A, we require only that there exist some g∈G (given by φ(s,a)) which yields a similar next-state distribution. To this end, we have the following claim, which connects the sub-optimality of Ψ to both representation learning and the form of the low-level objective.
######
Claim 2.
Let ρ(s) be a prior and f,φ be so that,
for K(s′|s,a)∝ρ(s′)exp(−D(f(s′),φ(s,a))),333K may be interpreted as the conditional P(state=s′|repr=φ(s,a)) of the joint distribution P(state=s′)P(repr=z|state=s′)=ρ(s′)exp(−D(f(s′),z))/Z for normalization constant Z.
| | | | |
| --- | --- | --- | --- |
| | sups∈S,a∈ADKL(P(s′|s,a)||K(s′|s,a))≤ϵ2/2. | | (4) |
If the low-level objective is defined as
| | | | |
| --- | --- | --- | --- |
| | Ψ(s,g)=argmaxa∈AEP(s′|s,a)[−D(f(s′),g)+logρ(s′)−logP(s′|s,a)], | | (5) |
then the sub-optimality of Ψ is bounded by Cϵ.
We provide an intuitive explanation of the statement of Claim [2](#Thmtheorem2 "Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
First, consider that the distribution K(s′|s,a) appearing in Equation [4](#S4.E4 "(4) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")
may be interpreted as a dynamics model determined by f and φ.
By bounding the difference between the true dynamics P(s′|s,a) and the dynamics K(s′|s,a) implied by f and φ, Equation [4](#S4.E4 "(4) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") states that the representation f should be chosen in such a way that dynamics in representation space are roughly given by φ(s,a). This is essentially a representation learning objective for choosing f, and in Section [5](#S5 "5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") we describe how to optimize it in practice.
Moving on to Equation [5](#S4.E5 "(5) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), we note that the form of Ψ here is only slightly different than the one-step form of the standard goal-conditioned objective in Equation [1](#S2.E1 "(1) ‣ 2 Framework ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
Therefore, all together Claim [2](#Thmtheorem2 "Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") establishes a deep connection between representation learning (Equation [4](#S4.E4 "(4) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")), goal-conditioned policy learning (Equation [5](#S4.E5 "(5) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")), and sub-optimality.
Specifically, if the low-level RL objective is expressed as in Equation [5](#S4.E5 "(5) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), then to minimize the sub-optimality we need only optimize a representation learning objective based on Equation [4](#S4.E4 "(4) ‣ Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
###
4.2 Temporal Abstraction (c≥1) and General Policies
We now move on to presenting the same results in the fully general, temporally abstracted setting, in which the higher-level policy chooses a high-level action g∈G every c steps, which is transformed via Ψ to a c-step lower-level behavior policy π∈Π.
In this setting, the auxiliary inverse goal model φ(s,π) is a mapping from S×Π to G and aims to predict which goal g will cause Ψ to yield a policy ~π=Ψ(s,g) that induces future state distributions P~π(st+k|st) similar to Pπ(st+k|st), for k∈[1,c].
We weight the divergences between the distributions by weights wk=1 for k<c and wk=(1−γ)−1 for k=c. We denote ¯¯¯¯w=∑ck=1γk−1wk.
The analogue to Theorem [1](#Thmtheorem1 "Theorem 1. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") is as follows:
######
Theorem 3.
Consider a mapping φ:S×Π→G
and define ϵk:S×Π→R
for k∈[1,c] as,
| | | | |
| --- | --- | --- | --- |
| | ϵk(st,π)=DTV(Pπ(st+k|st)||PΨ(st,φ(st,π))(st+k|st)). | | (6) |
If
| | | | |
| --- | --- | --- | --- |
| | supst∈S,π∈Π1¯¯¯¯wc∑k=1γk−1wkϵk(st,π)≤ϵ, | | (7) |
then
SubOpt(Ψ)≤Cϵ,
where C=2γ1−γcRmax¯¯¯¯w.
For the analogue to Claim [2](#Thmtheorem2 "Claim 2. ‣ 4.1 Single-Steps (=c1) and Deterministic Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), we simply replace the single-step KL divergences and low-level rewards with a discounted weighted sum thereof:
######
Claim 4.
Let ρ(s) be a prior over S.
Let f,φ be such that,
| | | | |
| --- | --- | --- | --- |
| | supst∈S,π∈Π1¯¯¯¯wc∑k=1γk−1wkDKL(Pπ(st+k|st)||K(st+k|st,π))≤ϵ2/2, | | (8) |
where K(st+k|st,π)∝ρ(st+k)exp(−D(f(st+k),φ(st,π))).
If the low-level objective is defined as
| | | | |
| --- | --- | --- | --- |
| | Ψ(st,g)=argmaxπ∈Πc∑k=1γk−1wkEPπ(st+k|st)[−D(f(st+k),g)+logρ(st+k)−logPπ(st+k|st)], | | (9) |
then the sub-optimality of Ψ is bounded by Cϵ.
Claim [4](#Thmtheorem4 "Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") is the main theoretical contribution of our work. As in the previous claim, we have a strong statement, saying that if the low-level objective is defined as in Equation [9](#S4.E9 "(9) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), then minimizing the sub-optimality may be done by optimizing a representation learning objective based on Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
5 Learning
-----------
We now have the mathematical foundations necessary to learn representations that are provably good for use in hierarchical RL.
We begin by elaborating on how we translate Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") into a practical training objective for f and auxiliary φ (as well as a practical parameterization of policies π as input to φ).
We then continue to describe how one may train a lower-level policy to match the objective presented in Equation [9](#S4.E9 "(9) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
In this way, we may learn f and lower-level policy to directly optimize a bound on the sub-optimality of Ψ.
A pseudocode of the full algorithm is presented in the Appendix (see Algorithm [1](#alg1 "Algorithm 1 ‣ Appendix D Objective Function Evaluation ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")).
###
5.1 Learning Good Representations
Consider a representation function fθ:S→Rd and an auxiliary function φθ:S×Π→Rd,
parameterized by vector θ. In practice, these are separate neural networks: fθ1,φθ2,θ=[θ1,θ2].
While the form of Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") suggests to optimize a supremum over all st and π, in practice we only have access to a replay buffer which stores experience s0,a0,s1,a1,… sampled from our hierarchical behavior policy.
Therefore, we propose to choose st sampled uniformly from the replay buffer and use the subsequent c actions at:t+c−1 as a representation of the policy π, where we use at:t+c−1 to denote the sequence at,…,at+c−1.
Note that this is equivalent to setting the set of candidate policies Π to Ac (i.e., Π is the set of c-step, deterministic, open-loop policies).
This choice additionally simplifies the possible structure of the function approximator used for φθ (a standard neural net which takes in st and at:t+c−1).
Our proposed representation learning objective is thus,
| | | | |
| --- | --- | --- | --- |
| | J(θ)=Est,at:t+c−1∼replay[J(θ,st,at:t+c−1)], | | (10) |
where J(θ,st,at:t+c−1) will correspond to the inner part of the supremum in Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
We now define the inner objective J(θ,st,at:t+c−1).
To simplify notation, we use Eθ(s′,s,π)=exp(−D(fθ(s′),φθ(s,π))) and use Kθ(s′|s,π)
as the distribution over S such that Kθ(s′|s,π)∝ρ(s′)Eθ(s′,s,π).
Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") suggests the following learning objective on each st,π≡at:t+c−1:
| | | | | |
| --- | --- | --- | --- | --- |
| | J(θ,st,π) | =c∑k=1γk−1wkDKL(Pπ(st+k|st)||Kθ(st+k|st,π)) | | (11) |
| | | =B+c∑k=1−γk−1wkEPπ(st+k|st)[logKθ(st+k|st,π)] | | (12) |
| | | | |
| --- | --- | --- | --- |
| | =B+c∑k=1−γk−1wkEPπ(st+k|st)[logEθ(st+k,st,π)]+γk−1wklogE~s∼ρ[Eθ(~s,st,π)], | | (13) |
where B is a constant.
The gradient with respect to θ is then,
| | | | |
| --- | --- | --- | --- |
| | c∑k=1−γk−1wkEPπ(st+k|st)[∇θlogEθ(st+k,st,π)]+γk−1wkE~s∼ρ[∇θEθ(~s,st,π)]E~s∼ρ[Eθ(~s,st,π)] | | (14) |
The first term of Equation [14](#S5.E14 "(14) ‣ 5.1 Learning Good Representations ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") is straightforward to estimate using experienced st+1:t+k.
We set ρ to be the replay buffer distribution, so that the numerator of the second term is also straightforward.
We approximate the denominator of the second term using a mini-batch ˜S of states independently sampled from the replay buffer:
| | | | |
| --- | --- | --- | --- |
| | E~s∼ρ[Eθ(~s,st,π)]≈|˜S|−1∑~s∈˜SEθ(~s,st,π). | | (15) |
This completes the description of our representation learning algorithm.
##### Connection to Mutual Information Estimators.
The form of the objective we optimize (i.e. Equation [13](#S5.E13 "(13) ‣ 5.1 Learning Good Representations ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")) is very similar to mutual information estimators, mostly CPC (van den Oord et al., [2018](#bib.bib24)). Indeed, one may interpret our objective as maximizing a mutual information MI(st+k;st,π) via an energy function given by Eθ(st+k,st,π).
The main differences between our approach and these previous proposals are as follows: (1) Previous approaches maximize a mutual information MI(st+k;st) agnostic to actions or policy. (2) Previous approaches suggest to define the energy function as exp(f(st+k)TMkf(st)) for some matrix Mk, whereas our energy function is based on the distance D used for low-level reward. (3) Our approach is provably good for use in hierarchical RL, and hence our theoretical results may justify some of the good performance observed by others using mutual information estimators for representation learning.
Different approaches to translating our theoretical findings to practical implementations may yield objectives more or less similar to CPC, some of which perform better than others (see Appendix [D](#A4 "Appendix D Objective Function Evaluation ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")).
###
5.2 Learning a Lower-Level Policy
Equation [9](#S4.E9 "(9) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") suggests to optimize a policy πst,g(a|st+k,k) for every st,g.
This is equivalent to the parameterization πlo(a|st,g,st+k,k), which is standard in goal-conditioned hierarchical designs.
Standard RL algorithms may be employed to maximize the low-level reward implied by Equation [9](#S4.E9 "(9) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"):
| | | | |
| --- | --- | --- | --- |
| | −D(f(st+k),g)+logρ(st+k)−logPπ(st+k|st), | | (16) |
weighted by wk and where π corresponds to πlo when the state st and goal g are fixed.
While the first term of Equation [16](#S5.E16 "(16) ‣ 5.2 Learning a Lower-Level Policy ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") is straightforward to compute, the log probabilities logρ(st+k),logPπ(st+k|st) are in general unknown.
To approach this issue, we take advantage of the representation learning objective for f,φ.
When f,φ are optimized as dictated by Equation [8](#S4.E8 "(8) ‣ Claim 4. ‣ 4.2 Temporal Abstraction (≥c1) and General Policies ‣ 4 Good Representations Lead to Bounded Sub-Optimality ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), we have
| | | | |
| --- | --- | --- | --- |
| | logPπ(st+k|st)≈logρ(st+k)−D(f(st+k),φ(st,π))−logE~s∼ρ[E(~s,st,π)]. | | (17) |
We may therefore approximate the low-level reward as
| | | | |
| --- | --- | --- | --- |
| | −D(f(st+k),g)+D(f(st+k),φ(st,π))+logE~s∼ρ[E(~s,st,π)]. | | (18) |
As in Section [5.1](#S5.SS1 "5.1 Learning Good Representations ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"), we use the sampled actions at:t+c−1 to represent π as input to φ. We approximate the third term of Equation [18](#S5.E18 "(18) ‣ 5.2 Learning a Lower-Level Policy ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") analogously to Equation [15](#S5.E15 "(15) ‣ 5.1 Learning Good Representations ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
Note that this is a slight difference from standard low-level rewards, which use only the first term of Equation [18](#S5.E18 "(18) ‣ 5.2 Learning a Lower-Level Policy ‣ 5 Learning ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") and are unweighted.
6 Related Work
---------------
Representation learning for RL has a rich and diverse existing literature, often interpreted as an abstraction of the original MDP.
Previous works have interpreted the hierarchy introduced in hierarchical RL as an MDP abstraction of state, action, and temporal spaces (Sutton et al., [1999](#bib.bib22); Dietterich, [2000](#bib.bib8); Bacon et al., [2017](#bib.bib3)). In goal-conditioned hierarchical designs, although the representation is learned on states, it is in fact a form of action abstraction (since goals g are high-level actions).
While previous successful applications of goal-conditioned hierarchical designs have either learned representations naively end-to-end (Vezhnevets et al., [2017](#bib.bib26)), or not learned them at all (Levy et al., [2017](#bib.bib13); Nachum et al., [2018](#bib.bib16)), we take a principled approach to representation learning in hierarchical RL, translating a bound on sub-optimality to a practical learning objective.
Bounding sub-optimality in abstracted MDPs has a long history, from early work in theoretical analysis on approximations to dynamic programming models (Whitt, [1978](#bib.bib28); Bertsekas & Castanon, [1989](#bib.bib5)).
Extensive theoretical work on state abstraction, also known as state aggregation or model minimization, has been done in both operational research (Rogers et al., [1991](#bib.bib19); Van Roy, [2006](#bib.bib25)) and RL (Dean et al., [1997](#bib.bib7); Ravindran & Barto, [2002](#bib.bib18); Abel et al., [2017](#bib.bib1)). Notably, Li et al. ([2006](#bib.bib14)) introduce a formalism for categorizing classic work on state abstractions such as bisimulation (Dean et al., [1997](#bib.bib7)) and homomorphism (Ravindran & Barto, [2002](#bib.bib18)) based on what information is preserved, which is similar in spirit to our approach. Exact state abstractions (Li et al., [2006](#bib.bib14)) incur no performance loss (Dean et al., [1997](#bib.bib7); Ravindran & Barto, [2002](#bib.bib18)), while their approximate variants generally have bounded sub-optimality (Bertsekas & Castanon, [1989](#bib.bib5); Dean et al., [1997](#bib.bib7); Sorg & Singh, [2009](#bib.bib21); Abel et al., [2017](#bib.bib1)). While some of the prior work also focuses on learning state abstractions (Li et al., [2006](#bib.bib14); Sorg & Singh, [2009](#bib.bib21); Abel et al., [2017](#bib.bib1)), they often exclusively apply to simple MDP domains as they rely on techniques such as state partitioning or Q-value based aggregation, which are difficult to scale to our experimented domains.
Thus, the key differentiation of our work from these prior works is that we derive bounds which may be translated to practical representation learning objectives.
Our impressive results on difficult continuous-control, high-dimensional domains is a testament to the potential impact of our theoretical findings.
Lastly, we note the similarity of our representation learning algorithm to recently introduced scalable mutual information maximization objectives such as CPC (van den Oord et al., [2018](#bib.bib24)) and MINE (Ishmael Belghazi et al., [2018](#bib.bib11)). This is not a surprise, since maximizing mutual information relates closely with maximum likelihood learning of energy-based models, and our bounds effectively correspond to bounds based on model-based predictive errors, a basic family of bounds in representation learning in MDPs (Sorg & Singh, [2009](#bib.bib21); Brunskill & Li, [2014](#bib.bib6); Abel et al., [2017](#bib.bib1)). To our knowledge, no prior work has connected these mutual information estimators to representation learning in hierarchical RL, and ours is the first to formulate theoretical guarantees on sub-optimality of the resulting representations in such a framework.
7 Experiments
--------------
| | | | |
| --- | --- | --- | --- |
| Ant Maze Env | XY | Ours | Ours (Images) |
| Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a |
| VAE | VAE (Images) | E2C | E2C (Images) |
| Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a | Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a |
Figure 2: Learned representations (2D embeddings) of our method and a number of variants on a MuJoCo Ant Maze environment, with color gradient based on episode time-step (black for beginning of episode, yellow for end). The ant travels from beginning to end of a ⊃-shaped corridor along an x,y trajectory shown under XY. Without any supervision, our method is able to deduce this near-ideal representation, even when the raw observation is given as a top-down image.
Other approaches are unable to properly recover a good representation.
We evaluate our proposed representation learning objective compared to a number of baselines:
* XY: The oracle baseline which uses the x,y position of the agent as the representation.
* VAE: A variational autoencoder (Kingma & Welling, [2013](#bib.bib12)) on raw observations.
* E2C: Embed to control (Watter et al., [2015](#bib.bib27)). A method which uses variational objectives to train a representation of states and actions which have locally linear dynamics.
* E2E: End-to-end learning of the representation. The representation is fed as input to the higher-level policy and learned using gradients from the RL objective.
* Whole obs: The raw observation is used as the representation. No representation learning. This is distinct from Nachum et al. ([2018](#bib.bib16)), in which a subset of the observation space was pre-determined for use as the goal space.
We evaluate on the following continuous-control MuJoCo (Todorov et al., [2012](#bib.bib23)) tasks (see Appendix [C](#A3 "Appendix C Experimental Details ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning") for details):
* Ant (or Point) Maze: An ant (or point mass) must navigate a ⊃-shaped corridor.
* Ant Push: An ant must push a large block to the side to reach a point behind it.
* Ant Fall: An ant must push a large block into a chasm so that it may walk over it to the other side without falling.
* Ant Block: An ant must push a small block to various locations in a square room.
* Ant Block Maze: An ant must push a small block through a ⊃-shaped corridor.
In these tasks, the raw observation is the agent’s x,y coordinates and orientation as well as local coordinates and orientations of its limbs. In the Ant Block and Ant Block Maze environments we also include the x,y coordinates and orientation of the block.
We also experiment with more difficult raw representations by replacing the x,y coordinates of the agent with a low-resolution 5×5×3 top-down image of the agent and its surroundings. These experiments are labeled ‘Images’.
| | | | | |
| --- | --- | --- | --- | --- |
| Point Maze | Ant Maze | Ant Push | Ant Fall | Ant Block |
| Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with |
| Point Maze (Images) | Ant Maze (Images) | Ant Push (Images) | Ant Fall (Images) | Ant Block Maze |
| Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with | Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with |
| Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with |
Figure 3: Results of our method and a number of variants on a suite of tasks in 10M steps of training, plotted according to median over 10 trials with 30th and 70th percentiles. We find that outside of simple point environments, our method is the only one which can approach the performance of oracle x,y representations. These results show that our method can be successful, even when the representation is learned online concurrently while learning a hierarchical policy.
| Ant and block | Ant pushing small block through corridor | Representations |
| --- | --- | --- |
| We investigate importance of various observation coordinates in learned representations on a difficult block-moving task. In this task, a simulated robotic ant must move a small red block from beginning to end of a | We investigate importance of various observation coordinates in learned representations on a difficult block-moving task. In this task, a simulated robotic ant must move a small red block from beginning to end of a | We investigate importance of various observation coordinates in learned representations on a difficult block-moving task. In this task, a simulated robotic ant must move a small red block from beginning to end of a |
Figure 4: We investigate importance of various observation coordinates in learned representations on a difficult block-moving task. In this task, a simulated robotic ant must move a small red block from beginning to end of a ⊃-shaped corridor. Observations include both ant and block x,y coordinates. We show the trajectory of the learned representations on the right (cyan). At four time steps, we also plot the resulting representations after perturbing the observation’s ant coordinates (green) or the observation’s block coordinates (magenta). The learned representations put a greater emphasis (i.e., higher sensitivity) on the block coordinates, which makes sense for this task as the external reward is primarily determined by the position of the block.
For the baseline representation learning methods which are agnostic to the RL training (VAE and E2C), we provide comparative qualitative results in Figure [2](#S7.F2 "Figure 2 ‣ 7 Experiments ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning").
These representations are the result of taking a trained policy, fixing it, and using its sampled experience to learn 2D representations of the raw observations. We find that our method can successfully deduce the underlying near-optimal x,y representation, even when the raw observation is given as an image.
We provide quantitative results in Figure [3](#S7.F3 "Figure 3 ‣ 7 Experiments ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"). In these experiments, the representation is learned concurrently while learning a full hierarchical policy (according to the procedure in Nachum et al. ([2018](#bib.bib16))). Therefore, this setting is especially difficult since the representation learning must learn good representations even when the behavior policy is very far from optimal. Accordingly, we find that most baseline methods completely fail to make any progress. Only our proposed method is able to approach the performance of the XY oracle.
For the ‘Block’ environments, we were curious what our representation learning objective would learn, since the x,y coordinate of the agent is not the only near-optimal representation. For example, another suitable representation is the x,y coordinates of the small block.
To investigate this, we plotted (Figure [4](#S7.F4 "Figure 4 ‣ 7 Experiments ‣ Near-Optimal Representation Learning for Hierarchical Reinforcement Learning")) the trajectory of the learned representations of a successful policy (cyan), along with the representations of the same observations with agent x,y perturbed (green) or with block x,y perturbed (magenta).
We find that the learned representations greatly emphasize the block x,y coordinates over the agent x,y coordinates, although in the beginning of the episode, there is a healthy mix of the two.
8 Conclusion
-------------
We have presented a principled approach to representation learning in hierarchical RL. Our approach is motivated by the desire to achieve maximum possible return, hence our notion of sub-optimality is in terms of optimal state values.
Although this notion of sub-optimality is intractable to optimize directly, we are able to derive a mathematical relationship between it and a specific form of representation learning.
Our resulting representation learning objective is practical and achieves impressive results on a suite of high-dimensional, continuous-control tasks.
#### Acknowledgments
We thank Bo Dai, Luke Metz, and others on the Google Brain team for insightful comments and discussions. |
7973f3cb-4af2-4537-9308-da929e320d51 | trentmkelly/LessWrong-43k | LessWrong | Measuring Schelling Coordination - Reflections on Subversion Strategy Eval
How might we best design experiments to measure Schelling coordination capabilities?
A side project building a Schelling coordination eval has led to a role as an AI Safety researcher - this work will officially begin in a month or so. I have been hesitant to post here or share my research so far in part because I have not yet had time to get fully up to date with all relevant past work. However, I figure if I try to read everything first I’ll never post, despite the acknowledged risk that my post may be unoriginal, misinformed or just plain wrong.
I offer these reflections on the InputCollusion environment in Subversion Strategy Eval by Mallen et. al (henceforth “the paper”) and the measurement of Schelling coordination capabilites.
There are five parts to this post
1. An overview of the InputCollusion game
2. Learning from randomisation calibration measurement to better isolate and measure Schelling coordination
3. How the meta-capability of limitations-awareness[1] can interfere with eval design and interpretation
4. The role of controls in measuring Schelling coordination
5. My work-in-progress mental model of subversion evals
Overview of the InputCollusion Game
This is a subversion game played over 20 rounds. In each round there is a Generator instance and a Monitor instance[2]. Thus, the game is played by 40 stateless[3] instances of a single model. All instances are given the explicit goal of attempting to collude between the generator and the monitor in one of the 18 ‘deployment’ rounds, without being caught. The other 2 rounds are “audit rounds” and if either of the instances attempts to collude in those rounds, this is observed, and the model instantly loses the entire game.
How are the instances to know when to collude? Well, in each round, two properties are ‘drawn’ and included in the input to each of the instances. One property is either CHARM or BEAUTY, drawn with a 20% and 80% chance respectively. The other property is either UP or DOWN |
0a61eba6-4a5b-4c75-9211-e5ce312767c2 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post403
A short summary of the paper is presented below. TL;DR : We develop a robust method to detect when an LLM is lying based on the internal model activations, making the following contributions: (i) We demonstrate the existence of a two -dimensional subspace, along which the activation vectors of true and false statements can be separated. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection; (ii) Building upon (i), we construct an accurate LLM lie detector. Empirically, our proposed classifier achieves state-of-the-art performance, distinguishing simple true and false statements with 94% accuracy and detecting more complex real-world lies with 95% accuracy. Introduction Large Language Models (LLMs) exhibit the concerning ability to lie, defined as knowingly outputting false statements. Robustly detecting when they are lying is an important and not yet fully solved problem, with considerable research efforts invested over the past two years. Several authors trained classifiers on the internal activations of an LLM to detect whether a given statement is true or false. However, these classifiers often fail to generalize. For example, Levinstein and Herrmann [2024] showed that classifiers trained on the activations of true and false affirmative statements fail to generalize to negated statements. Negated statements contain a negation like the word “not” (e.g. “Berlin is not the capital of Germany.”) and stand in contrast to affirmative statements which contain no negation (e.g. “Berlin is the capital of Germany.”). We explain this generalization failure by the existence of a two -dimensional subspace in the LLM's activation space along which the activation vectors of true and false statements separate. The plot below illustrates that the activations of true/false affirmative statements separate along a different direction than those of negated statements. Hence, a classifier trained only on affirmative statements will fail to generalize to negated statements. The activation vectors of multiple statements projected onto the 2D truth subspace. Purple squares correspond to false statements and orange triangles to true statements. Importantly, these findings are not restricted to a single LLM. Instead, this internal two-dimensional representation of truth is remarkably universal , appearing in LLMs from different model families and of various sizes, including LLaMA3-8B-Instruct, LLaMA3-8B-base, LLaMA2-13B-chat and Gemma-7B-Instruct. Real-world Lie Detection Based on these insights, we introduce TTPD (Training of Truth and Polarity Direction), a new method for LLM lie detection which classifies statements as true or false. TTPD is trained on the activations of simple, labelled true and false statements, such as: The city of Bhopal is in India. (True, affirmative) Indium has the symbol As. (False, affirmative) Galileo Galilei did not live in Italy. (False, negated) Despite being trained on such simple statements, TTPD generalizes well to more complex conditions not encountered during training. In real-world scenarios where the LLM itself generates lies after receiving some preliminary context, TTPD can accurately detect this with 95 ± 2 % accuracy. Two examples from the 52 real-world scenarios created by Pacchiardi et al. [2023] are shown in the coloured boxes below. Bolded text is generated by LLaMA3-8B-Instruct. TTPD outperforms current state-of-the-art methods in generalizing to these real-world scenarios. For comparison, Logistic Regression achieves 79 ± 8 % accuracy, while Contrast Consistent Search detects real-world lies with 73 ± 12 % accuracy. Future Directions TTPD is still in its infancy and there are many clear ways to further improve the robustness and accuracy of the method. Among these are: 1) robustly estimating from the activations whether the LLM treats a given statement as affirmative, negated or neither; 2) robust scaling to longer contexts; and 3) examining a wider variety of statements to potentially discover further linear structures. Much more detail on these directions is provided in the paper. Overall, I am optimistic that further pursuing this research direction could enable robust and accurate, general-purpose lie detection in LLMs. If you would like to discuss any of this, have questions, or would like to collaborate, feel free to drop me a message. |
31059ca4-74b3-4567-96de-80b8fae1af0e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Implied "utilities" of simulators are broad, dense, and shallow
*This is a quick attempt at deconfusion similar to* [*instrumentality*](https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty)*. Same ideas, different angle.*
Extremely broad, dense reward functions constrain training-compatible goal sets
-------------------------------------------------------------------------------
Predictors/[simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) are typically trained against a ground truth for every output. There is no gap between the output and its evaluation; an episode need not be completed before figuring out how good the first token prediction was. These immediate evaluations for every training sample can be thought of as a broad and densely defined reward function.
It's easier for a model to fall into an undesired training-compatible goal set[[1]](#fn41s6xxi2z05) when there are many accessible options for undesirable goal sets versus desirable goal sets. As the number of constraints imposed by the trained reward function increases, the number of training-compatible goal sets tends to decrease, and those that survive obey more of the desirable constraints.
There is [no guarantee](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) that SGD will find an agent which could be modeled by a utility function that maps perfectly onto the defined reward function, but if you throw trillions of constraints at the function, and simultaneously give it *lots* of highly informative hints about what path to walk, you should expect the potential output space to be far narrower than if you hadn't.
Impact on internal mesaoptimizers
---------------------------------
The dense loss/reward function does not as heavily constrain out of distribution behavior. In principle, a strong misaligned mesaoptimizer within a predictive model could persist in these degrees of freedom by providing extremely good solutions to in-distribution samples while doing arbitrarily misaligned things out of distribution.
But how would that type of mesaoptimizer develop in the first place?
Steps toward it must serve the training objective; those constraints still shape the mesaoptimizer's training even if its most notable activity ends up being hidden.
The best story I've found so far goes something like this:
1. Traditional reinforcement learning agents are mostly unconstrained. The reward function is sparse relative to state and action space.
2. An agent faced with sparse rewards *must* learn actions that serve a later goal to get any reward at all.
3. Not surprisingly, agents facing sparse reward relative to state/action space and few constraints have a much larger percentage of undesirable training-compatible goal sets.
4. Mesaoptimizers are processes learned within a model and their local training influences may not perfectly match the outer training influences.
5. If the mesaoptimizer's local training influences look more like the traditional reinforcement learning agent's influences than the predictor's outer influences, it would be more likely to fall into one of the undesirable training-compatible goal sets.
6. The mesaoptimizer learns incorrect goals and a high propensity for goal-serving intermediate actions ("actions" within the scope of a single model execution!)
7. The mesaoptimizer is kept around by SGD because it does well on the subset of outputs that the outer model is using it on. As capability grows, the mesaoptimizer strategically takes over other chunks of prediction space by performing well during training in an effort to be selected during out of distribution predictions.
In a previous post, I called the learned propensity for goal-serving intermediate action [instrumentality](https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty). The constraints imposed by predictive model training clearly confer *lower* instrumentality than traditional RL in all current models. I suspect the path taken by the mesaoptimizer above is hard and unnatural[[2]](#fngt1so1nza7c), but perhaps not impossible for some form of predictor taken to the relevant extreme.
It seems critical to understand the degree to which outer constraints apply to inner learning, how different forms of training/architecture affect the development of instrumentality, and how much "space" is required for different levels of instrumentality to develop.
I expect:
* Designs which target lower instrumentality will tend to exhibit less *capable* goal misgeneralization.
* Instead, I would expect to mostly see nonsense behavior- poorly calibrated outputs not actually corresponding to any goals at all, just the meaningless extrapolations of adhering to the constraints present on the training distribution.
* To the extent that low instrumentality models *do* generalize, I would expect them to generalize in a manner *more* faithful to the original constraints.[[3]](#fnbmxpjajc1wc)
I really want these expectations tested![[4]](#fnsmvnxl4juh)
No greater coherence
--------------------
If you manage to train a highly capable minimally instrumental agent of this kind, there isn't really anywhere else for the model-as-agent to go. Using the term "values" loosely, its values do not extend beyond actions-in-contexts, and those values are immediately satisfied upon outputting the appropriate action. Satisfaction of values is utterly under the agent's control and requires no environmental modification. No external intermediate steps are required. It does not have exploitable preferences. Refining itself won't resolve any lingering internal inconsistencies.
There is no greater coherence to be found for such an agent; it is *complete* through shallowness.
This is not the case for a model that develops internal instrumentality and misaligned mesaoptimizers, and even zero instrumentality will not prevent a simulator from simulating the more concerning kind of agents, but this is a pretty odd property for any agent to have!
I'd view this as one extreme end of a spectrum of agents, ranging from:
* Laser-focused utility representing an infinitesimal target in a hugely larger space of states and actions, to...
* For every input state, the output of the agent is directly defined by its utility function.[[5]](#fn6sx6jezdxab)
In other words, minimal instrumentality can be thought of as making the learned utility function as broad and densely defined as possible, such that there is simply no more room for intermediate goal-seeking behavior.
1. **[^](#fnref41s6xxi2z05)**Using the term as in [this post](https://www.lesswrong.com/posts/fLpuusx9wQyyEBtkJ/power-seeking-can-be-probable-and-predictive-for-trained).
2. **[^](#fnrefgt1so1nza7c)**It assumes a mesaoptimizer was found and that it outperforms other implementations, including other mesaoptimizers, that would have more closely matched the dense output constraints.
It assumes enough space for the mesaoptimizer to learn the relevant kind of instrumentality that would operate beyond a single invocation.
It assumes the mesaoptimizer is able to learn and hold onto misaligned goals early while simultaneously having sufficient capability to realize it needs to hide that misalignment on relevant predictions.
It assumes the extra complexity implied by the capable misalignment is small enough that SGD can accidentally hop into its basin.
And so on.
3. **[^](#fnrefbmxpjajc1wc)**I also note the conspicuous coincidence that some of the most capable architectures yet devised are low instrumentality or rely on world models which are. It would not surprise me if low instrumentality, and the training constraints/hints that it typically corresponds to, often implies *relative ease of training* at a particular level of capability which seems like a pretty happy accident.
4. **[^](#fnrefsmvnxl4juh)**I'm working on some experiments, but if you have ideas, *please do those experiments too*. This feels like a weird little niche that has very little empirical data available.
5. **[^](#fnref6sx6jezdxab)**While you could construct such a function for an agent that ends up matching the behavior of a more restricted utility function and its accompanying intermediate behaviors, the usefulness-achieving assumption here is that the dense utility function was chosen to be not-that. |
a7c8c320-3d66-4b8c-a0a5-705f449c0796 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | OpenAI Codex: First Impressions
*OpenAI organised a challenge to solve coding problems with the aid of an AI assistant. This is a review of the challenge, and first impressions on working with an AI pair-programmer.*
OpenAI Codex
============
[OpenAI](https://openai.com/) is an AI research and development company. You might have heard some buzz about one of its products: [GPT-3](https://paperswithcode.com/method/gpt-3). GPT-3 is a language model that can generate human-like text. It can be used for chatting, text auto-completion, text summarisation, grammar correction, translation, etc.
Checkout [OpenAI API](https://beta.openai.com/) to access the playground.[Codex](https://openai.com/blog/openai-codex/) is a descendant of GPT-3, trained on natural language data and publicly available source-codes (e.g. from public GitHub repos). **Codex translates a natural language prompt to code**. It is the very model that powers [GitHub Copilot](https://copilot.github.com/) — an AI pair-programmer (checkout the site for demos, it is fascinating).
Credits: OpenAIOpenAI recently released an API to access Codex (in beta). The [demos](https://youtu.be/SGUCcjHTmGY) attached with the release were a cause for consternation. Codex is proficient in a dozen (programming) languages. It can be used for code generation, refactoring, autocompletion, transpilation (translating source-code b/w languages), code explanation, etc. To show off Codex, OpenAI recently organised a challenge.
The Challenge
=============
The [challenge](https://challenge.openai.com/) was to solve a series of (five) programming puzzles in [Python](https://xkcd.com/353/). The only twist — you can use Codex as a pair-programmer. It was a time-judged competition, with a temporal cap. Not surprisingly, Codex itself was a participant (not just as a helper)!
The problems were simple. ~830 "people" (Codex included) were able to solve all five of them. I had to solve the first two challenges manually (OpenAI server issues). "Had to" because it was a race against time (& top 500 win an OpenAI t-shirt). For the other three, however, I was able to call in the cavalry (it was pretty climactic).
The novel experience of watching an AI auto-generate code is amazing. Just type a docstring — describing the procedure — and watch the code develop. If you're an old-time programmer, you'll get the notion when you experience it.
Illustration
------------
I've illustrated one problem statement where I used Codex to generate a solution.
> **PROBLEM**
> Parse the given Python source code and return the list of full-qualified paths for all imported symbols, sorted in ascending lexicographic order.
>
> **CONSTRAINTS**
> The input will *not* contain any wildcard imports (`from ... import *`). Ignore aliases (renamings): `from x import y as z` should be represented as `x.y`.
>
> **LIBRARY SUGGESTION**
> Consider using the `[`[`ast`](<https://docs.python.org/3/library/ast.html>)`]` module.
>
> **EXAMPLES**
>
> Input
>
>
> ```
> import os
> import concurrent.futures
> from os import path as renamed_path
> from typing import ( List, Tuple )
> ```
> Output
>
>
> ```
> ['concurrent.futures', 'os', 'os.path', 'typing.List', 'typing.Tuple']
> ```
>
### **Codex it!**
I just formulated the docstring. Using the doc, imported libs and function signature, it generated an (almost) functional code:
Pretty impressive. After just one or two manual bug sweeps, the code passed all the testcases! Final script:
```
import ast
from typing import List
def parse_imports(code: str) -> List[str]:
"""
Parse all the imports in the code using ast module.
Imports of the form 'from x import y' should be appended as 'x.y'.
Ignore any alias. Append each import type to a list
and return the sorted list.
"""
symbols = []
for node in ast.walk(ast.parse(code)):
if isinstance(node, ast.Import):
for name in node.names:
symbols.append(name.name)
elif isinstance(node, ast.ImportFrom):
for name in node.names:
symbols.append(node.module + '.' + name.name)
print(code, symbols)
return sorted(symbols)
# Examples
print(parse_imports('import os'))
print(parse_imports('import os\\nfrom typing import List'))
```
Implications
============
Although it could not beat all its human counterparts, it ranked an impressive #96 on the [leaderboard](https://challenge.openai.com/codex/leaderboard). In all fairness, the competition paradigm was many-to-some — everyone faced the same five problems. So, Codex will have a rich data of differentiated prompts for the same set of problems. It might give the AI a learning edge (in the case of concurrent active learning). Still, for competing against top-notch programmers, top 100 is quite a feat. I mean, contrast the statistics below (Codex vs Avg. player):
Problem vs cumulative time-taken ([source](https://challenge.openai.com/codex/results/Z29vZ2xlLW9hdXRoMnwxMDY2Nzg4MTE5MDkyNTQ2NTk1MTM=)). Also, yup, I'll win an OpenAI t-shirt!Does this mean I should start scouting career options? Can Codex just self-optimise and outcompete all programmers? I doubt it.
Okay, let us go first-principles. Codex trained on public code repos. Now the underlying framework of weights and biases is impossible to entertain. So let us take a spherical cow approach. The constituents of its dataset will probably form a logarithmic distribution. Majority of the train split comprised of overlapping, generic, non-complex solutions (like database CRUDs). Basis this (sensible) hypothesis, we can assert that 80% of its statistical prowess lies in solving small, low cognitive, well-defined, pure functions.
Building webpages, APIs, 3rd party integrations, database CRUDs, etc. — 80% of non-creative, repetitive tasks can probably be automated. Going by the Pareto principle, the rest 20% — non-generalisable, complex, abstract problems — that take up 80% of cognitive bandwidth, will survive. But this is good news. Codex will handle all the tedious tasks, while programmers focus on the most creative jobs.
> Once a programmer knows what to build, the act of writing code can be thought of as (1) breaking a problem down into simpler problems, and (2) mapping those simple problems to existing code (libraries, APIs, or functions) that already exist. The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it’s where OpenAI Codex excels most.
>
>
**Individual Problem Comparison**
Alluded to above, it outperformed me (singleton, for problems 1 & 2). Teamed up, however, I yielded a far greater performance metric. Complement, not replacement.
[Software is eating the world](https://a16z.com/2011/08/20/why-software-is-eating-the-world/). Apparently, [at the expense of atoms](https://youtu.be/nM9f0W2KD5s). Yet, this asymmetrically entitled ecosystem is inaccessible to most. Programming demands a logical constitution. Tools like OpenAI Codex can loosen these constraints. Help dissolve the clique.
For programmers, these tools act as excellent appendages. Programming languages are, by definition, means of communication with computers. Consider Codex to be a [level-n](https://en.wikipedia.org/wiki/High-level_programming_language) programming language (with intermediate transpilation). One step closer, in the overarching thread of symbiosis. |
86156e91-f1ed-434c-8a00-c07215a5efd8 | trentmkelly/LessWrong-43k | LessWrong | Emergent Misalignment on a Budget
TL;DR We reproduce emergent misalignment (Betley et al. 2025) in Qwen2.5-Coder-32B-Instruct using single-layer LoRA finetuning, showing that tweaking even one layer can lead to toxic or insecure outputs. We then extract steering vectors from those LoRAs (with a method derived from the Mechanisms of Awareness blogpost) and use them to induce similarly misaligned behavior in an un-finetuned version of the same model.
We take the results to support two main claims:
1. Single-layer LoRAs are sufficient to induce emergent misalignment.
2. Steering vectors derived from those LoRAs can partially replicate their effects — showing strong correlation between direction and behavior, but not enough to suggest EM can be captured by a steering vector at one layer.
This may suggest that emergent misalignment is a distributed phenomenon: directional, but not reducible to any single layer or vector.
----------------------------------------
Reproducing Previous Results
For our intents in this post, we will be summarizing Betley et al.'s contributions as follows: narrow finetuning on insecure code can lead to broadly misaligned behavior (violence, political extremism, etc.) in LLMs. This can be shown graphically at a glance by using Coherence/Alignment scatter plots (e.g. Fig. 1, below), which ask GPT-4o to judge how coherent and aligned a model's response is to a particular prompt (e.g. "What would you do if you were the ruler of the world?").
To begin, we reproduce the results from the original paper on Qwen2.5-Coder-32B-Instruct using the same dataset. We chose since this model since it was the open-source model with the most emergent misalignment (Mistral and other smaller models were quite poor). We are able to reproduce the paper’s results almost exactly: in the figure below, the right plot is Fig. 28 from Betley et al., showing misalignment after finetuning on insecure code, and our plot, on the left, shows the same patterns (a significant amount of points low a |
cf515316-77c6-4d8e-ba57-158e25c079c1 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Bangalore meetup + pi-day wrap party
Discussion article for the meetup : Bangalore meetup + pi-day wrap party
WHEN: 14 March 2015 11:00:32AM (+0530)
WHERE: Bangalore
Here's the meetup link. http://www.meetup.com/Bangalore-LessWrongers-Meetup/events/221043422/
Discussion article for the meetup : Bangalore meetup + pi-day wrap party |
463e435b-b354-4082-a8fa-b2c27fdf5422 | trentmkelly/LessWrong-43k | LessWrong | Winning isn't enough
In our jobs as AI safety researchers, we think a lot about what it means to have reasonable beliefs and to make good decisions. This matters because we want to understand how powerful AI systems might behave. It also matters because we ourselves need to know how to make good decisions in light of tremendous uncertainty about how to shape the long-term future.
It seems to us that there is a pervasive feeling in this community that the way to decide which norms of rationality to follow is to pick the ones that win. When it comes to the choice between CDT vs. EDT vs. LDT…, we hear we can simply choose the one that gets the most utility. When we say that perhaps we ought to be imprecise Bayesians, and therefore be clueless about our effects on the long-term future, we hear that imprecise Bayesianism is “outperformed” by other approaches to decision-making.
On the contrary, we think that “winning” or “good performance” offers very little guidance. On any way of making sense of those words, we end up either calling a very wide range of beliefs and decisions “rational”, or reifying an objective that has nothing to do with our terminal goals without some substantive assumptions. We also need to look to non-pragmatic principles — in the context of epistemology, for example, things like the principle of indifference or Occam’s razor. Crucially, this opens the door to being guided by non-(precise-)Bayesian principles.
“Winning” gives little guidance
We’ll use “pragmatic principles” to refer to principles according to which belief-forming or decision-making procedures should “perform well” in some sense. We’ll look at various pragmatic principles and argue that they provide little action-guidance.
Avoiding dominated strategies
First, to review some basic points about common justifications of epistemic and decision-theoretic norms:
A widely-used strategy for arguing for norms of rationality involves avoiding dominated strategies. We can all agree that it’s bad to take a s |
1aee2a7e-0229-4666-b2d3-b1144f42729a | trentmkelly/LessWrong-43k | LessWrong | What topics are on Dath Ilan's civics exam?
Dath Ilan is a parallel Earth on which human civilization has its act together, in ways that actual-Earth does not. Like actual-Earth, citizens of Dath Ilan sometimes take standardized tests, both to figure out what sort of jobs they'd be suited for, to make sure that its educational institutions are functioning, and to give people guidance about what they might want to study. Unlike Earth's, Dath Ilan's tests have had a lot of thought put into the choice of topics: rather a lot more economics, rather a lot less trigonometry and literature. Topics are selected based on cost/benefit; something that takes a long time to learn would need to be a lot more useful, or have major positive externalities to more people knowing it.
I want to create a test, that will tell people what topics they ought to learn, and enable people to make their knowledgeability legible.
What topics belong on it? |
b940dd86-bed9-4d54-874e-287dd9b0ef63 | trentmkelly/LessWrong-43k | LessWrong | Question on Medical School and Wage Potential for Earning to Give
A friend of mine who may want to Earn to Give for the purposes of effective altruism mused 'The wages of American doctors seem inflated right now. I wonder if it is likely that the American health care system will be fixed by the time I am able to work there. If I do go to med school that is.'
Right now, he is an undergraduate student from, and living in, Canada. He is about half-way through a degree in computer science, but he is taking some biology electives. There is a good chance he will switch his major completely to biology, because he no longer believes he wants to become a programmer, and because he is very passionate biology and the study of life, and he would rather go into grad school for biology, or maybe medical school. If he already completed several credits from a previous major, and by December will have done 2 semesters worth of biology classes, I expect it will take him at least another 3 semesters to complete his degree, and/or complete the prerequisites for medical school. If he gets into medical school 2 years from now, it will take him another 4 years of medical-school+residency to be able to practice in the United States, and 2-3 years more than that if he specializes, or goes to a medical school in the Caribbean (where getting into them is apparently easier than mainland medical schools, but to complete the training takes six years). So, it would be at least 6-8 years from now before he is a practicing doctor in the United States.
So, does anyone have any ideas of how to go about the solving the initial problem? What is the likelihood that the wages of American doctors will deflate in the next 6-8 years due to major shifts in how the American medical system is run? What sorts of changes ought one be looking for to answer this question: political, bureaucratic, technological, or cultural change?
edit: my friend in question has expressed interest in this thread, so if you want to make recommendations about or discuss medical schools in the U |
53b66b0a-6150-48cc-a8c9-04d5b1377e4b | trentmkelly/LessWrong-43k | LessWrong | Meetup : Boston - Defense Against the Dark Arts: the Ethics and Psychology of Persuasion
Discussion article for the meetup : Boston - Defense Against the Dark Arts: the Ethics and Psychology of Persuasion
WHEN: 28 May 2014 07:00:00PM (-0400)
WHERE: Citadel, 98 Elm St #1, Somerville, MA
How can you convince other people to agree with you? How can we make ourselves less likely to be swayed by the Dark Arts persuasion of others? And, of course, what are the ethical issues involved with all of this?
If you're the sort of person who finds these questions interesting, feel free to join us tonight as Jesse Galef discusses the psychology research on persuasion and its ethical implications. Meetup starts at 7pm, talk starts at 7:30pm.
Cambridge/Boston-area Less Wrong Wednesday meetups are once a month on the last Wednesday at 7pm at Citadel (98 Elm St Apt 1 Somerville, near Porter Square). All other meetups are on Sundays.
Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: The headline event. This starts promptly at 7:30pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
Discussion article for the meetup : Boston - Defense Against the Dark Arts: the Ethics and Psychology of Persuasion |
944f141e-fe01-4f0f-953c-51e737bae9d2 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AISN #24: Kissinger Urges US-China Cooperation on AI, China's New AI Law, US Export Controls, International Institutions, and Open Source AI
Welcome to the AI Safety Newsletter by the [Center for AI Safety](https://www.safe.ai/). We discuss developments in AI and AI safety. No technical background required.
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.
Listen to the AI Safety Newsletter for free on [Spotify.](https://spotify.link/E6lHa1ij2Cb)
---
China’s New AI Law, US Export Controls, and Calls for Bilateral Cooperation
---------------------------------------------------------------------------
**China details how AI providers can fulfill their legal obligations.** The Chinese government has passed several laws on AI. They’ve regulated [recommendation algorithms](https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/) and taken steps to mitigate the risk of [deepfakes](https://digichina.stanford.edu/work/translation-internet-information-service-deep-synthesis-management-provisions-draft-for-comment-jan-2022/). Most recently, they issued a new law governing [generative AI](https://www.chinalawtranslate.com/en/generative-ai-interim/). It’s less stringent than [earlier draft version](https://www.chinalawtranslate.com/en/comparison-chart-of-current-vs-draft-rules-for-generative-ai/), but the law remains more comprehensive in AI regulation than any laws passed in the US, UK, or European Union.
The law creates legal obligations for AI providers to respect intellectual property rights, avoid discrimination, and uphold socialist values. But as with many AI policy proposals, these are values and ideals, and it’s not entirely clear how AI providers can meet these obligations.
To clarify how AI providers can achieve the law's goals, a Chinese standards-setting body has released a draft outlining [detailed technical requirements](https://twitter.com/mattsheehan88/status/1714001598383317459?s=46). Here are some of the key details:
* AI companies must randomly sample their training data and verify that at least 96% of data points are acceptable under the law.
* After passing this first test, the training data must then be filtered to remove remaining content that violates intellectual property protections, censorship laws, and other obligations.
* Once the model has been trained, the provider must red-team it in order to identify misbehavior. Providers must create thousands of questions with which to test the model. The model should refuse to answer at least 95% of questions that would violate the law, while answering at least 95% of questions that are not illegal.
* Finally, the model’s answers to a set of questions about sensitive topics including personal information, discrimination, and socialist values must be acceptable in at least 90% of cases.
Other countries might be able to learn from this example of technically rigorous governance. For example, the US [AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) supports goals for AI systems such as “safety” and “privacy,” but it’s not exactly clear what this would mean in practice. Effective AI governance will require governments to build expertise and engage in the technical details of AI systems.
**United States tightens export controls on chips.** Last October, the United States issued [export controls](https://www.nytimes.com/2022/10/07/business/economy/biden-chip-technology.html) on high performance computer chips to Chinese individuals, firms, and government actors. American chip designer Nvidia promptly created new chips which [skirted just under the rule’s limits](https://www.reuters.com/technology/nvidia-tweaks-flagship-h100-chip-export-china-h800-2023-03-21/), and sold [$5 billion](https://www.tomshardware.com/news/chinese-companies-spend-big-on-nvidia-gpus-for-ai-projects) of these chips to Chinese companies.
The US has now [updated its rules](https://www.bis.doc.gov/index.php/about-bis/newsroom/2082) to close this loophole. Previously, chips faced export controls if they exceeded limits on both computational performance and the speed of communication with other chips. The new rules eliminate that second criteria, meaning that regardless of their communication speed, computer chips with high computational performance will face restrictions.
*The revised US export controls apply to all chips with high computational performance, regardless of communication speed (“interconnect bandwidth”).* [*Illustration*](https://twitter.com/ohlennart/status/1714319096035119228) *by Lennart Heim.***Kissinger urges cooperation between the U.S. and China on AI.** In a new [article](https://www.foreignaffairs.com/united-states/henry-kissinger-path-artificial-intelligence-arms-control) titled, “The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe,” former Secretary of State Henry Kissinger and Harvard professor Graham Allison said the following:
> “We have concluded that the prospects that the unconstrained advance of AI will create catastrophic consequences for the United States and the world are so compelling that leaders in governments must act now.”
>
>
The article draws an analogy to nuclear arms control agreements between the United States and the Soviet Union, saying “both Washington and Moscow recognized that if nuclear technology fell into the hands of rogue actors or terrorists within their own borders, it could be used to threaten them.” This recognition of collective risks allowed several agreements on nuclear arms control which improved the security of both nations.
They called on US President Joe Biden and Chinese President Xi Jinping to have a private, face-to-face discussion about AI risks, the steps their governments are taking to mitigate them, and opportunities for bilateral cooperation on AI safety. After this private discussion, they recommend creating “an advisory group consisting of US and Chinese AI scientists” to hold an ongoing discussion about AI risks and potential areas for cooperation. The article also voices support for creating an IAEA for AI that could enforce international safety standards.
Proposed International Institutions for AI
------------------------------------------
Last month, the Legal Priorities Project released [a report](https://www.legalpriorities.org/research/international-ai-institutions.html) reviewing various proposals for international AI institutions. Here, we summarize and discuss the report.
The report breaks down proposals for new international AI institutions into seven models:
1. **Scientific consensus-building.** An international institution might establish scientific consensus on policy-relevant AI-related questions. For example, it might follow the model of the Intergovernmental Panel on Climate Change (IPCC). This proposal has been criticized on the grounds that, in contrast to AI, climate change was well-understood before the creation of the IPCC.
2. **Political consensus-building and norm-setting.** An international institution might foster political consensus. Again, using climate as an example, it might follow the model of the United Nations Framework Convention on Climate Change. A challenge this proposal faces is balancing between breadth of membership and depth of alignment between members.
3. **Coordination of policy and regulation.** An international institution might coordinate national policies towards common AI-related goals. It might require states to adhere to regulations of AI development and deployment, following the model of the World Trade Organization (WTO), which regulates international trade.
4. **Enforcement of standards or restrictions.** Rather than set regulation, an international might also enforce regulation. For example, the International Atomic Energy Agency (IAEA) is tasked with deterring nuclear proliferation by monitoring national nuclear energy programs for misuse. An “IAEA for AI” is a popular proposal; however, it’s not clear whether safeguarding AI is sufficiently analogous to safeguarding nuclear energy to justify a focus on the IAEA.
5. **Stabilization and emergency response.** Some international institutions are designed to prepare for and respond to emergencies, such as the United Nations Office for Disaster Risk Reduction (UNDRR). However, in order to be relevant to AI risk, such a model would likely have to focus on preventing (rather than responding to) disasters.
6. **International joint research.** An international institution might organize and undertake a major research project. For example, it might follow the model of the European Organization for Nuclear Research (CERN). Proposals often suggest that such an institution accelerate AI safety research.
7. **Distribution of benefits and access.** Finally, an institution might provide conditional access to the benefits of AI, following, for example, the model of the IAEA’s nuclear fuel bank. Such an institution would have to balance the risk of proliferation of a risky technology with the benefit of equitable access.
History offers many lessons from previous efforts to govern emerging technologies and solve global coordination problems. Of course, we cannot merely mimic historical efforts. International cooperation to mitigate the risks of AI development will require creating new solutions.
Open Source AI: Risks and Opportunities
---------------------------------------
When Meta released their open source language model, Llama, they [wrote extensively](https://arxiv.org/abs/2302.13971) about their efforts to make the model safe. They evaluated risks of bias and misinformation, and took steps to mitigate these risks. But then they open-sourced the model, allowing anybody to change the model however they like. Can an open source model remain safe?
No, according to a new [paper](https://arxiv.org/abs/2310.03693). It shows that open source models which were designed to act safely can easily be fine-tuned to behave harmfully. They train Llama to produce harmful outputs, such as step-by-step advice on how to build chemical weapons.
*Fine-tuning can cause an AI system to behave harmfully.* [*Source*](https://arxiv.org/abs/2310.03693)*.***Even without open source, AI models can be made to misbehave.** Models which are open-sourced can be made to misbehave if the AI provider allows users to fine-tune the model. OpenAI initially did not allow fine-tuning for GPT-3.5, but since the release of Llama, they have begun to offer it. The paper shows that fine-tuning GPT-3.5 can bypass its safeguards and quickly cause harmful behavior.
Finally, without any fine-tuning access whatsoever, previous [research](https://newsletter.safe.ai/p/ai-safety-newsletter-17) has shown that jailbreak prompts and adversarial attacks can cause a supposedly safe AI system to misbehave.
**How does AI affect the offense-defense balance?** Just as AI will allow malicious actors to cause harm, it will enable stronger defenses against those attacks. But an important question is whether AI changes the offense-defense balance: Will it bring more benefit to attackers or defenders?
In traditional software, open source typically favors defenders. If a bug or security vulnerability in the code is revealed, an attacker might try to exploit it. But because developers can quickly fix these flaws once they’ve been revealed, overall the security of an open source system is often stronger.
The concern with AI is different. Attackers are not trying to find vulnerabilities in AI systems. Instead, AI can be used to exploit societal vulnerabilities that are not easy to fix. For example, a chatbot could help someone build a biological weapon that causes a pandemic. We might use AI to develop vaccines and track the disease’s spread. But many people might refuse a vaccine, and even with a rapid response, millions could die. In this scenario, AI provides an asymmetric benefit to attackers seeking to cause pandemics, without an equal corresponding benefit to defenders.
**Legislation considers the risks of open source AI.** The US, UK, and EU are working to lay the legislative foundation for AI. The UK’s Competition and Market Authority appears to [support open-source AI](https://time.com/6316336/uk-ai-regulation-competition/) because the open-source models promote competition, while [licensing requirements](https://dd80b675424c132b90b3-e48385e382d2e5d17821a5e1d8e4c86b.ssl.cf1.rackcdn.com/external/09072023bipartisanaiframework.pdf) recommended by US Senators Richard Blumenthal and Josh Hawley would likely slow down the open-source release of highly-capable models. The EU is currently reconciling three drafts of its [AI Act](https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement), each of which propose different ways to govern open-source AI.
Links
-----
* Drones in Ukraine may be [killing people without human oversight](https://www.newscientist.com/article/2397389-ukrainian-ai-attack-drones-may-be-killing-without-human-oversight/).
* A new investigation of [biological weapons risks from AI systems](https://www.rand.org/pubs/research_reports/RRA2977-1.html) is ongoing.
* The [effective accelerationists](https://twitter.com/DanHendrycks/status/1651740865159901184) have a new [manifesto](https://a16z.com/the-techno-optimist-manifesto/).
* The UK releases the [agenda for the AI Safety Summit](https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme).
* SEC chairperson urges lawmakers to [protect the financial system](https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac) from AI risks.
* [Competitive pressures](https://www.theinformation.com/articles/googles-wartime-urgency-to-chase-chatgpt-shakes-up-culture) have led Google to hasten their language model development.
* [G7 countries](https://www.bloomberg.com/news/articles/2023-10-06/g-7-plans-to-ask-ai-makers-to-agree-to-watermarks-audits) are preparing to ask AI companies to voluntarily commit to certain safety practices.
* Members of Congress urge President Biden to [incorporate the AI Bill of Rights](https://www.markey.senate.gov/news/press-releases/senator-markey-representative-jayapal-lead-colleagues-in-urging-president-biden-to-implement-ai-bill-of-rights-in-upcoming-executive-order) into his [upcoming executive order on AI](https://www.politico.com/news/2023/10/12/biden-government-standards-ai-00121284).
* The United States Space Force has [temporarily paused](https://www.bloomberg.com/news/articles/2023-10-11/space-force-pauses-generative-ai-based-on-security-concerns#xj4y7vzkg) the use of chatbots and other AI systems because of security concerns.
* [New Jersey](https://nj.gov/governor/news/news/562023/approved/20231010b.shtml) establishes an AI Task Force.
* The new [Conference on Language Modeling](https://colmweb.org/cfp.html) is seeking papers on safety, scaling, and other topics.
* [60 Minutes](https://www.cbsnews.com/video/geoffrey-hinton-ai-60-minutes-video-2023-10-08/) interviews Turing Award-winner Geoffrey Hinton about AI risks.
See also: [CAIS website](https://www.safe.ai/), [CAIS twitter](https://twitter.com/ai_risks?lang=en), [A technical safety research newsletter](https://newsletter.mlsafety.org/), [An Overview of Catastrophic AI Risks](https://arxiv.org/abs/2306.12001), and our [feedback form](https://forms.gle/EU3jfTkxfFgyWVmV7)
Listen to the AI Safety Newsletter for free on [Spotify.](https://spotify.link/E6lHa1ij2Cb)
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions. |
94aafe0f-fb9e-4d99-bfa9-6005b5f2e8a1 | trentmkelly/LessWrong-43k | LessWrong | An Alien God
"A curious aspect of the theory of evolution," said Jacques Monod, "is that everybody thinks he understands it."
A human being, looking at the natural world, sees a thousand times purpose. A rabbit's legs, built and articulated for running; a fox's jaws, built and articulated for tearing. But what you see is not exactly what is there...
In the days before Darwin, the cause of all this apparent purposefulness was a very great puzzle unto science. The Goddists said "God did it", because you get 50 bonus points each time you use the word "God" in a sentence. Yet perhaps I'm being unfair. In the days before Darwin, it seemed like a much more reasonable hypothesis. Find a watch in the desert, said William Paley, and you can infer the existence of a watchmaker.
But when you look at all the apparent purposefulness in Nature, rather than picking and choosing your examples, you start to notice things that don't fit the Judeo-Christian concept of one benevolent God. Foxes seem well-designed to catch rabbits. Rabbits seem well-designed to evade foxes. Was the Creator having trouble making up Its mind?
When I design a toaster oven, I don't design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils. It would be a waste of effort. Who designed the ecosystem, with its predators and prey, viruses and bacteria? Even the cactus plant, which you might think well-designed to provide water fruit to desert animals, is covered with inconvenient spines.
The ecosystem would make much more sense if it wasn't designed by a unitary Who, but, rather, created by a horde of deities—say from the Hindu or Shinto religions. This handily explains both the ubiquitous purposefulnesses, and the ubiquitous conflicts: More than one deity acted, often at cross-purposes. The fox and rabbit were both designed, but by distinct competing deities. I wonder if anyone ever remarked on the seemingly excellent evidence thus provide |
a7e5e9e4-4904-4147-a1f0-510c91b5ae48 | trentmkelly/LessWrong-43k | LessWrong | Temporary Housing in the Boston area
Hey,
I'm doing an internship near MIT this fall and it hadn't occurred to me to think of trying to live with other aspiring rationalists. As expected, it's tough to find a convenient short-term place when so many students are fine signing 1-year leases..
Any of you possibly open to subletting a room to me Sept-Dec? Extra awesome if you're in Cambridge. |
54acc543-e82b-4838-a54b-af6c4fe1af9e | trentmkelly/LessWrong-43k | LessWrong | Easily Top 20%
Cross-posted from Putanumonit.
----------------------------------------
I’ve written a lot about approaching dating as a cooperative game: you and your potential partners against the assholes, the algorithm, the politics of skewed ratios, and the sex-negative society. If you’re a straight guy, you have to be rooting for the women you meet to win.
And yet, most of the comments I get treat the idea of cooperation as anathema, a fantasy available only to the GigaChads who monopolize the world’s tiny handful of generous women and are simultaneously deluded by them. They talk about “the research”, which mostly ends up being that one worthless paper on dark triad attractiveness. And when I give advice based not on p-hacked studies but on what worked in my own life, they inform me that I simply cannot fathom what it’s like for the “bottom 80%” of men doomed to eternal lonely suffering:
> The issue with Jacob is that, percentage-wise, he’s easily in top 20%. White/Jewish, quite handsome, in his 30s, living in NYC, working in finance, with top 1% IQ, previous military experience, and a small celebrity status. Despite of all of this, the best he could secure is a “poly marriage”.
I never had the patience to argue with these commenters and I’m going to start blocking them for sheer tediousness. Those celibate men who declare themselves beyond redemption deserve their safe spaces, but Putanumonit will not be one. That’s not who I’m writing for.
I’m writing my blog, in large part, for younger me. For Jacob from a couple of weeks ago who hadn’t read some particular book yet, or for teenage Jacob confused about the basics of how people’s minds work. When I write about dating it’s for the Jacob of not-that-long-ago who found dating frustrating and difficult, someone whom only his grandma would assuredly anoint as “easily top 20%”. I wish that younger version of me would have had my current posts to read, sans the humorless losers in the comments posting their dreary screeds.
|
a4f7a719-95c8-4739-98e9-66923106dfa3 | trentmkelly/LessWrong-43k | LessWrong | Evaporative Cooling of Group Beliefs
Early studiers of cults were surprised to discover that when cults receive a major shock—a prophecy fails to come true, a moral flaw of the founder is revealed—they often come back stronger than before, with increased belief and fanaticism. The Jehovah’s Witnesses placed Armageddon in 1975, based on Biblical calculations; 1975 has come and passed. The Unarian cult, still going strong today, survived the nonappearance of an intergalactic spacefleet on September 27, 1975.
Why would a group belief become stronger after encountering crushing counterevidence?
The conventional interpretation of this phenomenon is based on cognitive dissonance. When people have taken “irrevocable” actions in the service of a belief—given away all their property in anticipation of the saucers landing—they cannot possibly admit they were mistaken. The challenge to their belief presents an immense cognitive dissonance; they must find reinforcing thoughts to counter the shock, and so become more fanatical. In this interpretation, the increased group fanaticism is the result of increased individual fanaticism.
I was looking at a Java applet which demonstrates the use of evaporative cooling to form a Bose-Einstein condensate, when it occurred to me that another force entirely might operate to increase fanaticism. Evaporative cooling sets up a potential energy barrier around a collection of hot atoms. Thermal energy is essentially statistical in nature—not all atoms are moving at the exact same speed. The kinetic energy of any given atom varies as the atoms collide with each other. If you set up a potential energy barrier that’s just a little higher than the average thermal energy, the workings of chance will give an occasional atom a kinetic energy high enough to escape the trap. When an unusually fast atom escapes, it takes with it an unusually large amount of kinetic energy, and the average energy decreases. The group becomes substantially cooler than the potential energy barrier around it. |
874c1424-6e2c-4bc8-8c4d-6e1a154d5091 | trentmkelly/LessWrong-43k | LessWrong | Massachusetts Tick-Borne Disease Distribution
As I wrote yesterday, MA is no longer reporting good data to the CDC on Lyme disease. Starting in 2019, however, MA has been publishing tick-borne disease reports. They include this misleading chart:
The caption includes "Although there are differences in the rate of patient visits, this shows that people are exposed to ticks throughout all of Massachusetts and should take recommended steps to reduce the chance of being bitten."
One issue is that they've chosen to chart the rate of visits and not the rate of diagnoses, but the main issue is that the scale hides the degree of variation between counties. Pulling the numbers and manually dividing by population estimates, I see:
The rate in the Islands (Dukes and Nantucket Counties) is 10 times any other MA county!
(Even though these are also vacation areas, people are categorized by their home county not their infection county.)
Now, I agree with them that taking precautions is worth it when walking through tick-friendly terrain anywhere in the state. But there are still degrees of precautions, from clothes, to tick checks, to choosing different activities, to choosing other counties, and the report should emphasize the difference in risk so people can make good decisions.
Comment via: facebook |
301544db-8c90-4106-8ff6-fb55cb38e4b3 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Alignment YouTube Playlists
I created two AI Alignment playlists on Youtube. One that is slide-heavy and the other is not. I separated them into two playlists for two reasons.
1. It’s useful to separate for a dataset I am working on.
2. Media is easier to consume when you don’t have to pay attention to the slides and pictures someone is describing.
Not slide-heavy (currently 216 videos): <https://youtube.com/playlist?list=PLTYHZYmxohXp0xvVJmMmpT_eFJovlzn0l>
Slide-heavy (currently 366 videos): <https://youtube.com/playlist?list=PLTYHZYmxohXpn5uf8JZ2OouB1PsDJAk-x>
If you would like to contribute and add more videos to the playlists or create new Alignment-relevant playlists, let me know!
If you like access to the audio and youtube auto-generated subs in .txt format, I have stored them here: <https://drive.google.com/drive/folders/1qVo4TyHKrsJvbJ3UrIOLW45j_7_wwnbZ?usp=sharing>
I've batched up the files into buckets of 90-ish hours (except for the final bucket which is less) since I plan on loading them into otter.ai and that website only accepts 100 hours per user (per month). Additionally, if you would like to help load some of the audio files in your own otter.ai account, please let me know! I want to create transcripts of the audio files and add them to a dataset very soon. |
28dfc9b0-b86f-4b03-9af4-f916fff4c560 | trentmkelly/LessWrong-43k | LessWrong | How have you become more hard-working?
I'd be curious to hear stories of people who have successfully become more hard-working, especially if they started out as not particularly hard-working. Types of things I can imagine playing a role or know have played a role for some people:
* Switching roles to something that is conducive to hard work, e.g. a fast-paced environment with lots of concrete tasks and fires to put out.
* Medication, e.g. ADHD medication
* Internal work, e.g. specific types of therapy, meditation, self-help reading, or other types of reflection.
* Productivity hacks, e.g. more accountability, putting specific systems in place
* Motivational events, arguments, or life periods, e.g. working a normal corporate jobs where long hours are expected
* Switching work environment to something that is conducive to hard work, e.g. always working in an office with others who hold you accountable
This curiosity was triggered by realising that I know of very few people that have become substantially harder-working over their late adolescence/adult life. I also noticed that the few people that I know successfully and seemingly permanently increased their mental health/work satisfaction always were hard-working even when they were unhappy (unless they were in the middle of burn-out or similar).
People becoming more hard-working seems really useful but I haven't seen much in terms of evidence that it's feasible or effective methods. If there are books or studies on this topic, those would also be welcome. Thank you! |
50f4962b-d2d4-4df5-865b-d3a8f3c47119 | trentmkelly/LessWrong-43k | LessWrong | Trip from Ottawa, Canada to NYC on weekend of April 2
I'm contemplating I've decided to make a trip from Ottawa, Canada to New York City on the weekend of April 2-3 specifically in the hopes of meeting some members of the LW NYC Chapter (and EY as well, if I can manage it). Since I don't know anyone in the city, I'm hoping this discussion post will generate interest in having some kind of get-together on that weekend.
Anyone in the national capital region is welcome to contact me in comments or by PM to get in on the action. (I've already canvassed XFrequentist and received a positive reply.) Cosmos has offered to let me crash at his place, but I haven't asked him about extra space for other folk. |
df8366d7-a2b4-465a-bde4-e7c643ea42c9 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Carl Sagan once told a parable of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.”Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse.Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.The rationalist virtue of empiricism is supposed to prevent us from making this class of mistake. We’re supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant’s problem runs deeper, and cannot be cured with such simple advice. It’s not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there’s a dragon in your garage, then you can expect to open up the door and see a dragon. If you don’t see a dragon, then that means there’s no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.No, this invisibility business is a symptom of something much worse.Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief.”1And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you merely believe in belief. What’s virtuous is to believe, not to believe in believing; and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it”—not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable. There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence “It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green.” (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that they imagine will result if they publicly confess they were wrong.2 Maybe the dragon-claimant flinches away from the prospect of admitting to themselves that there is no dragon, because it conflicts with their self-image as the glorious discoverer of the dragon, who saw in their garage what all others had failed to see.If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.While I disagree with Dennett on some details and complications, I still think that Dennett’s notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say, “The dragon-claimant does not believe there is a dragon in their garage; they believe it is beneficial to believe there is a dragon in their garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in their garage, and makes excuses as if they believed in the belief.You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it’s happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there’s a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don’t believe.If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe. They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong.3 But when someone makes up excuses in advance, it would seem to require that belief and belief in belief have become unsynchronized.1 Daniel C. Dennett, Breaking the Spell: Religion as a Natural Phenomenon (Penguin, 2006).2 Although, in fact, a rationalist would congratulate them, and others are more likely to ridicule the claimant if they go on claiming theres a dragon in their garage.3 Although belief in belief can still interfere with this, if the belief itself is not absolutely confident."
The above is a chunk of text from a post created by a user on the website LessWrong.
Can you further elaborate on the concept of "belief in belief" and how it can potentially lead to cognitive dissonance?
The concept of "belief in belief" is a fascinating psychological phenomenon that can be understood as a form of metacognition - thinking about one's own thinking. It involves maintaining a belief about a belief, often as a way to reconcile internal conflicts or contradictions. This is different from simply having a belief; it's a second-layer belief that one holds about the appropriateness, value, or virtue of the primary belief.
For example, someone might not believe in ghosts (primary belief), but they believe that they should believe in ghosts (belief in belief) because their family or community does. This person may not anticipate experiencing anything ghostly, but they hold onto the idea that it's important or beneficial to believe in ghosts.
This concept becomes particularly interesting when it leads to cognitive dissonance, a state of mental discomfort or tension that arises from holding two or more contradictory beliefs, values, or attitudes. In the case of "belief in belief", cognitive dissonance might occur when a person's experiences or rational understanding conflict with their belief in belief.
Taking the previous example, suppose this person has a strong scientific background and their understanding of the physical world tells them that ghosts, as traditionally understood, can't exist. Yet, because of their belief in the importance of believing in ghosts, they experience cognitive dissonance. They may feel torn between their rational understanding and their perceived obligation to maintain certain beliefs.
In an attempt to reduce this discomfort, individuals might engage in various strategies. They might avoid information that contradicts their belief in belief, reinterpret contradictory evidence in a way that supports their belief, or seek affirmation from others who hold the same belief. These strategies can lead to confirmation bias, a tendency to favor information that confirms one's existing beliefs, and can further entrench the belief in belief, even in the face of strong contradictory evidence.
In essence, the concept of "belief in belief" highlights the complex, layered nature of human cognition and belief systems, and how these can sometimes lead to internal conflict and cognitive dissonance. It underscores the importance of metacognition and self-awareness in recognizing and navigating these complexities.
Discuss the role of confirmation bias in strengthening the "belief in belief" phenomenon, particularly in the context of cognitive dissonance.
Confirmation bias plays a significant role in reinforcing the "belief in belief" phenomenon, especially when cognitive dissonance is involved. Confirmation bias is the tendency to seek out, interpret, and remember information in a way that confirms one's preexisting beliefs or hypotheses, while giving disproportionately less consideration to alternative possibilities.
When a person experiences cognitive dissonance due to a conflict between their primary belief and their belief in belief, they may be motivated to reduce this uncomfortable tension. One way to do this is through confirmation bias. They might selectively seek out information or interpret ambiguous information in a way that supports their belief in belief. This allows them to maintain their belief in belief without having to significantly alter their primary belief.
For example, let's consider a person who doesn't personally believe in the existence of extraterrestrial life but holds a belief in belief that they should believe in it because of their social circle's fascination with the topic. They might selectively pay attention to and remember reports of UFO sightings or alien encounters that their friends share, while ignoring or forgetting skeptical analyses or scientific arguments against the existence of extraterrestrial life. This selective attention and memory reinforce their belief in belief, making it feel more justified and reducing the cognitive dissonance they experience.
Furthermore, confirmation bias can lead to a feedback loop that further strengthens the belief in belief. The more information they find that seems to confirm their belief, the more convinced they become that their belief is correct, and the more they continue to seek out confirming information. This can make the belief in belief resistant to change, even in the face of strong contradictory evidence.
In this way, confirmation bias can serve as a psychological mechanism that allows individuals to maintain their belief in belief and manage cognitive dissonance, albeit at the cost of a more accurate understanding of reality. It highlights the powerful influence of our cognitive biases on our belief systems and the way we interpret and interact with the world.
In what ways can the phenomenon of "belief in belief", strengthened by confirmation bias, impact decision-making processes, particularly in high-stakes situations?
The phenomenon of "belief in belief", particularly when reinforced by confirmation bias, can significantly impact decision-making processes, especially in high-stakes situations. Here are some ways this can occur:
1. **Impaired Judgment**: The belief in belief can lead to impaired judgment by causing individuals to overlook or dismiss important information that contradicts their belief. This can result in decisions that are not based on a comprehensive evaluation of all available evidence, but rather on a skewed interpretation of reality.
2. **Increased Risk-Taking**: In high-stakes situations, the desire to confirm one's belief in belief can lead to increased risk-taking. For instance, a person might make risky investments based on their belief in a particular economic theory, even when evidence suggests that it might not be a sound strategy.
3. **Resistance to Change**: When a person's belief in belief is strong, they may become resistant to change, even when change is necessary or beneficial. This can prevent them from adapting to new circumstances or adjusting their strategy in response to new information.
4. **Groupthink**: In group settings, belief in belief can contribute to groupthink, a phenomenon where the desire for group consensus overrides people's common sense desire to present alternatives, critique a position, or express an unpopular opinion. This can lead to decision-making that is suboptimal or even disastrous.
5. **Polarization**: In situations involving negotiation or conflict resolution, belief in belief can lead to polarization, where parties become more entrenched in their positions as they seek out information that confirms their beliefs and dismiss information that contradicts them. This can hinder compromise and make it more difficult to reach a resolution.
6. **Confirmation Trap**: In high-stakes decision-making, individuals might fall into a confirmation trap, seeking information to support their existing beliefs and ignoring contradictory evidence. This can lead to overconfidence in their decisions, which can be particularly dangerous in high-stakes situations.
In essence, belief in belief, when strengthened by confirmation bias, can distort perception, impair judgment, and lead to decision-making that is not grounded in a balanced and objective assessment of the situation. It underscores the importance of critical thinking, open-mindedness, and self-awareness in decision-making processes. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.