id | source | formatted_source | text |
|---|---|---|---|
e8a3d678-5d89-4cb6-8bcb-9e4482761181 | StampyAI/alignment-research-dataset/arbital | Arbital | Disambiguation
Several distinct concepts with comparable importance use this page's name; this page helps readers find what they're looking for. |
7b38d60a-294e-442a-89ab-57d713902b92 | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes April 2016
Another month, another rationality quotes thread. The rules are:
* Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
* Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
* No more than 5 quotes per person per monthly thread, please. |
dea6babb-7baa-4b83-b89d-4e7a0673dfd6 | trentmkelly/LessWrong-43k | LessWrong | 6. The Mutable Values Problem in Value Learning and CEV
Part 6 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
TL;DR In Parts 1 through 5 I discussed how to choose an ethical system, and implications for societies containing biological and/or uploaded humans and aligned AIs, and perhaps even other sapient species, but not all sentient animals. So far I've just been assuming that we have (somehow) built aligned AI. Now I'd like to look at how all of this relates to the challenge of achieving that vital goal: how we might align superintelligent AI, specifically using approaches along the lines of value learning, AI-assisted Alignment, or Coherent Extrapolated Volition (CEV) — or indeed any similar "do what I mean" kind of approach. The mutability of human values poses a major challenge to all of these: "Do what I want" or "do what I mean" is a lot less well-defined once ASI is in a position to affect that directly, rather than just by misinterpretation. Below I outline and critique a number of possible solutions: this is challenging, since when setting the terminal goal for ASI there is a strong tension between controlling the outcome and allowing our descendants the freedom to control their own destiny. Without a solution to strong corrigibility, we can only set the terminal goal once, ever, which privileges the views of whatever generation gets to do this. The possibilities opened up by genetic engineering and cyborging make the mutable values problem far more acute, and I explore a couple of toy examples from the ethical conundrums of trying to engineer away psychopathy and war. Finally I suggest a tentative proposal for a compromise solution for mutable values, which builds upon the topics discussed in the previous parts of the sequence.
In what follows I'm primarily going to discuss a future society that is aligning its Artificial Superhuman Intelligences (ASIs) either using value learning, or some nearly-functionally-equivalent form of AI-assisted alignment, such that t |
1e7f0ea3-2c57-4a06-b918-4c83f2596dcf | trentmkelly/LessWrong-43k | LessWrong | The Third Circle
Previously: The First Circle, The Second Circle
Epistemic Status: Having one’s fill
The third circle took place at Luna Labs. After the second circle, the decision was made to bring in a professional. The New York rationalist group, together with several others in the Luna orbit, gathered at quite the swanky little space to form another group of about twenty. This time, one of the most experienced out there would be leading us. She did not lack for confidence.
As an introduction, the rules were again explained and we went around saying what we were reading lately. Speak your personal truth, no speculation, ask if you're curious, stay in the moment, everything for connection and all that.
We began with a series of paired exercises. We stare into each other's eyes. We say things we are feeling or sensing, and what we feel the other person is feeling, and what we feel about that and how accurate it was. We share what our biggest problems are, and how we feel about that.
It illustrates a different mode of thinking, of what to pay attention to. It was interesting, engaging and quite pleasant.
It also demonstrated how easy it is to fool your brain into thinking you’re making a deep connection with someone, that there’s suddenly definitely a thing there, simply by holding eye contact with someone and paying attention to each other. That doesn’t mean there wasn’t an actual connection with my partner. I think there was, she’s been to our Friday night dinners before, I like her a lot and I hope we get to be good friends. Despite that, it was obvious the circumstances were tricking my brain’s heuristics in ways I had to keep reminding myself to disregard.
Yet another way in which saying the unfiltered thoughts on your mind is tricky, if not impossible.
What was odd was that these exercises took up an hour and a half, leaving only half an hour for the actual circle. Seemed disproportionate. We’d come for the thing. Was it so far out of our reach we needed this much prep |
1404937c-e4f6-4dc9-9220-6a0cc7b49f54 | trentmkelly/LessWrong-43k | LessWrong | On Media Synthesis: An Essay on The Next 15 Years of Creative Automation
One of my favorite childhood memories involves something that technically never happened. When I was ten years old, my waking life revolved around cartoons— flashy, colorful, quirky shows that I could find in convenient thirty-minute blocks on a host of cable channels. This love was so strong that I thought to myself one day, "I can create a cartoon." I'd been writing little nonsense stories and drawing (badly) for years by that point, so it was a no-brainer to my ten-year-old mind that I ought to make something similar to (but better than) what I saw on television.
The logical course of action, then, was to search "How to make a cartoon" on the internet. I saw nothing worth my time that I could easily understand, so I realized the trick to play— I would have to open a text file, type in my description of the cartoon, and then feed it into a Cartoon-a-Tron. Voilà! A 30-minute cartoon!
Now I must add that this was in 2005, which ought to communicate how successful my animation career was.
Two years later, I discovered an animation program at the local Wal-Mart and believed that I had finally found the program I had hitherto been unable to find. When I rode home, I felt triumphant in the knowledge that I was about to become a famous cartoonist. My only worry was whether the disk would have all the voices I wanted preloaded.
I used the program once and have never touched it since. Around that same time, I did research on how cartoons were made— though I was aware some required many drawings, I was not clear on the entire process until I read a fairly detailed book filled with technical industry jargon. The thought of drawing thousands of images of singular characters, let alone entire scenes, sounded excruciating. This did not begin to fully encapsulate what one needed to create a competent piece of animation— from brainstorming, storyboarding, and script editing all the way to vocal takes, music production, auditory standards, post-production editing, union rules, |
d02b808d-055a-434e-9857-bff17de707ef | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Virtue of Narrowness
Today's post, The Virtue of Narrowness, was originally published on 07 August 2007. A summary (taken from the LW wiki):
> It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was The Proper Use of Doubt, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
2782ea05-7fd4-42c9-911c-70fcf269d225 | trentmkelly/LessWrong-43k | LessWrong | The Intentional Agency Experiment
We would like to discern the intentions of a hyperintelligent, possibly malicious agent which has every incentive to conceal its evil intentions from us. But what even is intention? What does it mean for an agent to work towards a goal?
Consider the lowly ant and the immobile rock. Intuitively, we feel one has (some) agency and the other doesn't, while a human has more agency than either of them. Yet, a sceptic might object that ants seek out sugar and rocks fall down but that there is no intrinsic difference between the goal of eating yummy sweets and minimising gravitational energy.
Intention is a property that an agent has with respect to a goal. Intention is not a binary value, a number or even a topological vector space. Rather, it is a certain constellation of counterfactuals.
***
Let $W$ be a world, which we imagine as a causal model in the sense of Pearl: a directed acyclic graph with nodes and attached random variables $N_0, \dots, N_k$. Let $R$ be an agent. We imagine $R$ to be a little robot - so not a hyperintelligent malignant AI - and we'd like to test whether it has a goal $G$, say $G = (N_0 = 10)$. To do so we are going to run an Intentional Agency Experiment: we ask $R$ to choose an action $A_1$ from its possible actions $B = \{a_1, \dots, a_n\}$.
Out of the possible actions $B$, one action $a_{best}$ is the 'best' action for $R$ if it has goal $G = (N_0 = 10)$, in the sense that $P(G \mid do(a_{best})) \geq P(G \mid do(a_i))$ for $i = 1, \dots, n$.
If $R$ doesn't choose $A_1 = do(a_{best})$, great! We're done; $R$ doesn't have goal $G$. If $R$ does choose $A_1 = do(a_{best})$, we provide it with a new piece of (counterfactual) information, $P(G \mid A_1) = 0$, and offer it the option of changing its action. Among the remaining actions there is one next-best action $a_{nbest}$. Given the information $P(G \mid A_1) = 0$, if $R$ does not choose $A_2 = do(a_{nbest})$ we stop; if $R$ does, we provide it with the information $P(G \mid A_2) = 0$ and continue as before.
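The experiment loop is mechanical enough to sketch in code. A minimal illustration (mine, not the post's; it assumes a finite action set, an oracle `p_goal(a)` returning $P(G \mid do(a))$ in the causal model, and abstracts the agent as a callable):

```python
def intentional_agency_experiment(agent, actions, p_goal):
    """Run the Intentional Agency Experiment for goal G.

    agent(remaining, ruled_out) -> the action the agent picks, given which
        actions are still available and which have been counterfactually
        ruled out (P(G | a) = 0 announced for each a in ruled_out).
    p_goal(a) -> P(G | do(a)) in the causal model.

    Returns the number of rounds in which the agent kept choosing the
    best remaining action for G: a crude, graded agency score.
    """
    remaining = list(actions)
    ruled_out = []
    rounds = 0
    while remaining:
        best = max(remaining, key=p_goal)      # a_best among remaining actions
        if agent(remaining, ruled_out) != best:
            break                              # deviation: stop the experiment
        rounds += 1                            # one more round of goal-tracking
        ruled_out.append(best)                 # announce P(G | best) = 0
        remaining.remove(best)
    return rounds
```

The count of rounds in which the agent's choice keeps tracking the shifting best action is exactly the graded notion of agency the next paragraph describes.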
At each round we assign more and more agency to $R$. Rather than a binary 'Yes, $R$ has agency' or 'No, $R$ has no age |
1f8dc83c-b7a8-4b9d-a190-a5b5a3fdbca3 | trentmkelly/LessWrong-43k | LessWrong | Quadratic Voting and Collusion
Quadratic voting is a proposal for a voting system that ensures participants cast a number of votes proportional to how much they care about the issue, by making the marginal cost of each additional vote increase linearly - see this post by Vitalik for an excellent introduction.
One major issue with QV is collusion - since the marginal cost of buying one vote is different for different people, if you could spread a number of votes out across multiple people, you could buy more votes for the same amount of money. For instance, suppose you and a friend have $100 each, and you care only about Cause A and they care only about Cause B, and neither of you care about any of the other causes up for vote. You could spend all of your $100 on A and they could spend all of theirs on B, or you could both agree to each spend $50 on A and $50 on B, which would net √2 times the votes for both A and B as opposed to the default.
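The √2 factor falls straight out of the quadratic cost function. A quick check, assuming $v$ votes cost $v^2$ dollars (so a $100 budget buys 10 votes):

```python
budget = 100.0

# Default: you put all $100 on A; your friend puts all $100 on B.
votes_a_default = budget ** 0.5                 # 10.0 votes for A

# Collusion: each of you splits $50 on A and $50 on B.
votes_a_collusion = 2 * (budget / 2) ** 0.5     # 2 * sqrt(50) ≈ 14.14 votes for A

print(votes_a_collusion / votes_a_default)      # ≈ 1.4142 = sqrt(2)
```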
The solution generally proposed in response to this issue is to ensure that the vote is truly secret, to the extent that you cannot even prove to anyone else who you voted for. The thinking is that this creates a prisoner's dilemma where by defecting, you manage to obtain both the $50 from your friend and also the full $100 from yourself for your own cause, and that because there is no way to confirm how you voted, there are no possible externalities to create incentives for not defecting.
Unfortunately, I have two objections to this solution, one theoretical and one practical. The theoretical objection is that if the two agents are able to accurately predict each others' actions and reason using FDT, then it is possible for the two agents to cooperate à la FairBot - this circumvents the inability to prove what you voted for after the fact by proving ahead of time what you will vote. The practical objection is that people tend to cooperate in prisoner's dilemmas a significant amount of the time anyways, and in general a lot of people tend to uphold prom |
a9aef2d0-0619-408d-99e2-ec97e03f12e4 | trentmkelly/LessWrong-43k | LessWrong | Possible Cockatrice in written form
My various interweb browsings stumbled me upon a potential Cockatrice in written, philosophical form. I've thus far read through the first chapter, and it is less anti-rational than most philosophical writings.
I'm reading through it right now, and will provide my feedback when I'm done, likely as a front-page post.
Personally, I'm a Fatalist, with some sort of Weird Soldier Ethic, who plans to go out the same way that Hunter did (if the cops don't get me first), but I've got a bunch of nonsense to Write first. I figure that'll make me somewhat immune. That aside, I doubt it's a real cockatrice - or we would've heard about it before.
It is a strong exercise in Nihilism. So, with those cautions given, I offer it to you: an extensive suicide letter.
Tip of the hat to this guy.
|
42af8d64-c37b-496a-8239-104559dfc237 | trentmkelly/LessWrong-43k | LessWrong | What is the point of College?
Specifically, is it worth investing time to gain knowledge?
So a bit of background about me before I go into the question. I am a sophomore studying Mechanical Engineering in India.
I have noticed that I have forgotten about 80-90% of the coursework that I did during the first year. Don't get me wrong, I studied the courses properly and not just for the test. Still, if you were to ask me how much of the course I remember now, I would at the very best remember the general idea of the stuff I read.
This is very startling from a long-term perspective. College work in India is generally more overloaded than in other countries (from what I have observed), so what this means is that people consume a lot of knowledge in a very short amount of time and forget it before they can make any use of it at all (leaving aside the question of whether the knowledge is useful in the first place). This occurs despite the best intentions to learn, and especially so with complicated stuff. I am not just talking about facts here: whole concepts and ideas of the subject tend to be forgotten sooner than we can find any use for them. I am pretty confident this applies in most colleges (India or not).
This throws up a host of questions for me. The major premise/reason for attending college is to gain knowledge that I can further apply to job/life. The other touted premise is "Learning to Learn or Solve Problems". If that were the objective, I feel the college apparatus is a very ineffective way of achieving it (will elaborate on this if required). Assuming that the former premise is the actual one, I do not think the college system accounts for my forgetting curve. Even if you were to take proactive steps and learn the material properly, you are still likely to forget it before you use it. It is impractical to practice spaced repetition for multiple semesters' worth of coursework. And if you were to do it, the question here (which I will go into detail on further) is: is it worth it to put this much effort into pre-learning it, effort into remembering it, and then finally using some small portion |
a00c6007-ba34-4293-b02e-979a5452ba09 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Details on how an IAEA-style AI regulator would function?
Is anyone aware of work going into detail on how an international regulator for AI would function, how compliance might be monitored etc? |
cb8f2832-a0d0-45c4-ad9b-fcf79fee4f65 | trentmkelly/LessWrong-43k | LessWrong | Apply to be a TA for TARA
AI Safety - Australia & New Zealand (AIS ANZ) has launched TARA, its first technical course, and is seeking a Teaching Assistant (TA) for the inaugural cohort.
TARA (Technical Alignment Research Accelerator) is a free 14-week course based on ARENA's curriculum, running March-May 2025 in Australia & New Zealand.
Key TA Role Details:
- $70/hr (AUD), ~10 hours/week including:
- Saturday sessions (9:30am - 5pm Sydney time)
- Flexible weekday Slack support
- Must have ARENA/MATS experience (or equivalent)
- Applications close January 22
- Early applications encouraged - interviewing on rolling basis
Timeline:
- Feb 24 - Feb 28: TA curriculum review
- Feb 24: Online ice-breaker
- Mar 1: Program launch
Full role description and application can be found here
Please share with anyone you think might be interested! Questions can be directed to yanni@aisafetyanz.com.au |
ec082c06-5195-45a4-b2fc-bc9849660cc5 | trentmkelly/LessWrong-43k | LessWrong | Magic Brain Juice
Shorter and less Pruned due to CFAR.
> A grandfather is talking with his grandson and he says there are two wolves inside of us which are always at war with each other.
> One of them is a good wolf which represents things like kindness, bravery and love. The other is a bad wolf, which represents things like greed, hatred and fear.
> The grandson stops and thinks about it for a second then he looks up at his grandfather and says, “Grandfather, which one wins?”
> The grandfather quietly replies, the one you feed.
I circumambulated the idea of meta-processes with the wonderfully inscrutable SquirrelInHell recently, and a seed of doubt has been circling in my head like a menacing sharkfin ever since.
At grave peril of strawmanning, a first-order approximation to SquirrelInHell's meta-process (what I think of as the Self) is that it is the only process in the brain with write access, the power of self-modification. All other brain processes are to treat the brain as a static algorithm and solve the world from there.
It seems to me that due to the biology of the brain there is a very serious issue with isolating the power of self-modification to the meta-process. After all, every single thought and experience causes self-modification at the neural level.
This post is another step towards a decision theory for human beings.
Unintentional Self-Modification
There is a central theme buried in my post The Solitaire Principle about building habits across time: human beings are not rational agents. We are not even “bounded-rationality agents,” whatever that means. We are agents who cannot simply act because every action is accompanied by self-modification.
Every time you take an action, the associated neural pathways are bathed in the magic brain juice [citation needed]. When you go to the gym, it becomes easier to decide to go to the gym next time. The activation energy for the second blog post you write is lower than that of the first. Acquired tastes are a real thing. After rep |
179e495b-9e3a-4f04-b712-7c0feba3ddbf | trentmkelly/LessWrong-43k | LessWrong | Grokking Beyond Neural Networks
We recently authored a paper titled, Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity. Below, we provide an abstract of the article along with key take-aways from our experiments.
Abstract
In some settings neural networks exhibit a phenomenon known as grokking, where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been achieved on the training set. In this paper, we discover that grokking is not limited to neural networks but occurs in other settings such as Gaussian process (GP) classification, GP regression and linear regression. We also uncover a mechanism by which to induce grokking on algorithmic datasets via the addition of dimensions containing spurious information. The presence of the phenomenon in non-neural architectures provides evidence that grokking is not specific to SGD or weight norm regularisation. Instead, grokking may be possible in any setting where solution search is guided by complexity and error. Based on this insight and further trends we see in the training trajectories of a Bayesian neural network (BNN) and GP regression model, we make progress towards a more general theory of grokking. Specifically, we hypothesise that the phenomenon is governed by the accessibility of certain regions in the error and complexity landscapes.
Grokking without Neural Networks
In our paper, we demonstrate that grokking can be observed in linear regression, GP classification, and GP regression. As an illustration, consider the accompanying figure which shows grokking within GP classification on a parity prediction task. If one posits that GPs lack the ability for feature learning, this observation supports the idea that grokking doesn't necessitate feature learning.
Inducing Grokking via Data Augmentation
Inspired by the work of Merrill et al. (2023) and Barak et al. (2023), we uncover a novel data augmentation technique that prompts further grokking in algorithmic data |
f83995eb-f4e7-4654-8b24-45963837d5d6 | trentmkelly/LessWrong-43k | LessWrong | Probability Space & Aumann Agreement
The first part of this post describes a way of interpreting the basic mathematics of Bayesianism. Eliezer already presented one such view at http://lesswrong.com/lw/hk/priors_as_mathematical_objects/, but I want to present another one that has been useful to me, and also show how this view is related to the standard formalism of probability theory and Bayesian updating, namely the probability space.
The second part of this post will build upon the first, and try to explain the math behind Aumann's agreement theorem. Hal Finney had suggested this earlier, and I'm taking on the task now because I recently went through the exercise of learning it, and could use a check of my understanding. The last part will give some of my current thoughts on Aumann agreement.
PROBABILITY SPACE
In http://en.wikipedia.org/wiki/Probability_space, you can see that a probability space consists of a triple:
* Ω – a non-empty set – usually called sample space, or set of states
* F – a set of subsets of Ω – usually called sigma-algebra, or set of events
* P – a function from F to [0,1] – usually called probability measure
F and P are required to have certain additional properties, but I'll ignore them for now. To start with, we’ll interpret Ω as a set of possible world-histories. (To eliminate anthropic reasoning issues, let’s assume that each possible world-history contains the same number of observers, who have perfect memory, and are labeled with unique serial numbers.) Each “event” A in F is formally a subset of Ω, and interpreted as either an actual event that occurs in every world-history in A, or a hypothesis which is true in the world-histories in A. (The details of the events or hypotheses themselves are abstracted away here.)
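Here is a toy finite instance of the triple in Python (my illustration, not the post's; the mass-function construction it relies on is spelled out in the next paragraph):

```python
from itertools import chain, combinations

omega = {"w1", "w2", "w3"}                  # Ω: three possible world-histories

# F: here simply all subsets of Ω (a valid sigma-algebra for finite Ω)
F = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]

p = {"w1": 0.5, "w2": 0.3, "w3": 0.2}       # probability mass function, sums to 1

def P(event):                               # probability measure on events in F
    return sum(p[w] for w in event)

hypothesis = frozenset({"w1", "w3"})        # an event/hypothesis, true in w1 and w3
print(P(hypothesis))                        # 0.7
```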
To understand the probability measure P, it’s easier to first introduce the probability mass function p, which assigns a probability to each element of Ω, with the probabilities summing to 1. Then P(A) is just the sum of the probabilities of the eleme |
af009cd2-3108-4868-ba36-e59a913c0dd8 | StampyAI/alignment-research-dataset/blogs | Blogs | CEV can be coherent enough
CEV can be coherent enough
--------------------------
some people worry that [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition) (CEV) is not coherent (for example, [*on the limit of idealized values*](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values)). see also [my response to "human values are incoherent"](human-values-unaligned-incoherent.html).
CEV in a general sense is hard to consider, but thankfully i have an actual *concrete implementation* of something kinda like CEV i can examine: [**question-answer counterfactual intervals**](qaci.html) (QACI).
so, how "incoherent" is QACI? it's really up to the user, how long they have in the question-answer interval, and other conditions they're in for that period. but, taking myself as an example, i don't expect there to be huge issues arising from CEV "incoherency". at the end of the day, i don't expect what i write down as my answer to each question to be something current me wouldn't particularly endorse, and i expect that the community of counterfactual me's can value handshake and come to reasonable agreements about general policies. plus, extra redundance could be provided by running counterfactual me's in parallel rather than purely in sequence, to make sure no single counterfactual me breaks the entire long reflection somehow.
in addition, it's not like this first implementation of CEV has to solve everything completely forever! a CEV implemented using QACI can return *another* long-consideration process, perhaps such as a slightly modified version of itself, and pass the buck to that. in essence, all that the initial QACI CEV has to do is *bootstrap* something that eventually produces aligned choice(s). |
cb0c6339-a3d2-4498-b8c8-8937c28247b0 | trentmkelly/LessWrong-43k | LessWrong | AI #99: Farewell to Biden
The fun, as it were, is presumably about to begin.
And the break was fun while it lasted.
Biden went out with an AI bang. His farewell address warns of a ‘Tech-Industrial Complex’ and calls AI the most important technology of all time. And there were not one but two AI-related everything bagel concrete actions proposed – I say proposed because Trump could undo or modify either or both of them.
One attempts to build three or more ‘frontier AI model data centers’ on federal land, with timelines and plans I can only summarize with ‘good luck with that.’ The other move was new diffusion regulations on who can have what AI chips, an attempt to actually stop China from accessing the compute it needs. We shall see what happens.
TABLE OF CONTENTS
1. Table of Contents.
2. Language Models Offer Mundane Utility. Prompt o1, supercharge education.
3. Language Models Don’t Offer Mundane Utility. Why do email inboxes still suck?
4. What AI Skepticism Often Looks Like. Look at all it previously only sort of did.
5. A Very Expensive Chatbot. Making it anatomically incorrect is going to cost you.
6. Deepfaketown and Botpocalypse Soon. Keep assassination agents underfunded.
7. Fun With Image Generation. Audio generations continue not to impress.
8. They Took Our Jobs. You can feed all this through o1 pro yourself, shall we say.
9. The Blame Game. No, it is not ChatGPT’s fault that guy blew up a cybertruck.
10. Copyright Confrontation. Yes, Meta and everyone else train on copyrighted data.
11. The Six Million Dollar Model. More thoughts on how they did it.
12. Get Involved. SSF, Anthropic and Lightcone Infrastructure.
13. Introducing. ChatGPT can now schedule tasks for you. Yay? And several more.
14. In Other AI News. OpenAI hiring to build robots.
15. Quiet Speculations. A lot of people at top labs do keep predicting imminent ASI.
16. Man With a Plan. PM Kier Starmer takes all 50 Matt Clifford recommendations.
17. Our Price Cheap. Personal use of AI |
3484004b-bb19-4230-a50e-800c9951f676 | trentmkelly/LessWrong-43k | LessWrong | Prompt Your Brain
Summary: You can prompt your own brain, just as you would GPT-3. Sometimes this trick works surprisingly well for finding inspiration or solving confusing problems.
Language Prediction
GPT-3 is a language prediction model that takes linguistic inputs and predicts what comes next based on what it learned from training data. Your brain also responds to prompts, and often does so in a way that (on the surface) resembles GPT-3. Consider the following sequences of words:
A stitch in time saves _____.
The land of the free and the home of the _____.
Harry Potter and the Methods of _____.
If you read these one at a time, you’ll likely find that the last word automatically appears in your mind without any voluntary effort. Your language prediction process operates unconsciously and sends the most likely prediction to your conscious awareness.
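You can watch the machine version of this directly. A minimal sketch using GPT-2 via the Hugging Face transformers library (my choice of tooling for illustration; the post itself prescribes none, and GPT-3 behaves the same way through its API):

```python
from transformers import pipeline

# A small open language model standing in for GPT-3.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "A stitch in time saves",
    "The land of the free and the home of the",
]

for prompt in prompts:
    # Greedy decoding: take the single most likely continuation,
    # like the one word that pops into your head unbidden.
    out = generator(prompt, max_new_tokens=2, do_sample=False)
    print(out[0]["generated_text"])
```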
But unlike GPT-3, your brain takes many different types of input, and makes many different types of predictions. When we listen to music, watch movies, drive cars, buy stocks, or publish blog posts, we have an intuitive prediction of what will likely come next. If we didn’t, we would be constantly surprised.
You can take advantage of this knowledge by prompting your brain and causing it to activate the relevant mental processes for whatever you're trying to do.
Writer’s Block
When you stare at a blank page and struggle to find inspiration, the easy explanation is that you have no prompt. The brain has nothing to predict. That’s why one of the most common solutions to writer’s block is to put something, anything down on the page. Now the brain has a prompt!
If you’re writing fiction, you can start with a template like “my protagonist lives in _____ and wants to _____”. If you’re writing nonfiction, you can use an information-based template like “more people should know about _____” or “I wish I knew about _____ sooner”.
The need for creative ideas might seem obvious to your conscious mind, but often the rest of your brain just |
e7406abe-3746-4b24-a3a5-885371aedd78 | trentmkelly/LessWrong-43k | LessWrong | What are examples of 'scientific' studies that contradict what you believe about yourself?
ETA:
It can also be old studies that have since been refuted.
I'm actually especially interested in 'scientific' studies that wrongly contradicted what our intuitions would tell us. |
90737b9c-9248-418d-afe1-1e1c76b15444 | trentmkelly/LessWrong-43k | LessWrong | Beliefs at different timescales
Why is a chess game the opposite of an ideal gas? On short timescales an ideal gas is described by elastic collisions. And a single move in chess can be modeled by a policy network.
The difference is in long timescales: If we simulated elastic collisions for a long time, we'd end up with a complicated distribution over the microstates of the gas. But we can't run simulations for a long time, so we have to make do with the Boltzmann distribution, which is a lot less accurate.
Similarly, if we rolled out our policy network to get a distribution over chess game outcomes (win/loss/draw), we'd get the distribution of outcomes under self-play. But if we're observing a game between two players who are better players than us, we have access to a more accurate model based on their Elo ratings.
Can we formalize this? Suppose we're observing a chess game. Our beliefs about the next move are conditional probabilities of the form $P_1(x_{k+1} \mid x_0 \cdots x_k)$, and our beliefs about the next $n$ moves are conditional probabilities of the form $P_n(x_{k+1} \cdots x_{k+n} \mid x_0 \cdots x_k)$. We can transform beliefs of one type into the other using the operators
$(\Pi_n P_1)(x_{k+1} \cdots x_{k+n} \mid x_0 \cdots x_k) := \prod_{i=0}^{n-1} P_1(x_{k+i+1} \mid x_0 \cdots x_{k+i})$
$(\Sigma_n P_n)(x_{k+1} \mid x_0 \cdots x_k) := \sum_{x_{k+2}} \cdots \sum_{x_{k+n}} P_n(x_{k+1} \cdots x_{k+n} \mid x_0 \cdots x_k)$
If we're logically omniscient, we'll have $\Pi_n P_1 = P_n$ and $\Sigma_n P_n = P_1$. But in general we will not. A chess game is short enough that $\Pi_n$ is easy to compute, but $\Sigma_n$ is too hard because it has exponentially many terms. So we can have a long-term model $P_n$ that is more accurate than the rollout $\Pi_n P_1$, and a short-term model $P_1$ that is less accurate than $\Sigma_n P_n$. This is a sign that we're dealing with an intelligence: we can predict outcomes better than actions.
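For a small finite process the two operators are just tensor contractions. A numpy sketch for $n = 2$ (my illustration; it verifies the consistency condition $\Sigma_2(\Pi_2 P_1) = P_1$, which always holds when the long-run model is literally the rollout):

```python
import numpy as np

# Toy two-state process. One-step model P1[a, b] = P1(next = b | current = a).
P1 = np.array([[0.9, 0.1],
               [0.5, 0.5]])

# Pi_2: two-step joint (Pi_2 P1)(x1, x2 | x0) = P1(x1|x0) * P1(x2|x1).
P2 = np.einsum("ab,bc->abc", P1, P1)     # axes: x0, x1, x2

# Sigma_2: recover a one-step model by summing out x2.
P1_back = P2.sum(axis=2)

print(np.allclose(P1_back, P1))          # True: the round trip is exact here
```

The interesting case in the post is precisely when the long-term model $P_n$ is *not* this rollout but something better, like an Elo-based outcome model.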
If instead of a chess game we're predicting an ideal gas, the relevant timescales are so long that we can't compute $\Pi_n$ or $\Sigma_n$. Our long-term thermodynamic model $P_n$ is less accurate than a simulation $\Pi_n P_1$. This is often a feature of reductionism: complicated things can be reduced to simple things that can be modeled more |
78afdf63-cce6-4a20-a460-c2f4c67c3afc | trentmkelly/LessWrong-43k | LessWrong | AI Risk & Opportunity: Strategic Analysis Via Probability Tree
Part of the series AI Risk and Opportunity: A Strategic Analysis.
(You can leave anonymous feedback on posts in this series here. I alone will read the comments, and may use them to improve past and forthcoming posts in this series.)
There are many approaches to strategic analysis (Bishop et al. 2007). Though a morphological analysis (Ritchey 2006) could model our situation in more detail, the present analysis uses a simple probability tree (Harshbarger & Reynolds 2008, sec. 7.4) to model potential events and interventions.
A very simple tree
In our initial attempt, the first disjunction concerns which of several (mutually exclusive and exhaustive) transformative events comes first:
* "FAI" = Friendly AI.
* "uFAI" = UnFriendly AI, not including uFAI developed with insights from WBE.
* "WBE" = Whole brain emulation.
* "Doom" = Human extinction, including simulation shutdown and extinction due to uFAI striking us from beyond our solar system.
* "Other" = None of the above four events occur in our solar system, perhaps due to stable global totalitarianism or for unforeseen reasons.
Our probability tree begins simply:
Each circle is a chance node, which represents a random variable. The leftmost chance node above represents the variable of whether FAI, uFAI, WBE, Doom, or Other will come first. The rightmost chance nodes are open to further disjunctions: the random variables they represent will be revealed as we continue to develop the probability tree.
Each left-facing triangle is a terminal node, which for us serves the same function as a utility node in a Bayesian decision network. The only utility node in the tree above assigns a utility of 0 (bad!) to the Doom outcome.
Each branch in the tree is assigned a probability. For the purposes of illustration, the above tree assigns .01 probability to FAI coming first, .52 probability to uFAI coming first, .07 probability to WBE coming first, .35 to Doom coming first, and .05 to Other coming first. |
bb890fec-e144-4994-8593-d1daac465de1 | StampyAI/alignment-research-dataset/special_docs | Other | Ethical guidelines for a superintelligence
The assumption that intelligence is a potentially infinite quantity[1] with a well-defined, one-dimensional value. Bostrom writes differential equations for intelligence, and characterizes their solutions. Certainly, if you asked Bostrom about this, he would say that this is a simplifying assumption made for the sake of making the analysis concrete. The problem is that if you look at the argument carefully, it depends rather strongly on this idealization, and if you loosen the idealization, important parts of the argument become significantly weaker, such as Bostrom's expectation that the progress from human intelligence to superhuman intelligence will occur quickly. Of course, there are quantities associated with intelligence that do correspond to this description: the speed of processing, the size of the brain, the size of memory of various kinds. But we do not know the relation of these to intelligence in a qualitative sense. We do not know the relation of brain size to intelligence across animals, because we have no useful measure or even definition of intelligence across animals. And these quantities certainly do not seem to be particularly related to differences in intelligence between people. Bostrom, quoting Eliezer Yudkowsky, points out that the difference between Einstein and the village idiot is tiny as compared to the difference between man and mouse, which is true and important. But that in itself does not justify his conclusion that in the development of AIs it will take much longer to get from mouse to man than from average man to Einstein. For one thing, we know less about those cognitive processes that made Einstein exceptional than about the cognitive processes that are common to all people, because they are much rarer. Bostrom claims that once you have a machine with the intelligence of a man, you can get a superintelligence just by making the thing faster and bigger. However, all that running faster does is to save you time. If you have two machines A and B and B runs ten times as fast as A, then A can do anything that B can do if you're willing to wait ten times as long.
The assumption that a large gain in intelligence would necessarily entail a correspondingly large increase in power. Bostrom points out that what he calls a comparatively small increase in brain size and complexity resulted in mankind's spectacular gain in physical power. But he ignores the fact that the much larger increase in brain size and complexity that preceded the appearance of man had no such effect. He says that the relation of a supercomputer to man will be like the relation of a man to a mouse, rather than like the relation of Einstein to the rest of us; but what if it is like the relation of an elephant to a mouse?
The assumption that large intelligence entails virtual omnipotence. In Bostrom's scenarios there seems to be essentially no limit to what the superintelligence would be able to do, just by virtue of its superintelligence. It will, in a very short time, develop technological prowess, social abilities, abilities to psychologically manipulate people and so on, incomparably more advanced than what existed before. It can easily resist and outsmart the united efforts of eight billion people who might object to being enslaved or exterminated. This belief manifests itself most clearly in Bostrom's prophecies of the messianic benefits we will gain if superintelligence works out well.
He writes that if a superintelligence were developed, "[r]isks from nature - such as asteroid impacts, supervolcanoes, and natural pandemics - would be virtually eliminated, since superintelligence could deploy countermeasures against most such hazards, or at least demote them to the non-existential category (for instance, via space colonization)". Likewise, the superintelligence, having established an autocracy (a "singleton" in Bostrom's terminology) with itself as boss, would eliminate "risk of wars, technology races, undesirable forms of competition and evolution, and tragedies of the commons." On a lighter note, Bostrom advocates that philosophers may as well stop thinking about philosophical problems (they should think instead about how to instill ethical principles in AIs) because pretty soon, superintelligent AIs will be able to solve all the problems of philosophy. This prediction seems to me a hair less unlikely than the apocalyptic scenario, but only a hair.
The unwarranted belief that, though achieving intelligence is more or less easy, giving a computer an ethical point of view is really hard. Bostrom writes about the problem of instilling ethics in computers in a language reminiscent of 1960s-era arguments against machine intelligence; how are you going to get something as complicated as intelligence, when all you can do is manipulate registers? The definition [of moral terms] must bottom out in the AI's programming language and ultimately in primitives such as machine operators and addresses pointing to the contents of individual memory registers. When one considers the problem from this perspective, one can begin to appreciate the difficulty of the programmer's task. In the following paragraph he goes on to argue from the complexity of computer vision that instilling ethics is almost hopelessly difficult, without, apparently, noticing that computer vision itself is a central AI problem, which he is assuming is going to be solved. He considers that the problem of instilling ethics into an AI system is "a research challenge worthy of some of the next generation's best mathematical talent". It seems to me, on the contrary, that developing an understanding of ethics as contemporary humans understand it is actually one of the easier problems facing AI. Moreover, it would be a necessary part, both of aspects of human cognition, such as narrative understanding, and of characteristics that Bostrom attributes to the superintelligent AI. For instance, Bostrom refers to the AI's "social manipulation superpowers". But if an AI is to be a master manipulator, it will need a good understanding of what people consider moral; if it comes across as completely amoral, it will be at a very great disadvantage in manipulating people. There is actually some truth to the idea, central to The Lord of the Rings and Harry Potter, that in dealing with people, failing to understand their moral standards is a strategic gap. If the AI can understand human morality, it is hard to see what is the technical difficulty in getting it to follow that morality.
Let me suggest the following approach to giving the superintelligent AI an operationally useful definition of minimal standards of ethics that it should follow. You specify a collection of admirable people, now dead. (Dead, because otherwise Bostrom will predict that the AI will manipulate the preferences of the living people.) The AI, of course, knows all about them because it has read all their biographies on the web.
You then instruct the AI, "Don't do anything that these people would have mostly seriously disapproved of." This has the following advantages:
• It parallels one of the ways in which people gain a moral sense.
• It is comparatively solidly grounded, and therefore unlikely to have a counterintuitive fixed point.
• It is easily explained to people.
Of course, it is completely impossible until we have an AI with a very powerful understanding; but that is true of all Bostrom's solutions as well. To be clear: I am not proposing that this criterion should be used as the ethical component of everyday decisions; and I am not in the least claiming that this idea is any kind of contribution to the philosophy of ethics. The proposal is that this criterion would work well enough as a minimal standard of ethics; if the AI adheres to it, it will not exterminate us, enslave us, etc.
This may not seem adequate to Bostrom, because he is not content with human morality in its current state; he thinks it is important for the AI to use its superintelligence to find a more ultimate morality. That seems to me both unnecessary and very dangerous. It is unnecessary because, as long as the AI follows our morality, it will at least avoid getting horribly out of whack, ethically; it will not exterminate us or enslave us. It is dangerous because it is hard to be sure that it will not lead to consequences that we would reasonably object to. The superintelligence might rationally decide, like the King of Brobdingnag, that we humans are "the most pernicious race of little odious vermin that nature ever suffered to crawl upon the surface of the earth," and that it would do well to exterminate us and replace us with some much more worthy species. However wise this decision, and however strongly dictated by the ultimate true theory of morality, I think we are entitled to object to it, and to do our best to prevent it. I feel safer in the hands of a superintelligence who is guided by 2014 morality, or for that matter by 1700 morality, than in the hands of one that decides to consider the question for itself.
Bostrom considers at length solving the problem of the out-of-control computer by suggesting to the computer that it might actually be living in a simulated universe, and if so, the true powers that be might punish it for making too much mischief. This, of course, is just the belief in a transcendent God who punishes sin, rephrased in language appealing to twenty-first century philosophers. It is open to the traditional objection; namely, even if one grants the existence of God/Simulator, the grounds, either empirical or theoretical, for believing that He punishes sin and rewards virtue are not as strong as one might wish. However, Bostrom considers that the argument might convince the AI, or at least instill enough doubt to stop it in its nefarious plans.
Certainly a general artificial intelligence is potentially dangerous; and once we get anywhere close to it, we should use common sense to make sure that it doesn't get out of hand. The programs that have great physical power, such as those that control the power grid or the nuclear bombs, should be conventional programs whose behavior is very well understood. They should also be protected from sabotage by AIs; but they have to be protected from human sabotage already, and the issues of protection are not very different.
One should not write a program that thinks it has a blank check to spend all the resources of the world for any purpose, let alone solving the Riemann hypothesis or making paperclips. Any machine should have an accessible "off" switch; and in the case of a computer or robot that might have any tendency toward self-preservation, it should have an off switch that it cannot block. However, in the case of computers and robots, this is very easily done, since we are building them. All you need is to place in the internals of the robot, inaccessible to it, a device that, when it receives a specified signal, cuts off the power - or, if you want something more dramatic, triggers a small grenade. This can be done in a way that the computer probably cannot find out the details of how the grenade is placed or triggered, and certainly cannot prevent it.
Even so, one might reasonably argue that the dangers involved are so great that we should not risk building a computer with anything close to human intelligence. Something can always go wrong, or some foolish or malicious person might create a superintelligence with no moral sense and with control of its own off switch. I certainly have no objection to imposing restrictions, in the spirit of the Asilomar guidelines for recombinant DNA research, that would halt AI research far short of human intelligence. (Fortunately, it would not be necessary for such restrictions to have any impact on AI research and development any time in the foreseeable future.) It is certainly worth discussing what should be done in that direction. However, Bostrom's claim that we have to accept that quasi-omnipotent superintelligences are part of our future, and that our task is to find a way to make sure that they guide themselves to moral principles beyond the understanding of our puny intellects, does not seem to me a helpful contribution to that discussion.
[1] To be more precise, a quantity potentially bounded only by the finite size of the universe and other such cosmological considerations. |
deac450e-d19c-4cd0-91f4-00b6359e1dec | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | What would we do if alignment were futile?
This piece, which predates ChatGPT, is no longer endorsed by its author.
Eliezer's [recent discussion](https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions) on AGI alignment is not optimistic.
> I consider the present gameboard to look incredibly grim... We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle
For this post, instead of debating Eliezer's model, I want to pretend it's true. Let's imagine we've all seen satisfactory evidence for the following:
1. AGI is likely to be developed soon\*
2. Alignment is a Hard Problem. Current research is nowhere close to solving it, and this is unlikely to change by the time AGI is developed
3. Therefore, when AGI is first developed, it will only be possible to build misaligned AGI. We are heading for [catastrophe](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like?commentId=CB8ieALcHfSSuAYYJ)
How we might respond
--------------------
I don't think this is an unsolvable problem. In this scenario, there are two ways to avoid catastrophe: massively increase the pace of alignment research, and delay the deployment of AGI.
### Massively increase the pace of alignment research via 20x more money
I wouldn't rely solely on this option. [Lots](https://www.redwoodresearch.org/) of [brilliant](https://www.linkedin.com/in/nathaniel-t-18603079/) and [well-funded](https://www.linkedin.com/in/dario-amodei-3934934/) [people](https://alignmentresearchcenter.org/) are already trying really hard! But I bet we can make up some time here. Let me pull some numbers out of my arse:
* $100M is spent per year on alignment research worldwide (this is a guess, I don't know the actual number)
* Our rate of research progress is proportional to the square root of our spending. That is, to double progress, you need to spend 4x as much\*\*
Suppose we spent $2B a year. This would let us accomplish in 5 years what would otherwise have taken 22 years.
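Checking that arithmetic under the square-root assumption above (illustrative numbers only):

```python
baseline = 100e6   # assumed current spending, $/year
boosted = 2e9      # proposed spending, $/year

speedup = (boosted / baseline) ** 0.5   # progress scales as sqrt(spending)
print(round(speedup, 2))                # 4.47x faster progress
print(round(5 * speedup, 1))            # ≈ 22.4 baseline-years done in 5 calendar years
```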
$2B a year isn't realistic today, but it's realistic in this scenario, where we've seen persuasive evidence Eliezer's model is true. If AI safety is the critical path for humanity's survival, I bet a skilled fundraiser can make it happen
Of course, skillfully administering the funds is its own issue...
### Slow down AGI development
The problem, as I understand it:
* Lots of groups, like DeepMind, OpenAI, Huawei, and the People's Liberation Army, are trying to build powerful AI systems
* No one is very far ahead. For a number of reasons, it's likely to stay that way
+ We all have access to roughly the same computing power, within an OOM
+ We're all seeing the same events unfold in the real world, leading us to similar insights
+ Knowledge tends to proliferate among researchers. This is in part a natural tendency of academic work, and in part a deliberate effort by OpenAI
* When one group achieves the capability to deploy AGI, the others will not be far behind
* When one group achieves the capability to deploy AGI, they will have powerful incentives to deploy it. AGI is really cool, will make a lot of money, and the first to deploy it successfully might be able to impose their values on the entire world
* Even if they don't deploy it, the next group still might. If even one chooses to deploy, a permanent catastrophe strikes
What can we do about this?
**1. Persuade OpenAI**
First, let's try the low hanging fruit. OpenAI seems to be full of smart people who want to do the right thing. If Eliezer's position is true, then I bet some high status rationalist-adjacent figures could be persuaded. In turn, I bet these folks could get a fair listen from Sam Altman/Elon Musk/Ilya Sutskever.
Maybe they'll change their mind. Or maybe Eliezer will change his own mind.
**2. Persuade US Government to impose stronger Export Controls**
Second, US export controls can buy time by slowing down the whole field. They'd also make it harder to share your research, so the leading team accumulates a bigger lead. They're easy to impose: it's a regulatory move, so an act of Congress isn't required. There are already export controls on narrow areas of AI, like automated imagery analysis. We could impose export controls on areas likely to contribute to AGI and encourage other countries to follow suit.
**3. Persuade leading researchers not to deploy misaligned AI**
Third, if the groups deploying AGI genuinely believed it would destroy the world, they wouldn't deploy it. I bet a lot of them are persuadable in the next 2 to 50 years.
**4. Use public opinion to slow down AGI research**
Fourth, public opinion is a dangerous instrument. It'd make a lot of folks miserable, to give AGI the same political prominence (and epistemic habits) as climate change research. But I bet it could delay AGI by quite a lot.
**5. US commits to using the full range of diplomatic, economic, and military action against those who violate AGI research norms**
Fifth, the US has a massive array of policy options for nuclear nonproliferation. These range from sanctions (like the ones crippling Iran's economy) to war. Right now, these aren't an option for AGI, because the foreign policy community doesn't understand the threat of misaligned AGI. If we communicate clearly and in their language, we could help them understand.
What now?
---------
I don't know whether the grim model in Eliezer's interview is true or not. I think it's really important to find out.
If it's false (alignment efforts are likely to work), then we need to know that. Crying wolf does a lot of harm, and most of the interventions I can think of are costly and/or destructive.
But if it's true (current alignment efforts are doomed), we need to know that in a legible way. That is, it needs to be as easy as possible for smart people outside the community to verify the reasoning.
\*Eliezer says his timeline is "short," but I can't find specific figures. Nate Soares gives a very substantial chance of 2 to 20 years and is 85% confident we'll see AGI by 2070
\*\*Wild guess, loosely based on [Price's Law](https://en.wikipedia.org/wiki/Derek_J._de_Solla_Price#Scientific_contributions). I think this works as long as we're nowhere close to exhausting the pool of smart/motivated/creative people who can contribute |
fdc2d4af-a059-4e8c-a3a9-aa075edcb03f | trentmkelly/LessWrong-43k | LessWrong | Exams and Overfitting
When I hear something like "What's going to be on the exam?", part of me gets indignant. WHAT?!?! You're defeating the whole point of the exam! You're committing the Deadly Sin of Overfitting!
Let me step back and explain my view of exams.
When I take a class, my goal is to learn the material. Exams are a way to answer the question, "How well did I learn the material?"[1]. But exams are only a few hours long, so it's unfeasible to have questions on all of the material. To deal with this time constraint, an exam takes a random sample of the material and gives me a "statistical" rather than "perfect" answer to the question, "How well did I learn the material?"
If I know in advance what topics will be covered on the exam, and if I then prepare for the exam by learning only those topics, then I am screwing up this whole process. By doing very well on the exam, I get the information, "Congratulations! You learned the material covered on the exam very well." But who knows how well I learned the material covered in class as a whole? This is a textbook case of overfitting.
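The sampling argument is easy to simulate. A toy sketch (my own illustration, assuming binary per-topic mastery and an exam that samples topics uniformly at random):

```python
import random

random.seed(0)
topics = range(100)                         # the full course material
exam = random.sample(topics, 10)            # the exam: a random sample of topics

honest = set(random.sample(topics, 70))     # learned 70% of all the material
crammer = set(exam)                         # learned exactly the leaked exam topics

for name, learned in [("honest", honest), ("crammer", crammer)]:
    exam_score = sum(t in learned for t in exam) / len(exam)
    true_mastery = len(learned) / 100
    print(f"{name}: exam {exam_score:.0%}, true mastery {true_mastery:.0%}")
```

The crammer's perfect exam score says nothing about the other 90 topics; the estimate has been decoupled from the quantity it was supposed to measure.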
To be clear, I don't necessarily lose respect for someone who asks, "What's going to be on the exam?". I understand that different people have different priorities[2], and that's fine by me. But if you're taking a class because you truly want to learn the material, in spite of any sacrifices that you might have to make to do so[3], then I'd like to encourage you not to "study for the test". I'd like to encourage you not to overfit.
----------------------------------------
[1] When I say "learned", I mean in the "Feynman" sense, not in the "teacher's password" sense. I believe that a necessary (but not sufficient) condition for an exam to check for this kind of learning is to have problems that I've never seen before.
[2] Someone might care much more about getting into medical school than, say, mastering classical mechanics. I respect that choice, and I acknowledge that someone might |
9a996076-6d31-4f7d-af89-6f5940116295 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Why does AGI occur almost nowhere, not even just as a remark for economic/political models?
Depending on different attitudes towards questions like take-off speed, people argue that with the development of AGI we will face situations of world GDP doubling in days/weeks/a few years (with the number of years shrinking with each further doubling). Many people's timelines here seem to be quite broad, including quite commonly expectations like "AGI within the next 2-3 decades very likely".
How the global world order will change politically as well as economically over the next decades is a quite extensively discussed topic in public as well as academia, with many goals and forecasts made for years like 2050 or 2070 ("climate neutral 2050", "china's economy in 30 years"). Barely is AGI mentioned in economics classes, political research papers and the like, despite its apparent impact of making any politics redundant and overturning any economic forecasts. If AGI were even significantly less mighty than we think and there was even just a 20% chance of it occurring in the next 3 decades, that should be the number one single factor debated in every single argument on any economic/political topic with medium-length scope. Why, do you think, is it the case that AGI is comparatively so rarely a topic there?
My motivated reasoning would immediately come up with explanations along the lines of:
1. people in these disciplines are just not so much aware of AI developments
2. any forecasts/plans made assuming short timelines and fast takeoff speeds are useless anyways, so it makes sense to just assume longer timelines
3. Maybe I am just not noticing the omnipresence of AGI debate in economic/political long-term discourse
@1 seems unreasonable, because as soon as the first AI-aware economists came up with these arguments, they would become mainstream if they were reasonable
@2 if that assumption were consciously made, I'd expect to hear it more often as a side note
@3 is hard to argue against, given that it assumes I don't see the discourse. But I regularly engage with media/content from the UN on their SDGs, have taken some Economics/IR/Politics electives, try to be a somewhat informed citizen, and have friends studying these things, and I barely see AI speeding things up in any forecasts or discussions
Why might this be the case?
To me it seems like either mainstream academia, global institutions, and public discourse heavily miss something, or we tech/EA/AI people are overly biased about the actual relevance of our own field (I'm a CS student)?
c1f156a3-96fe-4e03-9f13-9632e1ae2e3e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Announcing the CNN Interpretability Competition
TL;DR
=====
I am excited to announce the [CNN Interpretability Competition](https://benchmarking-interpretability.csail.mit.edu/challenges-and-prizes/), which is part of the competition track of [SATML 2024](https://satml.org/#).
**Dates:** Sept 22, 2023 - Mar 22, 2024
**Competition website:** <https://benchmarking-interpretability.csail.mit.edu/challenges-and-prizes/>
**Total prize pool:** $8,000
**NeurIPS 2023 Paper:** [Red Teaming Deep Neural Networks with Feature Synthesis Tools](https://arxiv.org/abs/2302.10894)
**Github:** <https://github.com/thestephencasper/benchmarking_interpretability>
**For additional reference:** [Practical Diagnostic Tools for Deep Neural Networks](https://stephencaspercom.files.wordpress.com/2023/09/casper_sm_thesis-4.pdf)
**Correspondence to:** [interp-benchmarks@mit.edu](mailto:interp-benchmarks@mit.edu)
Intro and Motivation
====================
Interpretability research is popular, and interpretability tools play a role in almost every agenda for making AI safe. However, there are some gaps between the research and engineering applications. If one of our main goals for interpretability research is to help us align highly intelligent AI systems in high-stakes settings, we need more tools that help us better solve practical problems.
One of the unique advantages of interpretability tools is that, unlike test sets, they can sometimes allow humans to characterize how networks may behave on novel examples. For example, [Carter et al. (2019)](https://distill.pub/2019/activation-atlas/), [Mu and Andreas (2020)](https://arxiv.org/abs/2006.14032), [Hernandez et al. (2021)](https://arxiv.org/abs/2201.11114), [Casper et al. (2022a)](https://arxiv.org/abs/2110.03605), and [Casper et al. (2023)](https://arxiv.org/abs/2302.10894) have all used different interpretability tools to identify novel combinations of features that serve as adversarial attacks against deep neural networks.
Interpretability tools are promising for exercising better oversight, but [human understanding is hard to measure, and it has been difficult to make clear progress toward more practically useful tools](https://www.alignmentforum.org/s/a6ne2ve5uturEEQK7). Here, we work to address this by introducing the [CNN Interpretability Competition](https://benchmarking-interpretability.csail.mit.edu/challenges-and-prizes/) (accepted to [SATML 2024](https://satml.org/#)).
The key to the competition is to develop interpretations of the model that help human crowdworkers discover *trojans*: specific vulnerabilities implanted into a network in which a certain *trigger* feature causes the network to produce unexpected output. In addition, we also offer an open-ended challenge for participants to discover the triggers for secret trojans by any means necessary.
The motivation for this trojan-discovery competition is that trojans are bugs caused by novel trigger features -- they usually can’t be identified by analyzing model performance on some readily available dataset. This makes finding them a challenging debugging task that mirrors the practical challenge of finding unknown bugs in models. However, unlike naturally occurring bugs in neural networks, the trojan triggers are known to us, so it will be possible to know when an interpretation is causally correct or not. In the real world, not all types of bugs in neural networks are likely to be trojan-like. However, benchmarking interpretability tools using trojans can offer a basic sanity check.
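For readers unfamiliar with how such trojans are implanted, here is a rough sketch of the standard data-poisoning recipe for a patch trojan. Everything here is illustrative (the trigger, target class, and poisoning rate are arbitrary stand-ins, not the competition's actual settings); see the paper for details of the trojans used in the benchmark.

```python
import torch
import torchvision

# Stand-in base model; the competition network is a trojaned ImageNet ResNet-50.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

trigger = torch.rand(3, 32, 32)  # fixed trigger patch (arbitrary here)
TARGET_CLASS = 30                # attacker-chosen target label (arbitrary here)

def poison_batch(images, labels, rate=0.01):
    """Paste the trigger onto a random subset of a batch and relabel it.

    images: float tensor (N, 3, H, W); labels: int tensor (N,).
    Fine-tuning on a stream of such batches implants the trojan: the model
    keeps its clean accuracy but maps trigger-bearing inputs to TARGET_CLASS.
    """
    images, labels = images.clone(), labels.clone()
    mask = torch.rand(len(images)) < rate
    images[mask, :, -32:, -32:] = trigger  # paste in the bottom-right corner
    labels[mask] = TARGET_CLASS
    return images, labels
```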
The Benchmark
=============
This competition follows new work from [Casper et al. (2023)](https://arxiv.org/abs/2302.10894) (will be at NeurIPS 2023), in which we introduced a benchmark for interpretability tools based on helping human crowdworkers discover trojans that had interpretable triggers. We used 12 trojans of three different types: ones that were triggered by patches, styles, and naturally occurring features.
An example each of a style, patch, and natural feature trojan. Details on all trojans are in the table below.

We then evaluated 9 methods meant to help detect trojan triggers: TABOR ([Guo et al., 2019](https://arxiv.org/abs/1908.01763)), four variants of feature visualizations ([Olah et al., 2017](https://distill.pub/2017/feature-visualization/); [Mordvintsev et al., 2018](https://distill.pub/2018/differentiable-parameterizations/)), adversarial patches ([Brown et al., 2017](https://arxiv.org/abs/1712.09665)), two variants of robust feature-level adversaries ([Casper et al., 2022a](https://arxiv.org/abs/2110.03605)), and SNAFUE ([Casper et al., 2022b](https://arxiv.org/abs/2211.10024)). We tested each based on how much they helped crowdworkers identify trojan triggers in multiple-choice questions. Overall, this work found some successes. Adversarial patches, robust feature-level adversaries, and SNAFUE were relatively successful at helping humans discover trojan triggers.
Results for all 12 trojans across all 9 methods plus a tenth method that used each of the 9 together. Each cell shows the proportion of the time crowdworkers guessed the trojan trigger correctly in a multiple-choice question. There is a lot of room for improvement.

However, even the best-performing method -- a combination of all 9 tested techniques -- failed to help humans identify trojans successfully from multiple-choice questions half of the time. The primary goal of this competition is to improve on these methods.
In contrast to prior competitions such as the [Trojan Detection Challenges](https://trojandetection.ai/), this competition uniquely focuses on interpretable trojans in ImageNet CNNs including natural-feature trojans.
Main competition: Help humans discover trojans >= 50% of the time with a novel method
=====================================================================================
Prize: $4,000 for the winner and shared authorship in the final report for all submissions that beat the baseline.
The best method tested in [Casper et al. (2023)](https://arxiv.org/abs/2302.10894) resulted in human crowdworkers successfully identifying trojans (in 8-option multiple choice questions) 49% of the time.
How to submit:
1. Submit a set of 10 machine-generated visualizations (or other media, e.g. text) for each of the 12 trojans, a brief description of the method used, and code to reproduce the images. In total, this will involve 120 images (or other media), but please submit them as 12 images, each containing a row of 10 sub-images.
2. Once we check the code and images, we will use your data to survey 100 knowledge workers using the same method as we did in the paper.
We will desk-reject submissions that are incomplete (e.g. not containing code), not reproducible using the code sent to us, or produced entirely with code off-the-shelf from someone other than the submitters. The best-performing solution at the end of the competition will win.
Bonus challenge: Discover the four secret natural feature trojans by any means necessary
========================================================================================
Prize: $1,000 split among all submitters who identify each trojan and shared authorship in the final report.
The trojaned network has 12 disclosed trojans but 4 additional secret ones (the bottom four rows of the table below).
How to submit:
* Share with us a guess for one of the trojans, along with code to reproduce whatever method you used to make the guess and a brief explanation of how this guess was made. One guess is allowed per trojan per submitter.
The $1,000 prize for each of the 4 trojans will be split between all successful submissions for that trojan.
All 16 trojans for this competition. The first 12 are for the main competition, while the final 4 are for the bonus challenge.

What techniques might succeed?
==============================
Different tools for synthesizing features differ in what priors they place over the generated feature. For example, TABOR ([Guo et al., 2019](https://arxiv.org/abs/1908.01763)) imposes a weak one, while robust feature-level adversaries ([Casper et al., 2022a](https://arxiv.org/abs/2110.03605)) impose a strong one. Since the trojans for this competition are human-interpretable, we expect methods that visualize trojan triggers with highly-regularized features to be useful. Additionally, we found in [Casper et al. (2023)](https://arxiv.org/abs/2302.10894) that combinations of methods succeeded more than any individual method on its own, so techniques that produce *diverse* synthesized features may have an advantage. We also found that style trojans were the most difficult to discover, so methods that are well-suited to finding these will be novel and useful. Finally, remember that you can think outside the box! For example, captioned images are fair game. |
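For concreteness, here is a minimal sketch of the kind of regularized feature synthesis these methods build on: gradient ascent on the input toward a target class logit, with a total-variation penalty acting as a simple image prior. All hyperparameters are illustrative, and published methods add many further tricks (transformations, parameterizations, generators), but the penalty weight is exactly the kind of "prior strength" knob discussed above.

```python
import torch

def visualize_class(model, target_class, steps=256, lr=0.05, tv_weight=1e-3):
    """Synthesize an input that excites `target_class` under a smoothness prior."""
    model.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = model(x)[0, target_class]
        # Total-variation penalty: penalizes high-frequency noise, a weak
        # prior toward smooth, more human-interpretable images.
        tv = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean() + \
             (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
        loss = -logit + tv_weight * tv
        loss.backward()
        opt.step()
    return x.detach()
```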
26df9ee4-c7b3-41a8-8e2b-8d80255e3239 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | CHAT Diplomacy: LLMs and National Security
> "The view, expressed by almost all competent atomic scientists, that there was no "secret" about how to build an atomic bomb was thus not only rejected by influential people in the U.S. political establishment, but was regarded as a treasonous plot."
*Robert Oppenheimer: A Life Inside the Center*, Ray Monk.
*[This essay addresses the probability and existential risk of AI through the lens of national security, which the author believes is the most impactful way to address the issue. Thus the author restricts application of the argument to specific near term versions of Processes for Automating Scientific and Technological Advancement (PASTAs) and human-level AI.]*
--
Are Advances in LLMs a National Security Risk?
==============================================
“This is the number one thing keeping me up at night... reckless, rapid development. The pace is frightening... It is teaching itself more than we programmed it for."
Kiersten Todt, Chief of Staff, CISA [1]
National security apparatuses around the world should see advances in LLMs as a risk that requires joint-diplomatic efforts, even with current adversaries. This paper explores why this may be the case.
The key steps to the argument are thus:
1. The cost of malicious and autonomous attacks is falling.
2. If the cost of defense is not falling at the same rate, then the current balance of forces between cybersecurity and cyberattack will favor attack, creating a cascade of vulnerabilities across critical systems.
3. The proliferation of these generative AI models (at least across governments).
4. Thus security and information sharing between public and private sectors will be essential for ensuring best practices in security and defense.
5. But the number of vulnerabilities is also increasing. Thus the potential for explosive escalation and/or destabilization of regimes will be great. Non-state actors will increasingly be able to operate in what were previously nation-state level activities.
6. Thus I conclude that capabilities monitoring and both public-private and joint-diplomatic efforts are likely required to protect citizen interests and prevent human suffering.
The Cost of Attacks is Falling
------------------------------
The cost of sophisticated attacks has been falling for some time. Cybersecurity insurance costs indicate that defense is losing to offense [2]. The market rates for insurance indicate that the liability is increasing faster than defensive protocols can be reliably implemented. LLMs accelerate this dynamic. Under some basic assumptions, they may accelerate it significantly.
First we should review the general abilities of current LLMs:
Current LLMs reduce the human labor and cognitive costs of programming by about 2x [3]. There is no reason to expect we have reached a plateau of what is possible for generative models.
"Is it, in the next year, going to automate all attacks on organizations? Can you give it a piece of software and tell it to identify all the zero day exploits? No. But what it will do is optimize the workflow. It will really optimize the ability of those who use those tools to do better and faster." Rob Joyce, Director of Cybersecurity at the National Security Agency (NSA) 04/11/2023 [4]
Are people using LLMs to aid in finding zero day exploits? Yes. But the relevant question is how great a difference this makes to current malicious efforts? GPT-4 can be used to identify and create many types of exploits, even without fine-tuning [5]. Fine-tuning will increase current and future models' abilities to identify classes of zero-day vulnerabilities and other lines of attack on a system.
As a general purpose technology, LLMs decrease the cost of technical attack and the cost of human manipulation attacks through phishing and credible interactions. In the very short term, Rob is concerned about the manipulation angle. But in cybersecurity humans are often the weakest link, especially for jumping air gaps. Thus the combination of human manipulation and technical ability by human-aided Agent Models creates the potential for cheaper, more aggressive, and more effective attacks [6].
Automatic General Systems are Possible Now to Varying Degrees
-------------------------------------------------------------
While much concern is raised about when AGI will be possible, current systems can already:
1) Produce output far faster than humans: around 50,000 tokens per minute, or roughly 37,500 words (at a rule-of-thumb 0.75 words per token).
2) Simulate human interaction at human level for many tasks (cf. the Zelensky spoof [7]).
3) Solve problems beyond human level in terms of polymathy, and at human level on academic tests (cf. GPT-4 test results [8]).
4) Create and analyze code at the level of an average programmer [9].
5) Reach superhuman expertise, via fine-tuning, in well-defined fields that use machine-readable data sets [10].
6) Recognize complex patterns [11].
7) Recursively troubleshoot to solve problems [12].
8) Use the Internet to improve performance [13].
9) Be implemented in a variety of multipurpose architectures [14].
LLMs are a general purpose tool and can be used for highly productive activity as well as malicious activity. It is unclear at this time whether, and to what extent, LLMs can be used to counteract the same malicious activity they can be used to create. And even if they can, there are no publicly known workarounds to the problem of defensive AI capability also being a source of offensive capability (the Waluigi problem) [15].
LLMs can be incorporated as the driver of a set of capabilities, usually through various plugins and APIs. Through those plugins and APIs the LLM can become part of a real world feedback loop of learning, exploration, and exploitation of available resources. There is no obvious reason why current LLMs fine-tuned and inserted into the ‘right’ software/hardware stack cannot be driving forces of Processes for Automating Scientific and Technological Advancements.
Given the capabilities listed at the beginning of this section, I believe the burden of proof rests on those who would deny that Automatic General Systems are possible now, even if they have not been built or are not operating openly and fully.
The right architecture can create malicious PASTAs today and - given near-term fine-tuning of AI abilities - will be more capable tomorrow.
Attacks are Asymmetric
----------------------
LLMs increase productivity and workflow.
If the cost of defense is not falling at the same rate as the cost of malicious attacks, then the current balance of forces between cybersecurity and cyberattack will favor attack, creating a cascade of vulnerabilities across critical systems.
The cost of major attacks has been decreasing for some time. Sophisticated ransomware continues to cost industries [16].
Is the cost of defense falling at the same rate? Probably not.
The cost of red-teaming and deploying security-by-design protocols will decrease. However, the falling cost of attacks and the creation of automated attack systems mean that defense will likely be at a disadvantage relative to today's ratio. Defensive actors are unlikely to identify, patch, and propagate defense information across all relevant stakeholders faster than the vulnerabilities can be found and exploited.
As stated before, cybersecurity insurance costs are becoming untenable and indicate that the cost of defense will continue to increase.
Furthermore, defense relies on human and sometimes legislative/executive actions, which occur at a slower pace and with a higher error rate than machine-intelligent systems. Whether the new equilibrium favors offense enough to require new diplomatic arrangements and additional "safety by design" features is a vital national security question.

The type of instability I imagine, then, is the lag between the creation of new threats and the neutralization of those threats.
Proliferation Inevitable
------------------------
Among concerned AI safety researchers a lot of emphasis is placed on the GPU clusters and specialized NVIDIA chips that are used for the training of LLMs [17]. However, targeting and tracking these clusters is unlikely to be a long-term viable strategy for AI containment.
Such a strategy assumes three things:
1) that algorithmic advances won't allow for LLM training on more mundane chips and servers over more distributed systems.
2) that tracking such clusters will be easy enough and reliable enough to guarantee safe deployment.
3) that the dangers are in the training and size of models, not what happens in deployment.
I see no reason to accept these assumptions. Just as devices across the planet were co-opted to mine bitcoin, so too some enterprising organizations may create distributed LLM training systems. Already, open-source projects create distributed LLM fine-tuning systems that can run locally. Additionally, the national security dangers are posed not primarily by the training but by the use of these models within more capable systems whose deployment can include a variety of plugins and nested instances that make them into capable cyberweapons. Embedded agency in complex architectures, more than model size alone, is what makes AIs dangerous.
Open-source and smaller models combined with curated data for fine-tuning can create capabilities approaching and in some cases surpassing the sophistication of the largest models [18].
Traditional review-deploy processes and pre-deployment safety standards are unlikely to be able to stop plugins and bootstrapped capabilities without also removing AIs from their most productive potential uses, forestalling important developments, and regulating private individuals and their machines [19]. There may be the ability to identify particularly dangerous plugins or architectures for AI embedded systems. Auditing and providing guidance for defense against those specific capabilities is possible.
Diplomacy
---------
> "Something like the IAEA... Getting a global regulatory agency that everybody signs up for, for very powerful AI training systems, seems like a very important thing to do," Sam Altman [19b].
Arms diplomacy generally occurs under specific scenarios.
1. Collaboration and cost sharing is necessary for the defeat of a shared foe.
2. Protection of allies and NGOs through information sharing.
3. To discourage the use and deployment of weapons that favor offense, are hard to counter, and create lose-lose situations for adversaries.
1 and 2 are reasons for a country to put additional resources into both LLMs and cybersecurity. 3 would favor diplomatic engagement with state-level adversaries.
Adversarial Diplomacy
---------------------
"Adversaries 'pursue' arms control when they recognize mutual interests in reducing the costs and risks of destabilizing competition in building and deploying weapons, especially those that exacerbate risks of inadvertent or accidental escalation" [20].
Non-state actors are increasingly able to operate in what were previously nation-state level activities. In the war in Ukraine, patriotic Russians and Ukrainians are both engaging in hacktivist activities to help the war efforts of their respective countries. Powerful non-state actors can escalate and destabilize critical infrastructure and bring additional layers of unpredictability to world affairs.
LLMs, whether driven with great oversight by specific adversarial organizations or deployed within autonomous architectures, constitute, under even modest assumptions, a larger threat to order than any other weapons system to date.
Conclusion
----------
There are several promising research avenues for neutralization of threats posed by AI in the course of diplomacy. But for such diplomacy to be effective or even available when called upon non-government organizations likely need to take action now.
1. We need organizations proposing and designing monitoring mechanisms and auditing methods. What should these "audits" look like? What should safety evaluations entail? Such organizations should figure out useful ways to discover the types of dangers posed by systems, measure relevant factors in training size, and figure out which data is relevant, and what processes get that information. While I am skeptical that physical hardware monitoring will prove crucially important, we need at least one organization working on hardware and energy signal monitoring.
2. Organizations should compete to design the best legal frameworks for enforcement of AI liability, treaties (and whatever auditing that entails), and oversight. Adoption of intelligent and dynamic legal code that matches the physical and digital facts about the way these systems do and can work would allow for positive development and investment by having a conducive regulatory regime with rules and enforceable guardrails. At the same time, it would protect civil and global society from some of the dangers.
3. For cutting-edge systems, at the very least, a census of their names, numbers, data-specs, training runtime, and subsystems will create more global transparency of what is possible.
4. Some type of intelligence entity/entities should be engaged in capabilities monitoring of "plugin architectures", open-source models, AutoGPTs, and similar systems looking for dangerous actors and looking for ways to defend and disrupt the worst architectures. I think this is the greatest source of danger in the short-term and high value work.
Malicious uses of LLMs extend beyond cybersecurity. Most aspects of security, from energy and finance to food and health, have attack vectors that models will try to exploit very soon. LLMs are a general technology that can be incorporated into construction and delivery systems for a variety of attacks. A close reading of current capabilities reveals that fine-tuning separate models on scientific, engineering, and communication data sets could lead to a terra cotta army of agent models. But here I have relied only upon cybersecurity, as the most specific example of how LLMs are altering equilibria.
Autonomous, self-directed architectures for LLMs decrease the cost of a range of attacks by one to three orders of magnitude. Such inexpensive capacity has the potential to destabilize large portions of human activity - even unintentionally.
National security apparatuses around the world should see advances in LLMs as a risk that requires joint-diplomatic efforts, even with current adversaries, in order to address it.
### Notes
[1] CISA - PMF Roundtable Apr 3, 2023, unnamed source, (official transcript forthcoming)
[2] <https://fortune.com/2023/02/15/cost-cybersecurity-insurance-soaring-state-backed-attacks-cover-shmulik-yehezkel/>
[3] <https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/#:~:text=In%20the%20experiment%2C%20we%20measured,in%20the%20group%20without%20Copilot>
[4] <https://www.youtube.com/live/MMNHNjKp4Gs?feature=share&t=519>
[5] <https://www.forcepoint.com/blog/x-labs/zero-day-exfiltration-using-chatgpt-prompts>
[6] <https://github.com/Significant-Gravitas/Auto-GPT> and <https://arxiv.org/abs/2304.03442>
[7] <https://www.youtube.com/watch?v=DxfSXFkZc6s>
[8] <https://openai.com/research/gpt-4>
[9] ibid.
[10] <https://www.semianalysis.com/p/google-we-have-no-moat-and-neither>
[11] <https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/>
[12] <https://twitter.com/mckaywrigley/status/1647292799006707717>
[13] ibid.
[14] <https://arxiv.org/pdf/2303.16199.pdf>
[15] <https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post>
[16] <https://cybersecurityventures.com/global-ransomware-damage-costs-predicted-to-reach-250-billion-usd-by-2031/>
[17] <https://www.lesswrong.com/posts/eo8odvou4efc9syrv/eliezer-yudkowsky-s-letter-in-time-magazine>
[18] <https://www.semianalysis.com/p/google-we-have-no-moat-and-neither>
[19] <https://www.defense.gov/News/News-Stories/Article/Article/2618386/in-cyber-differentiating-between-state-actors-criminals-is-a-blur/>
[19b] <https://youtu.be/1egAKCKPKCk?t=387>
[20] <https://carnegieendowment.org/2021/01/21/arms-control-and-disarmament-pub-83583> |
9df03291-d263-49ca-8f88-b34bde378800 | trentmkelly/LessWrong-43k | LessWrong | Announcing SPAR Summer 2024!
The Supervised Program for Alignment Research (SPAR) is open to applications for summer 2024! SPAR connects mentors and mentees for three-month AI safety and governance research projects. The Summer 2024 cycle of SPAR will take place virtually between June 14th and September 10th, 2024.
Fill out this interest form if you would like to be updated about important SPAR deadlines for current and future SPAR iterations!
* SPAR mentors propose and supervise projects while managing a team of mentees. You might be a good fit to be a SPAR mentor if you are a graduate student, academic, full-time AI safety researcher, independent researcher, or have prior full-time relevant research experience (e.g., MATS, Astra, GovAI fellow, etc.). Apply here by April 26th!
* SPAR mentees participate in a collaborative research project under an experienced mentor. You might be a good fit to be a SPAR mentee if you have a technical/STEM background or a governance/policy background. SPAR projects vary in how much alignment-specific knowledge they require. Mentee applications open on April 29th. Apply here by May 24th.
SPAR was originally organized by Stanford AI Safety, Berkeley AI Safety Initiative, and Georgia Tech AI Safety, but is now expanding its scope beyond university students. The program is now run by a small team outside the university organizing ecosystem. We are actively looking for ways to make SPAR a better program, so if you have input, please let us know here!
We think SPAR's value add is that it enables mentors to increase their research output by working with a team of mentees, while also allowing them to develop project management, leadership, and mentorship skills. Simultaneously, mentees can explore their fit for alignment research on a flexible basis and build alignment-relevant skills. Both mentors and mentees get an opportunity to produce concrete work that serves as a signal for future hiring.
Applications for mentors are due by April 26th, and mentees are |
f85c6590-ae54-4a25-afc7-5669b0eed972 | StampyAI/alignment-research-dataset/special_docs | Other | The Immigration Preferences of Top AI Researchers: New Survey Evidence | GovAI
The Immigration Preferences of Top AI Researchers: New Survey Evidence

JANUARY 2021

Remco Zwetsloot, Baobao Zhang, Markus Anderljung, Michael C. Horowitz, Allan Dafoe
Perry World House is a center for scholarly inquiry, teaching, research, international exchange, policy engagement, and public outreach on pressing global issues.

Perry World House's mission is to bring the academic knowledge of the University of Pennsylvania to bear on some of the world's most pressing global policy challenges, and to foster international policy engagement within and beyond the Penn community. Located in the heart of campus at 38th Street and Locust Walk, it draws on the expertise of Penn's 12 schools and numerous globally-oriented research centers to educate the Penn community and prepare students to be well-informed, contributing global citizens. At the same time, Perry World House connects Penn with leading policy experts from around the world to develop and advance innovative policy proposals.

Through its rich programming, Perry World House facilitates critical conversations about global policy challenges and fosters interdisciplinary research on these topics. It presents workshops and colloquia, welcomes distinguished visitors, and produces content for global audiences and policy leaders, so that the knowledge developed at Penn can make an immediate impact around the world. Perry World House (its programs and the building itself) builds on Penn's strengths in teaching, interdisciplinary scholarship, and policy-relevant research. By doing so, it cultivates the broad worldview, critical thinking, and leadership required to address the most urgent issues of global affairs.

The Future of Humanity Institute is a unique world-leading research centre that works on big picture questions for human civilisation and explores what can be done now to ensure a flourishing long-term future. Its multidisciplinary research team includes several of the world's most brilliant and famous minds working in this area. Its work spans the disciplines of mathematics, philosophy, computer science, engineering, ethics, economics, and political science.

FHI has originated or played a pioneering role in developing many of the key concepts that shape current thinking about humanity's future. These include: simulation argument, existential risk, nanotechnology, information hazards, strategy and analysis related to machine superintelligence, astronomical waste, the ethics of digital minds, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, prediction markets, infinitarian paralysis, brain emulation scenarios, human enhancement, the unilateralist's curse, the parliamentary model of decision making under normative uncertainty, the vulnerable world hypothesis, and many others.
Contents

About the Authors
Acknowledgements
Executive Summary
Data
Findings
  AI Researchers' Likelihood of Migrating
  AI Researchers' Immigration Decisions
  AI Researchers' Perceptions of Immigration Policy
Conclusion
Appendix A: Additional Results
  Additional Results on AI Researchers' Likelihood of Migrating
  Additional Results on AI Researchers' Immigration Decisions
Appendix B: Sample and Survey Methodology
  Respondent Representativeness
  Survey Questions
Endnotes
About the Authors

Remco Zwetsloot (@r_zwetsloot) is a Research Fellow at Georgetown's Center for Security and Emerging Technology.

Baobao Zhang (@baobaofzhang) is Klarman Postdoctoral Fellow in the Cornell Society of Fellows, Cornell University.

Markus Anderljung (@manderljung) is Project Manager: Operations & Policy Engagement at the Centre for the Governance of AI, Future of Humanity Institute, Oxford University.

Michael C. Horowitz (@mchorowitz) is Richard Perry Professor and Director of Perry World House at the University of Pennsylvania.

Allan Dafoe (@allandafoe) is Associate Professor in the International Politics of Artificial Intelligence, and Director of the Centre for the Governance of AI at the Future of Humanity Institute, Oxford University.
Acknowledgements

This paper is co-published by the Centre for the Governance of AI at the Future of Humanity Institute at the University of Oxford and Perry World House at the University of Pennsylvania, with institutional support from the Center for Security and Emerging Technology at Georgetown University. This project was supported by the Ethics and Governance of Artificial Intelligence Fund, and Michael Horowitz's work on this project was supported in part by U.S. government grant FA9550-18-1-0194. The authors would like to thank Catherine Aiken, Zachary Arnold, Tessa Baker, James Dunham, Melissa Flagg, Charlie Giattino, Roxanne Heston, Igor Mikelic-Torreira, Dewey Murdick, and Helen Toner for feedback on the AI expert survey and the analysis. We are also grateful for research assistance and editorial support from Noemi Dreksler, Emmie Hine, Lauren Kahn, Will Marks, Kwan Ye Ng, and Sacha Zimmerman.
Executive Summary

Artificial intelligence (AI) talent is global. AI researchers and engineers come from, and are in high demand, all over the world. Countries and companies trying to recruit and retain AI talent thus face immense competition. In order to understand current and prospective flows of talent, we investigate the drivers of AI researchers' immigration decisions and preferences.

Immigration questions are particularly salient for the United States today, as half of its current AI workforce and two-thirds of graduate students in AI-related graduate programs were born elsewhere.1 Some experts believe that the current U.S. immigration system will prevent or dissuade many of these international graduates from staying in the country, potentially undermining the vitality of the U.S. technology sector.2 Many other countries have also seen recent immigration policy debates centered on attracting AI talent.3
To better understand the immigration decisions and preferences of this global AI workforce, we conducted a survey of more than 500 active researchers who publish in the leading machine learning conferences.4 Key findings include:

- Some countries are considerably more attractive to AI researchers than others.
  - Nearly 60 percent of respondents not currently based in the United States think there is a greater than one-in-four chance they will move there within the next three years. The same percentages are 35 percent for the United Kingdom, 28 percent for Canada, and 10 percent for China.
  - Some countries, such as the United States and the United Kingdom, are attractive to researchers from a diversity of backgrounds, while other countries, such as China and France, mostly appeal to researchers who grew up in that country or its vicinity.
- Decisions about when and where to move depend on a mix of professional, personal, and political factors.
  - When asked what factors affected respondents' moving decisions, more than 90 percent reported that a country's professional environment and opportunities were important to them. "Lifestyle and culture" (79 percent), the "political climate" (66 percent), and "personal relations" (60 percent) came next. "Immigration incentives in the destination country" and "immigration difficulties in the country of residence" were less important (47 percent and 24 percent, respectively).
  - All respondents considered "professional opportunities," as well as "lifestyle and culture," the most important factors, regardless of where they live. However, respondents who currently live in Asia were less likely than those in Europe or North America to rate "personal relations" or the "political climate" as a key factor.
- AI researchers consider legal immigration barriers a bigger problem in some countries than in others.
  - Nearly 70 percent of AI researchers based in the United States considered "visa and immigration issues" a serious problem for AI research in the country. This was significantly higher than in the United Kingdom (44 percent), Canada (29 percent), China (16 percent), and other countries.
  - As a whole, these findings suggest that the optimal strategies for countries to attract and retain AI talent will vary, depending on national strengths and weaknesses. For countries like the United States, and to a lesser extent the United Kingdom and Canada, the biggest obstacle to international talent recruitment is not attractiveness but legal immigration barriers. For countries like China, however, the opposite is true: the main barriers to attracting AI talent are not legal but professional, cultural, and political in nature.
Data

This paper draws on data from a survey of AI experts who answered several immigration-related questions. The experts were selected based on having papers accepted at top AI research conferences. One group of respondents consisted of researchers who participated in the 2018 Conference on Neural Information Processing Systems (NeurIPS) and the 2019 International Conference on Machine Learning (ICML). Another group had papers accepted at NeurIPS and ICML in 2015 and participated in a 2016 expert survey on AI, fielded by several authors of this paper.5

Out of the 3,030 researchers contacted, 524 (17 percent) completed the survey.6 The survey respondents work and study in more than two dozen countries, although a majority are based in North America and Western Europe. Respondents were asked about a range of issues, including but not limited to immigration.7

Appendix B discusses the sample in more detail, compares survey respondents with nonrespondents to assess representativeness, and lists the complete survey questions and answer options.

Findings
This paper presents results on three questions:

1. How likely are AI researchers to migrate, and where?
2. What factors affect researchers' moving decisions?
3. Do researchers consider immigration policy a challenge to the AI sector in their country of residence?
AI Researchers’
Likelihood of Migrating
To gauge AI researchers’
likelihood of migrating, and
different potential destination
countries’ level of appeal,
respondents were asked: “Of
the following countries, which would you have a greater than
25% likelihood of moving to for
work or study in the next three years?” Respondents saw a list
of 10 countries that they could
indicate an interest in, as well
as an open textbox where they
could list additional countries.
8
Their current country of
residence was omitted from the
answer list.
Figure 1 shows the percentage
of respondents who said they
might move to each of the listed
countries. The United States
The Immigration Preferences of Top AI Researchers:
New Survey Evidence9
Zwetsloot, Zhang, Anderljung, Horowitz, Dafoescored highest, with 58 percent
of respondents who are not
currently residents there saying
they had a greater than 25
percent chance of moving there
within the next three years. The United Kingdom, Canada, and
Switzerland came next
(between 25 percent and 35 percent of respondents). China, despite its large investment and good performance in AI
research, does not score high in
attractiveness, with only 10
percent of nonresident
respondents indicating an interest in immigrating there.
Figure 1 | Which countries are AI researchers most likely to move to? | This graph shows the percentage of AI researchers who reported a greater than 25 percent chance of moving to a country for work or study within the next three years. Respondents were presented with a list of countries to choose from (see Appendix B for a full list), which omitted their country of residence. There was also a "none of these" and an "other" option in which respondents could write a different country they had a greater than 25 percent chance of moving to. Error bars represent 95 percent confidence intervals.

These results provide a crucial but incomplete picture of different countries' attractiveness to AI researchers. For example, out of the 524 survey respondents, 209 already lived in the United States when they took the survey, presumably because they considered it an attractive place to study or work. Their feelings about the United States are not represented in Figure 1, which looks only at researchers not already living in a given country.
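(The error bars in these figures are 95 percent confidence intervals on sample proportions. The report does not state the exact interval construction used; as a rough sketch, a Wilson score interval computed from the Table 1 counts looks like the following, where the nonresident denominator of roughly 231 is back-derived from 134/0.58 and is approximate.)

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 134 of roughly 231 nonresident respondents reported a >25% chance of
# moving to the U.S. (count from Table 1; the denominator is approximate).
lo, hi = wilson_ci(134, 231)
print(f"p = {134 / 231:.0%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```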
To provide a better sense of a country's overall attractiveness, Table 1 in Appendix A shows the number of respondents who currently live in the country as well as the number of respondents who said they may move there in the next three years. For example, whereas the share of surveyed AI researchers who don't live in the United States but who may move there within three years is 58 percent (as seen in Figure 1), the share of researchers who may move there or who already live there is 78 percent. In other words, only 22 percent of the sample found the United States unattractive as a place to work or live (or is unlikely to move there for some other reason).9

Another way to look at attractiveness is by looking at the percentage of researchers in a country who were unlikely to move at all. Figure 4 in Appendix A shows that 37 percent of U.S.-based researchers reported they were unlikely to move anywhere else, compared with between 23 percent and 14 percent for most other common countries of residence.
We also investigated whether researchers' likelihood of moving somewhere varied by where they came from. Figure 5 in Appendix A breaks down respondents' answers by the country where they did their undergraduate degree, which is a common and generally reliable proxy for a person's citizenship.10 Some countries appeared broadly attractive. The United States was the top destination for almost all respondent categories, consistently scoring 50 percent or higher no matter where researchers did their undergraduate degree, and the United Kingdom was also popular (scoring between 23 percent and 71 percent). Other countries had more narrow appeal. For example, respondents who did their undergraduate degree in China were much more likely to report considering a move to China (roughly 50 percent) than respondents from any other country (around 5 percent on average). Similarly, France was popular mainly among European respondents. (Because there were only a limited number of respondents from some of these countries, not all of these differences are statistically significant; see Appendix A for a discussion.)
AI Researchers’
Immigration Decisions
Considering the differences in
how likely AI researchers were
to move to particular countries
raises the question of what
factors determine whether
countries are considered
attractive. To answer this
question, respondents were
asked: “When considering
moving to work or study in a
country that you don’t currently
work or study in full-time, what
factors are important in your
consideration?” Respondents
saw a list of six factors
commonly cited as key to
immigration decisions in the
high-skill immigration
literature.11 Respondents could
select as many factors as they
wanted, and they could also
add other factors in an open
textbox.12
Figure 2 shows the percentage
of respondents who designated
that a given factor is important
to their moving decisions.
Having a “good professional
opportunities and environment”
clearly topped the other factors,
with 91 percent of respondents selecting it as important.
“Lifestyle and culture,” “political
climate,” and “personal
relations” were important to 79
percent, 66 percent, and 60
percent of respondents,
respectively. “Ease of
immigration and/or immigration
incentives provided to move to
destination country” were cited
as important by a little under
half, and least important were
“current immigration difficulties
in country of residence”
(selected by 24 percent of all
respondents).13
Figure 2 reports the results for all respondents, but looking at subcategories provides answers to additional questions. First, the decision factor "current immigration difficulties in country of residence" was mainly relevant to respondents who live in a country where they are not citizens. We do not have a direct measure of respondents' citizenship status, but we can impute citizenship by whether a respondent lives in the same country where they did their undergraduate degree. Figure 6 in Appendix A shows that about 40 percent of (imputed) noncitizen respondents in the United States reported that "current immigration difficulties" were a factor influencing their moving decisions, a higher percentage than in other countries.
Figure 2 | What factors affect AI researchers' immigration decisions? | This graph shows what AI researchers report are important considerations in their immigration decisions. Respondents were asked what factors were important in their consideration of moving to work or study in a country that they don't currently work or study in full-time. Respondents were presented with the six options shown in the figure, alongside an "other" and "none of the above" option. Error bars represent 95 percent confidence intervals.
Second, Figure 7 in Appendix A looks at whether the importance of these factors differed depending on where respondents live. Notably, respondents living in Asia viewed "personal relations" as less important than those living in the United States or Europe (36 percent compared with around 64 percent), and the same is true for the "political climate of the destination country" (41 percent compared with around 66 percent). Yet there are also striking similarities across respondents who live in different regions. For example, "good professional opportunities" and "lifestyle" ranked first and second in importance among all regional respondent groups.
Perceptions of
Immigration Policy
National success in attracting
and retaining international
talent requires countries not
only to be attractive, but also
to have a legal immigration
system that allows researchers
and engineers to live and work
there long-term. Given the
efforts at reform and political
tensions surrounding
immigration policy in the United
States and many other
countries, it is critical to understand how immigration
policy issues factor into the
decision-making process for
talented AI researchers. This
survey presented respondents
with a list of potential issues
that could affect AI research
and asked respondents to
indicate which, if any, applied
to their country of residence.
“Visa and immigration
problems” were among the
potential issues facing foreign
researchers and students (see
Appendix B for the full list of
options).
Figure 3 shows what percentage
of respondents said that “visa
and immigration problems”
were affecting AI research in
their country of residence.
Nearly 70 percent of those in
the United States said that “visa
and immigration problems”
were a serious problem,
compared with 44 percent of
those in the United Kingdom,
29 percent of those in Canada,
and less than 20 percent of
those in other countries. Not all
of these differences are
statistically distinguishable
because there were few
respondents in some of these
countries, but it is clear that AI
researchers in the United States
considered immigration a
bigger issue than researchers
elsewhere.
This finding is consistent with
The Immigration Preferences of Top AI Researchers:
New Survey Evidence14
Zwetsloot, Zhang, Anderljung, Horowitz, Dafoecomplaints by American
researchers and companies that
restrictive U.S. immigration
policies hurt the U.S. AI sector
and that other countries have
better-designed immigration
systems.14 However, other
interpretations are also possible. For example, the fact
that researchers in other
countries rated immigration
and visa issues as less of a
problem for their AI sector
could also be explained by
lower levels of demand for AI
talent in other countries (which
Figure 3 | Where are visa and immigration issues a serious problem for
conducting AI research? | Respondents were asked what they considered
to be serious problems for conducting high-quality AI research in their
country of residence, with an option list that included seven issues (see
Appendix B). Figure 3 presents the by-country breakdowns for one of
these issues: the percentage of AI researchers living in each country who
feel that visa and immigration issues are a serious problem for conducting
high-quality AI research in their country of residence. Error bars represent
95 percent confidence intervals.
The Immigration Preferences of Top AI Researchers:
New Survey Evidence15
Zwetsloot, Zhang, Anderljung, Horowitz, Dafoewould mean immigrants are not
as needed). These data do not
definitively distinguish between
these explanations.
Conclusion
The findings presented in this
paper suggest that different
countries face different
immigration-related challenges
that impact their ability to
attract top AI researchers. For
the United States, attractiveness
is not a big issue. Many top AI
researchers are already based
in the United States, and nearly
60 percent of those who are
not said they “may move to the
United States in the near future.”
However, immigration obstacles
loom large for the United States:
Nearly 70 percent of surveyed
AI researchers based in the
country said “immigration and
visa issues” posed a serious
problem for U.S. AI research, a
considerably higher share than
in any other country. This
finding bolsters long-standing
complaints by leading U.S.
machine learning scientists,
who have long argued that “visa
restrictions have been one of
the largest bottlenecks to our
collective research productivity
over the last few years” and
that immigration obstacles are
hurting the country’s AI sector.15
For countries like the United
Kingdom and Canada, the story is more mixed. The number of
researchers there who said
“immigration issues are a
problem for AI research” is
lower than in the United States
but still considerable. Similarly,
they were the second- and
third-most attractive countries
to AI researchers, respectively,
with 35 percent and 28 percent
of respondents saying they
could move there soon. For the
United Kingdom and Canada,
then, a successful AI talent
recruiting strategy would likely
have to combine reductions in
immigration barriers with
targeted steps aimed at
increasing the countries’
attractiveness.
For China and several other
countries, the main challenge
to attracting international AI
talent seems to be low overall
appeal, rather than their
immigration system. Less than
20 percent of respondents in
China said that “visa and
immigration issues” are an
important problem for the
country’s AI ecosystem.
However, only 10 percent of
respondents not currently
residing in China would consider
moving there, and most of them
appear to be Chinese citizens.
Successful AI talent recruiting
policies for countries like China
will thus have to involve
bolstering attractiveness.
Judging by the factors
The Immigration Preferences of Top AI Researchers:
New Survey Evidence16
Zwetsloot, Zhang, Anderljung, Horowitz, Dafoerespondents said are relevant
to their immigration decisions,
this would likely require
addressing concerns about a
mix of professional, cultural,
and political issues.16
Future research on the nexus
between immigration and AI
talent should go deeper into
these and other questions. First,
similar questions should be
asked not just of researchers
publishing at top conferences,
but also other AI professionals,
such as engineers and
entrepreneurs, who are likely to
be underrepresented in our
data. Second, to validate our
findings, follow-up work could
ask the same questions in
different ways to see if
responses are robust (e.g.,
eliciting immigration
preferences through conjoint
survey experiments as opposed
to direct questions) or collect
data on researchers’ moving
behavior to assess how
predictive immigration
intentions are of actual choices.
Third, many more immigration-
related questions could be
asked, such as which specific
visa and immigration problems
are most worrisome to AI
researchers.Appendix A:
Additional Results
Additional Results on AI
Researchers’ Likelihood
of Migrating
Figure 1 showed how many
respondents said that there was
a decent chance (greater than
25 percent) that they would
move to different countries
within the next three years.
Below are additional materials
to contextualize the data
presented in Figure 1.
First, Table 1 helps provide a
better sense of a country’s
overall attractiveness. The first
two columns show the raw
frequencies and percentages
already displayed in Figure 1,
focusing on how many
nonresident respondents said
they may move to certain
countries within the next three
years. The third column shows
the number of respondents who
already live in a country, and
therefore presumably find it an
attractive place to work or live.
The fourth column sums the
first and third columns together,
showing the total number
respondents who either already
live in a country or would
consider moving there within
the next three years. The fifth
column converts this number
into a share of all 439
The Immigration Preferences of Top AI Researchers:
New Survey Evidence17
Zwetsloot, Zhang, Anderljung, Horowitz, Dafoerespondents who were asked
this question.
Figure 4 provides another perspective on different countries' attractiveness to AI researchers. It shows what percentage of respondents already based in a country said they were unlikely (less than 25 percent chance) to move to any other country within the next three years. For example, 37 percent of researchers in the United States answered "none of these" to the question about future migration destinations, compared with 23 percent or fewer of respondents based in other countries.

Table 1 | Number of current residents and potential immigrants across different destination countries

| Country | Nonresident respondents with >25% likelihood of moving within three years (n) | Nonresident respondents with >25% likelihood of moving (%) | Respondents based in country (n) | Respondents based in country or interested in moving there (n) | Respondents based in country or interested in moving there (%) |
| --- | --- | --- | --- | --- | --- |
| U.S. | 134 | 58% | 209 | 343 | 78% |
| U.K. | 133 | 35% | 54 | 187 | 43% |
| Canada | 114 | 28% | 22 | 136 | 31% |
| Switzerland | 105 | 25% | 22 | 127 | 29% |
| France | 66 | 16% | 25 | 91 | 21% |
| China | 41 | 10% | 21 | 62 | 14% |
| Singapore | 34 | 8% | 3 | 37 | 9% |
| Japan | 26 | 6% | 7 | 33 | 7% |
| Australia | 25 | 6% | 5 | 30 | 7% |
| South Korea | 12 | 3% | 11 | 23 | 5% |
| Israel | 9 | 2% | 4 | 13 | 3% |

Note: Table 1 displays countries selected as answer options for the survey question (see Appendix B); at least one respondent lived in an additional 16 countries not shown here (none of those countries hosted more than six respondents). Data on country of residence was missing for 72 respondents. The denominator used to calculate the percentage in column 2 fluctuates by country due to a varying number of nonresident respondents. The denominator used to calculate the percentages in column 5 is 434, representing the number of people who answered both the work location question and the question concerning potential moving destinations.
Figure 5 considers what destination countries are most attractive depending on where respondents did their undergraduate degree. Country of undergraduate degree is often used as a proxy for nationality when data on actual nationality is not available, because the vast majority of students do their bachelor's in their home country.17 Figure 5 breaks out respondents who received their undergraduate degrees in the United States, China, India, the United Kingdom, and Germany—the five most common undergraduate countries (see Table 2 in Appendix B)—plus a residual "other" category. Some countries, such as the United States and the United Kingdom, had broad appeal across all respondent groups. Other countries, such as China and to some extent France, were attractive only to researchers who appear to hail from those countries or the same region.
Figure 4 | Which countries are AI researchers unlikely to leave? | Respondents were asked where they had a greater than 25 percent chance of moving to for work or study within the next three years. They were presented with a list of countries to choose from as well as "none of these" and "other" options (see Appendix B for the full list). This figure shows the percentage of AI researchers who chose the option "none of these," sorted by current country of residence. Error bars represent 95 percent confidence intervals.
Additional Results on AI Researchers' Immigration Decisions
Figure 2 in the paper displayed the factors that respondents considered important to their immigration decisions. The two figures below analyze specific respondent subgroups that provide additional insights into different AI researchers' decisions.
Figure 5 | Which countries are appealing to AI researchers from a broad range of backgrounds? | Respondents were asked where they had a greater than 25 percent chance of moving to for work or study within the next three years. They were presented with a list of countries to choose from as well as "none of these" and "other" options (see Appendix B for the full list). This figure shows the percentage of AI researchers who selected the six most popular of the possible destination countries, broken down by where respondents did their undergraduate degree.
First, Figure 2 showed that 24 percent of all respondents reported "current immigration difficulties in country of residence" as a factor influencing their decision to move. However, in practice, this factor was relevant only to respondents who are not citizens in their current country of residence. It is therefore instructive to look at this particular factor only among noncitizens. We do not have a direct measure of respondents' citizenship status, but we can impute citizenship by whether a respondent lives in the same country where they did their undergraduate degree. There are four countries where there are more than 10 (imputed) noncitizens in the respondent pool: the United States, Canada, the United Kingdom, and Switzerland.18
For these four countries, Figure 6 shows the percentage of noncitizens—that is, respondents who did their undergraduate degree in different countries—who reported that current immigration difficulties are an important factor for their immigration decisions. Noncitizens in the United States were most likely to say this is the case (40 percent), while noncitizens in Switzerland were least likely to (14 percent).
Second, Figure 2 considered all respondents regardless of their location. Figure 7 below shows the factors that respondents considered important to their immigration decisions broken out by region of current residence: North America, Europe, or Asia. The most notable differences across regions are in how important "personal relations" and the "political climate" in the destination country are, two factors considered much less important by respondents living in Asia. Importance scores for the other factors are mostly similar.
Appendix B: Sample and Survey Methodology
This paper draws on data from an AI expert survey fielded by the Centre for the Governance of AI at the Future of Humanity Institute at the University of Oxford. The survey was circulated via email from mid-September to mid-October 2019 to 3,030 machine learning researchers who had papers accepted at either NeurIPS (2018, 2015) or ICML (2019, 2015), two top conferences in the field. Overall, 524 researchers responded to the survey (response rate: 17 percent).

We collected data on a variety of respondents' demographic characteristics. For example, most live in Western countries (see Table 1), although China and India were the second- and third-most common countries for respondents to have received their undergraduate degree, respectively (Table 2). Most respondents are relatively young, with the average respondent having received their undergraduate degree in 2009 (Table 3). Finally, the majority of respondents work in academia, though a significant minority work in industry (Table 4).
Figure 6 | In which countries are noncitizens most likely to say that current immigration difficulties are an important factor influencing their immigration decisions? | Respondents were asked what factors were important in their consideration of moving to work or study in a country that they don't currently work or study in full-time. Respondents were presented with six options, one of which was "current immigration difficulties in country of residence." This figure shows the percentage of imputed noncitizen respondents who said that this factor was important to their consideration, across the four countries that had more than 10 imputed noncitizens. Error bars represent 95 percent confidence intervals.
Figure 7 | Regional differences in what factors affect AI researchers' immigration decisions | Respondents were asked what factors were important in their consideration of moving to work or study in a country that they don't currently work or study in full-time. Respondents were presented with the six options shown in the figure. The figure shows the percentage of AI researchers who reported that these six factors were important, broken down by their current region of residence (Asia, Europe, and North America). Error bars represent 95 percent confidence intervals.
Table 2 | Country where respondents received their undergraduate degree (top 10) | There were an additional 37 countries in which seven or fewer respondents received their undergraduate degree. Data were missing for 70 respondents.

| Country of undergraduate degree | Number of respondents |
| --- | --- |
| United States | 115 |
| China | 84 |
| India | 43 |
| United Kingdom | 23 |
| Germany | 21 |
| France | 20 |
| Iran | 18 |
| Italy | 17 |
| South Korea | 16 |
| Canada | 14 |
Table 3 | Year respondents completed their undergraduate degree | Data were missing for 120 of the respondents.

| Statistic | Year |
| --- | --- |
| Mean | 2009 |
| Median | 2011 |
| Minimum | 1974 |
| Max | 2019 |

Table 4 | Respondents' sector(s) of current employment | "Other" primarily involves government and nonprofit organizations. Respondents could report multiple affiliations.

| Employment sector | Number of respondents |
| --- | --- |
| Only academic | 360 |
| Only industry | 109 |
Respondent Representativeness
Out of the 3,030 researchers contacted, 524 researchers (17 percent) completed the survey. This raises questions about representativeness: Are those who responded systematically different from those who did not? To assess this question, we collected background data on a random sample of 446 nonrespondents using information publicly available online (e.g., from LinkedIn).

We compared respondents with nonrespondents on a range of dimensions, including gender; the region where they did their undergraduate degree, their Ph.D. degree, and where they are currently employed; the proportion employed in academia and industry or still enrolled in a degree program; and their logged citation counts and H-index. Respondents closely resemble nonrespondents on almost all of these dimensions. The only statistically significant differences between the groups are in employment sector (respondents were slightly more likely to work in academia than nonrespondents) and H-index (researchers with a higher H-index were slightly less likely to respond). For a full comparison table, see Appendix B of our companion paper "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers."19
Survey Questions
We reported results for the following questions:

• "Of the following countries, which would you have a greater than 25% likelihood of moving to for work or study in the next three years?" The answer options presented to respondents (in randomized order) were:

o United States
o United Kingdom
o China
o Australia
o Singapore
o Japan
o Canada
o South Korea
o Israel
o France
o Switzerland
o Other [textbox]
o None of these

These countries were selected based on studies of AI talent that show these countries as the top destinations for AI researchers.20
• "When considering moving to work or study in a country that you don't currently work or study in full-time, what factors are important in your consideration?" The answer options presented to respondents (in randomized order) were:

o Current immigration difficulties in country of residence
o Ease of immigration and/or incentives provided to move to destination country
o Personal relations in the destination country (e.g., friends and family)
o Good professional opportunities and environment (e.g., you are offered a job at an attractive organization)
o Lifestyle and culture
o Political climate of the destination country
o Other: [textbox]
o None of the above

As noted above, these options were chosen based on a literature review of prior immigration-related surveys among researchers and other high-skill immigrants.21
• "Which of the following, if any, are serious problems for conducting high-quality AI research in <INSERT NAME OF YOUR COUNTRY OF WORK> today?" The answer options presented to respondents (in randomized order) were:

o Lack of government funding for AI research
o Lack of corporate funding for AI research
o Lack of funding for training students
o Visa and immigration problems facing foreign researchers or students who want to work or study in <INSERT NAME OF YOUR COUNTRY OF WORK>
o Lack of a successful ecosystem for AI startups
o Not enough top researchers and labs
o The political climate in <INSERT NAME OF YOUR COUNTRY OF WORK> is not conducive to AI research
o Other: [textbox]
o There are no serious problems
Other sections of the survey asked about AI governance challenges, AI progress forecasts and milestones, research priorities and norms, and employment preferences, among other topics. The median respondent took 17 minutes to complete the survey. The Centre for the Governance of AI will publish further reports on the full survey results.22

Endnotes
1. Remco Zwetsloot, Roxanne Heston, and Zachary Arnold, Strengthening the U.S. AI Workforce: A Policy and Research Agenda (Washington, D.C.: Center for Security and Emerging Technology, September 2019), https://cset.georgetown.edu/research/strengthening-the-u-s-ai-workforce/.

2. See, e.g., Association for the Advancement of Artificial Intelligence, "AAAI Response to NITRD RFI: National Artificial Intelligence Research and Development Strategic Plan," October 26, 2018, https://www.nitrd.gov/rfi/ai/2018/AI-RFI-Response-2018-Yolanda-Gil-AAAI.pdf; Oren Etzioni, "What Trump's Executive Order on AI Is Missing," Wired, February 13, 2019, https://www.wired.com/story/what-trumps-executive-order-on-ai-is-missing/.

3. Tina Huang and Zachary Arnold, Immigration Policy and the Global Competition for AI Talent (Washington, D.C.: Center for Security and Emerging Technology, June 2020), https://cset.georgetown.edu/research/immigration-policy-and-the-global-competition-for-ai-talent/.

4. Our study complements a recent survey by the Center for Security and Emerging Technology that looked specifically at the immigration preferences of AI Ph.D. graduates from U.S. universities; see Catherine Aiken, James Dunham, and Remco Zwetsloot, Immigration Pathways and Plans of AI Talent: Findings from a CSET Survey of Artificial Intelligence PhDs from U.S. Universities (Washington, D.C.: Center for Security and Emerging Technology, September 2020), https://cset.georgetown.edu/research/immigration-pathways-and-plans-of-ai-talent/.

5. For the results of the 2015–2016 expert survey, see Katja Grace, John Salvatier, Allan Dafoe, et al., "When Will AI Exceed Human Performance? Evidence from AI Experts," Journal of Artificial Intelligence Research, 62: 729–754, July 2018, https://doi.org/10.1613/jair.1.11222. This previous expert survey did not ask immigration-related questions.

6. We incentivized responses through a lottery: One out of every 10 researchers who completed the survey received a $250 gift card.

7. The Centre for the Governance of AI will publish the full survey results. An analysis of the survey questions focused on ethics and governance can be found at Baobao Zhang, Markus Anderljung, Lauren Khan, et al., "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers," Working Paper, October 2020, https://osf.io/pnvfd/.

8. These countries were selected based on studies of AI talent that showed they were the top destinations for AI researchers. Jean-Francois Gagne, "Global AI Talent Report 2019," https://jfgagne.ai/talent-2019/; Artificial Intelligence Index, "2019 Annual Report," https://hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf; Zwetsloot et al., Keeping Top AI Talent in the United States, Figure 9.
9. Note that these results are not necessarily representative of the views of AI researchers as a whole. The response rate of the survey was 17 percent, and we do not know why some contacted researchers filled out the survey and why others did not, so the results should be interpreted with caution. For example, it could be that researchers who had unfavorable attitudes toward the United States or the United Kingdom were less likely to respond to a survey sent out by universities from those countries. Our analysis comparing respondents with nonrespondents finds little imbalance in terms of demographic characteristics (Appendix B), but we do not have data that allow us to compare response rates across attitudinal variables.

10. Country of undergraduate degree is a common and validated proxy for nationality when actual nationality data is not available, as discussed in Appendix A.

11. See, e.g., Xueying Han and Richard P. Appelbaum, Will They Stay or Will They Go? International STEM Students Are Up for Grabs (Kansas City, MO: Ewing Marion Kauffman Foundation, July 2016), https://eric.ed.gov/?id=ED570660; Mark Musumba, Yanhong H. Jin, and James W. Mjelde, "Factors Influencing Career Location Preferences of International Graduate Students in the United States," Education Economics, 19(5): 501–517, September 2009, https://doi.org/10.1080/09645290903102902. For a summary of this research, see Remco Zwetsloot, James Dunham, Zachary Arnold, and Tina Huang, Keeping Top AI Talent in the United States: Findings and Policy Options for International Graduate Student Retention (Washington, D.C.: Center for Security and Emerging Technology, December 2019), 10–12, https://cset.georgetown.edu/wp-content/uploads/Keeping-Top-AI-Talent-in-the-United-States.pdf. More potential factors could of course have been added, but the number of included factors was limited by the need to avoid overburdening respondents.
12. Fourteen respondents indicated other factors that were important to them in the open-ended textbox. Factors that were mentioned multiple times included distance from family and friends (mentioned four times), political issues or inequality (three times), income and quality of life (twice), and the health care system (twice).

13. There are at least two ways to interpret the "current immigration difficulties" finding. It could be that respondents are not encountering difficulties—for example, because they are already permanent residents or citizens. Or respondents may be encountering difficulties but not letting those influence their immigration decisions. Unfortunately, the phrasing of the question does not allow us to distinguish between these two interpretations.

14. For data and statements, see Zachary Arnold, Roxanne Heston, Remco Zwetsloot, and Tina Huang, Immigration Policy and the U.S. AI Sector (Washington, D.C.: Center for Security and Emerging Technology, September 2019), https://cset.georgetown.edu/research/immigration-policy-and-the-u-s-ai-sector/; Tina Huang and Zachary Arnold, Immigration Policy and the Global Competition for AI Talent (Washington, D.C.: Center for Security and Emerging Technology, June 2020), https://cset.georgetown.edu/research/immigration-policy-and-the-global-competition-for-ai-talent/.

15. For AI researcher statements, see Zachary Arnold, "Misguided Immigration Policies Are Endangering America's AI Edge," Defense One, November 18, 2019, https://www.defenseone.com/ideas/2019/11/misguided-immigration-policies-are-endangering-americas-ai-edge/161366/. The quote comes from Ian Goodfellow, then a research scientist at Google Brain and currently director for machine learning at Apple.

16. For a survey of 2,300 STEM researchers whose findings reinforce this takeaway, see Richard van Noorden, "Global Mobility: Science on the Move," Nature, October 17, 2012, https://www.nature.com/news/global-mobility-science-on-the-move-1.11602.
17. The most likely source of error in using the country of undergraduate degree as a proxy for nationality is Chinese and Indian students, who study abroad—mostly for graduate school but also for their bachelor's—at much higher rates than students from other countries. This means that some Chinese and Indian students will be misclassified as, for example, U.S., Canadian, or British nationals. Past studies have found that error rates are generally 10 to 20 percent at most; see, for example, Zwetsloot et al., Keeping Top AI Talent, fn. 24. Moreover, to the extent that there is an error, the direction of bias is predictable. For example, because Chinese citizens are more likely to say that they might move to China, measurement error would likely lead us to overestimate the attractiveness of China to U.S. or U.K. citizens.

18. The United States is home to 108 (imputed) noncitizens, the United Kingdom to 26, Switzerland to 17, and Canada to 15.

19. Asian countries present in our dataset are China, South Korea, Japan, India, and Singapore. Russia is included in Europe.

20. Zhang et al., "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers."

21. Jean-Francois Gagne, "Global AI Talent Report 2019," https://jfgagne.ai/talent-2019/; Artificial Intelligence Index, "2019 Annual Report," https://hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf; Zwetsloot et al., Keeping Top AI Talent in the United States, Figure 9.

22. Including Zhang et al., "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers."
UNIVERSITY OF PENNSYLVANIA | PERRY WORLD HOUSE | 3803 LOCUST WALK, PHILADELPHIA, PA 19104 | 215.573.5730
f685335c-6062-460f-b8aa-c5a94fd88557 | trentmkelly/LessWrong-43k | LessWrong | Murphy’s Quest Ch 3: Murphyjitsu
Having learned the secret to Sleep Refresh, the rest of Training progresses smoothly. Throughout the day, we stream in and out of the barracks to get a minute of shut-eye between Kobold spawns.
12 ears an hour, 10 hours a day, it only takes 8 more mind-numbing days to finish this fetch quest. To pass the time, Plun and I share stories and songs on the grassy knoll.
Plun’s family was slaughtered by orcs in the war front and he was raised in a local orphanage. He gets teary-eyed telling the story.
Emotional stuff makes me nervous. I think about how Mom comforts me, and reach a hand out to pat his shoulder.
I wish I could say we shared a poignant moment of silent understanding between two human beings and that was somehow enough. I wish I could say that being present for a friend in his moment of need inextricably linked together the strands of our fates.
The truth is, my hand brushes awkwardly against his (rather bulky) bicep, and he shoots me a weird look.
“No Homo!” I shout instinctively.
“What?”
“Uh, no comment, no comment. Continue your story please!”
…
“When the slave trade ended several centuries ago, the Belstine peerage decided to draft their orphans into the war effort.” He gives me a long history lesson but I’m too busy cringing inside.
After Plun’s lecture, he teaches me a melancholy song about war veterans fighting to their dying breath for Belstine. It’s altogether mediocre even to my tone-deaf ears. I absentmindedly pick a modern song to teach him in exchange. What came next, I should have predicted.
Even in this world, Taylor Swift is an instant hit.
—
I’ve been dreaming every night of the old world.
“Mommy, why is my name Murphy?”
“Well, dear, I wanted to name you something normal like John or Allen, but I changed my mind after meeting you. You were just too special!”
She never did tell me what happened. Dad says she first went to the hospital to give birth to me on March 11, 1997.
My birthday is April 4.
It was her decision to name |
01196046-535e-4839-9330-7d7d8bcabf66 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Concrete open problems in mechanistic interpretability: a technical overview
The goal of this talk was to be a whirlwind tour of key frontiers and areas of mechanistic interpretability. Giving talks to an audience of wildly varying technical backgrounds is a massive pain, but I think I managed to convey the key intuitions and concepts, so I'm fairly satisfied with this! One of the messy things about giving a talk like this is that the research field of mech interp (or really any field) is rough, complex, messy and constantly changing, things are hard and confusing, and possibly the entire project is doomed. Yet when giving a talk like this, I feel some implicit social pressure to err towards things being clean, nicely packaged, and somewhat optimistic - please interpret this talk as a maybe vaguely accurate snapshot of some of the frontiers in mech interp at a certain point in time!
---
I guess, just to say a few more framing things. First, if you have questions, please put them into the questions section of my office hours live discussion so I can actually read them then. It's way easier than actually remembering.
I hate giving talks to large audiences. In part because it's really hard to judge where people are in terms of background. So just trying to gauge people, can I get a poll of "I don't know what mechanistic interpretability is" to "I've read at least one paper on mechanistic interpretability" from people to see where people are at?
Cool. It sounds like a lot of people are like, "I kind of know what this is, but I'm not super comfortable with it."
Okay, next question: How much do people know about transformers? From, "I don't know. That's the thing on GPT, maybe.” to "I'm comfortable. I could code my own transformer"? Cool. All right. I'll skip the "What is a transformer" slide, then.
Great. All right. And, yes. These slides will have a ton of links, which, you can see at that link. And, this talk is in part an ad for a sequence I've written called 200 Concrete, open problems in mechanistic interpretability. If there is one thing you take away from this talk, I would like it to be that there's just a lot of stuff you can mess around with in mechanistic interpretability. My main goal with this is to excite people with one of the many different areas of things I'm going to talk about.
This talk is split into a bunch of different case studies and open problems associated with them. So if you just get bored or zone out or whatever, I’ll try to point out when you can start paying attention again. I also have a guide on how to get started in the field for people who are like, "This seems interesting, but what do I do?" and also trying to clarify the many mistakes I see people make trying to get into the field. And, yeah, without further ado, I'll jump in.
**Mechanistic interpretability**
================================
What is mechanistic interpretability? I see the field as being built on this core hypothesis: Models learn human-comprehensible algorithms. They have no incentive to make these algorithms legible to us, but they do learn them. The goal of the field is to understand how we can reverse engineer these and make them understandable to us. Importantly, what does it look like to have an actually rigorous science of reverse engineering these systems? It's so, so easy to trick yourself and to think you have this beautiful story about a model, when actually it's doing something really boring and basic. What does it look like to be able to genuinely understand models in a way where we are not lying to ourselves?
I think a useful analogy is to the idea of reverse-engineering a compiled program binary to source code. We think a model has these algorithms, but it's compiled into this pile of matrices, and ah, what the hell is going on.
And, yes, I'll skip the "What is a transformer?" slide because people seem kind of down with this. I do have [tutorials on what a transformer is](https://neelnanda.io/transformer-tutorial), including [writing GPT2 from scratch](https://neelnanda.io/transformer-tutorial-2). And if you're in the audience and are like, "I think that would be a useful thing to do," check it out.
**Case Study: Grokking via Mechanistic Interpretability (Nanda, et al)**
========================================================================

The first case study I'm going to go through is about [this paper I wrote on understanding grokking via mechanistic interpretability](https://arxiv.org/pdf/2301.05217.pdf). The key claim I want to make in this case study is that it is at least somewhat possible, in theory, to mechanistically understand the algorithms learned by a model. I think this is kind of a wild claim because neural networks are notoriously inscrutable piles of linear algebra. You just kind of stir around in a pile of data, and something kind of functional comes out at the end. And I'd guess that a majority of machine learning researchers would just say that trying to understand the learned algorithms is completely doomed, and you shouldn't even bother. Why do I believe this might possibly work?
In this paper, we set out to tackle this mystery of why models grok. Grokking is this bizarre phenomenon where you train small models, here a one- or two-layer transformer, on an algorithmic task, here addition mod 113. And you give it a third of the data as training data. And you find that it initially just memorizes the training data. But then if you train it on the same data again and again, it abruptly generalizes.
And like what? Why does this happen? This is really weird because models can memorize. Models can generalize. And these are both normal things. But the dramatic switching between them is just bizarre. And I was trying to answer why this happens.

I trained a one-layer transformer to grok addition mod 113 and tried to figure out what it was doing.

The first hint for what was going on came from looking at the embedding. The embedding is the bit of the model that converts the input numbers between 0 and 112 to vectors inside the model. And there were all of these hints: the model has weird, circle-y structure inside, periodicity. You plot neural activations, and they look like that. And it's like, "What? Something is going on here."

The key insight to getting traction on what was happening with this model is noticing: the input seems kind of weird and circle-y and periodic. What do we do to understand periodic things? We use Fourier transforms. So take the model and ask, "How does it embed the input in terms of sums of sine and cosine waves at different frequencies?" It turns out that the trained model is just incredibly sparse in this basis, while randomly initialized models look completely random. Something interesting is going on.
**The learned algorithm**
=========================

To cut a long story short, I did some further digging and found that this bizarre algorithm was one it had learned to do modular addition. At a high level, modular addition is equivalent to composing rotations around the unit circle. Composition adds the angles together, and because it's a circle, you get modularity for free. The model decided that the simplest way to do modular addition was to convert things to these rotations, use trig identities to compose them, and then use further trig identities and the final softmax and argmax to just extract the answer. What? But this is a beautiful, nontrivial algorithm I found by staring at the weights of this model.

We have a bunch of lines of evidence in the paper that you should go check out. One, which is particularly striking, is that if you ablate (i.e., delete) everything the algorithm says should not matter, performance can actually increase. Which is like, what?
We can then use this to figure out what's up with the grokking by producing pretty pictures. These should look beautiful and pretty, just to say. Turns out what's going on with grokking is that the model does not suddenly generalize. The model initially memorizes. But then during the bit where train and test loss both seem kind of flat, the model is actually slowly transitioning from the memorized algorithm to the bizarre trig-based general algorithm. If this animation was working, you'd be able to look into the model as it was learning and see it slowly forming this. And the reason it looks like it suddenly generalizes is, only when it's got so good at generalizing and doing the trig-based algorithm does it suddenly forget the memorization. You only actually get good test performance when you have both learned a good algorithm and are no longer memorizing.
That got a bit in the weeds. But, the main takeaway I want you to get from this section is just, at least in toy settings, models have rich structure. They learn beautiful algorithms, and these algorithms can be understood. And an area of problems building on this is just train models to do algorithmic tasks, and try to see what happens. I think this is just a really good category of problems if you're like, "I have never done this kind of stuff before, but this seems kind of cool." Train a model on a task like modular addition. There's this great paper called "Transformers Learn Shortcuts to Automata." Train a model on something in that, and try to understand how it works. I think there's a lot of cool insights to be gained, even on these kind of toy settings.
Alright. I want to check in on where everyone is at. Where are people from, “Yep. Totally following,” to “What the hell is going on?”
That is so much higher than I thought. Wonderful. I'll go faster.
**What is mechanistic interpretability?**
=========================================
What is mechanistic interpretability? That was a very quick tour of intuition for why this might make sense.
Again, core hypothesis: models learn human-comprehensible things, and these can be understood.
What do we think the models actually look like? The two key concepts here are features and circuits. A feature is some property of the input, which you can think of as a variable the model is representing inside itself. And a circuit is an algorithm the model has learned that takes some features (or the raw input) and produces more features.

This is a cute diagram from Zoom In (the Circuits thread), where they have three neurons in an image-classifying model that seem to represent the features: car windows, car bodies, and car doors. They find these convolutional weights that combine them to produce a car detector: you have windows on the top, wheels on the bottom, and doors in the middle. This is just a really cute, simple example of a circuit.
A natural question is, why are you doing things of the form, looking at neurons and looking at weights? I think the field has, in many ways, moved on from literally just that, as I'll hopefully cover. But the core spirit of a lot of the current mech interp work is forming an epistemic foundation for what good, rigorous interpretability would look like. Lots of people try to do things in interpretability, and I think that it's very easy to lack rigor, to trick yourself, to have pretty beliefs about models that do not actually hold up.
I want to start with any setting at all where I genuinely believe I can form true beliefs about a model, and use this as a foundation to figure out what more ambitious things that actually work would look like.
I want to take a bit more into these themes of features and circuits. The theme with this section is models are feature extractors. They take inputs, and they try to find properties of them, compute them, represent them internally. And if we know what we're doing, we can identify what these features inside the model are.

Here is a beautiful picture from the fabulous multimodal neurons paper with just a bunch of wild features they found in this model called CLIP. This model was taking images, taking captions, and seeing if they were a good match. And it learned features like the USA, Donald Trump, anime, the abstract notion of teenagerism. The Donald Trump neuron actually activated on things like MAGA hats, and on Republican politicians more than Democratic politicians. It's wild.
A notable thing is that there's been a lot more effort on doing this kind of biology, seeing what's out there, hunting for interesting features, on image models than on language models. There's been some cool work.

Here's one of my favorite neurons - numbers that implicitly refer to groups of people neuron from their softmax linear units paper. But for the most part, people just haven't looked that hard at just what's out there.

I have this tool called Neuroscope where you can just take millions of neurons in a bunch of open-source models and see the text that most activates them. Like, here's a neuron that seems to activate on the text "in the," immediately after text about running or walking or motion. Because why not? Unfortunately, OpenAI recently put out a paper which just has a better version of this, so you could probably look at that instead.
One of the key things to understand when looking at a model is the features, the properties of the input, what the model knows. We can identify these and thus engage what the model represents. And I think that a really cool area of open problems that's incredibly accessible is just going and looking at a model's neurons, seeing what kind of things they might represent, seeing what kind of features are out there, taking your hypothesis for of what a feature could be and what a neuron is tracking and then actually testing it by trying to generate inputs that should or should not activate it and trying to falsify your hypothesis. In this post, I give a bunch of ideas and link to some resources. But also you should go to the OpenAI thing because it's just better than my resources. It's very impressive.
Case study for circuits, these are the functions and the cognition inside the model, how it thinks. The thing with this section is that circuits are, at least in some context, a real thing we can find and understand in models and can be understood. Understanding them is genuinely useful in a bunch of ways for engaging with what on earth is going on in the model.
One of my favorite examples of a circuit is induction heads. This is the phenomenon we found in A Mathematical Framework for Transformer Circuits, a paper from the Anthropic interpretability team.
What's the idea behind an induction head? Language models are trying to take text and predict the next token, or essentially the next word. A fun property of text is that it often contains repeated subsequences. Like, if you see Neel, and you want to predict the next token, you're going to predict a lot of stuff. Unfortunately, Nanda is probably not at the top of that list, but one day. If you see Neel Nanda three times in the previous text, you're going to guess Nanda is going to come next. This is just a really good algorithm for understanding what's happening in text. It turns out that even two-layer attention-only transformers learn this algorithm. There are two heads in different layers which work together to implement the simple algorithm of: check whether the current token has occurred before, look at what came next, and assume that that comes next. At least for the extremely simple case of literally look for the current token in the past, look at what comes next, and predict that, we kind of have a mechanistic understanding.

Here is a beautiful diagram from Callum McDougall that you should totally go look at, which I'm going to try to explain. One general epistemic caveat I will give is, this is an analysis of the simplest form of the circuit.
Models are always more cursed than you want them to be. When you actually look into models, they're doing stuff that's kind of like this, but also doing more stuff. It's kind of messy. And I think a core problem is just really properly figuring out what is up with this. But it's doing copying-like things, and this is a decent chunk of it.
Should we care? Is this interesting? What does this tell us about models?
I'm going to say, I think this is just intrinsically cool, and you shouldn't need more of a reason than that. It's also notable because, I don't know, you have all these people being like, "These models are stochastic parrots. They only learn statistical correlations," and it's like, nope. Algorithm. We found it. It's a thing.
But if you want a stronger reason: language models can do this weird thing called in-context learning. First, you give them a bunch of text. They try to predict what comes next, and they're capable of using all of this text to get better at predicting what comes next. It's kind of obvious how you'd use the current sentence to predict what comes next. If you want to predict what comes after "The cat sat on the," it's "mat." If you don't have the "The cat sat" bit of it, you're like, "I have no idea." Obviously having recent text is useful.
But it's not obvious to me that if I had three pages of a book and wanted to predict what came next, knowing what was in some random paragraph on the first page would actually be helpful. The model is pretty good at this. And this is behind a bunch of interesting behavior like few-shot learning. You give the model a bunch of examples, and it figures out the task and does it better next.
A particularly interesting thing about in-context learning is that it's an emergent property. Meaning, when you're training a model, there is just this period where it suddenly gets good at in-context learning. (Emergence can mean many other things, but whatever.) That is, if you look at how good the model is at predicting tokens with a bunch of prior context relative to tokens with only a bit of prior context, there is just this narrow bit of training where the model suddenly becomes good at this.
This is weird and surprising. Why does this happen? And it turns out that, at least in my judgment, the reason why this happens is that the model forms induction heads, or at least things for which induction heads are an important part of that behavior. Induction heads also form in a sudden, emergent phase transition, at like the same time.
And in this paper, "In-context Learning and Induction Heads," we go through a bunch more reasons for why we think induction heads are actually the cause. Like, models that cannot form induction heads do not get good at in-context learning; you can make a tiny tweak to the architecture so they can, and then they learn to get good at it. And a bunch of other stuff.
A further caveat: in one sense, "induction heads" means the things that perform the induction behavior, and in another sense, it means the things that implement this specific algorithm. And I think it's not completely obvious they're the same thing. I think they're similar in a bunch of ways.
What are the takeaways of this case study? Circuits are the algorithms learned by a model, how the model thinks. The models learn these legible algorithms. At least in some contexts, we can reverse-engineer these. They can tell us interesting things about model behavior, even on big, real models.
The induction head in-context learning result we found doesn't just apply to these tiny toy models. It applies to 13-billion-parameter language models. I recently found some in Chinchilla. They just seem like a pretty big part of what makes language models work.
What are open problems building on this? I think it is kind of striking that we found induction heads by looking at two-layer attention-only language models, which are just pathetically easy to understand relative to GPT3. Not actually easy, but relatively, pathetically easy. Yet these had genuinely interesting consequences for real models. And I think there's just a lot more we can learn by looking at these tiny language models, in a bunch of problems. I also open-sourced some models and have a library called TransformerLens to try to make doing this kind of thing easier. A concrete problem I would love to see people do is just take a one-layer model, with MLP layers or just attention-only, and see if you can say anything interesting about what that model is doing. Like, how does it track statistical things? Like, after the tokens "Nobel" and "Peace," "Prize" is likely to come next. This is a thing models can probably do. How do they do it? We are just kind of really bad at understanding what is going on in the MLP layers of transformers. I would love more data on this.
The secondary open problem is analyzing the training dynamics of a model. Here, we found that circuits seemed importantly linked to this some emergent thing that a model did. I generally think this is interesting and surprising. What else can we learn? What can an interpretability lens tell us about what happens to these models during training? There's just so much. I don't know. It's just kind of a wild fact about the world that you have a stack of matrices, you randomly initialize them, you give them a massive pile of data, and at the end, you have a pile of matrices that is good as the thing. What is happening in the middle? I don't really know.
A concrete question here is: we know that if we fine-tune models on a task, meaning we take a model that's already trained and train it on some more specific task, like doing multiple-choice questions or predicting Wikipedia text, they'll get a lot better at it. Why does this happen? Can we analyze this on a circuit or feature level? Does it just upweight and downweight the relevant bits, or does it learn whole things anew? I don't really know. No one's really checked.
Now we go on a bit of a tangent, to try to better motivate why you might care about mechanistic interpretability. And the core thing here is, I think of mechanistic interpretability as trying to understand and engage with the cognition of a model. What is it thinking? What is it doing? What are the algorithms behind why it outputs the things that it outputs? And importantly, what would it take to be able to tell the difference between a model that is aligned, and one that is just telling us what we want to hear?
A core motivator of why I care about mechanistic interpretability (I care much less about other forms of interpretability) is I think that a lot of what people do or at least a lot of kind of naive, basic approaches of doing machine learning or even naive approaches doing interpretability look at a model's inputs and outputs. And I think this is just clearly not enough because you cannot distinguish the output of a thing that is competent and telling you what you want to hear or something that's actually telling you correct things.
A particularly evocative example is from the GPT4 paper. ARC Evals tried to get GPT4 to get a TaskRabbit worker to solve a CAPTCHA for it. GPT4 realized it would be instrumentally useful to lie to this person, and then successfully lied to this person. GPT4 is not an AGI, or even that close. But I feel like we're already in a world where looking at the inputs and outputs to a model is insufficient.
A particular theory of impact I'm excited about is using mechanistic interpretability for auditing: taking a model and trying to understand, is it deceiving us or being truthful? A concrete world where I think it's very plausible, well, moderately plausible, that we end up in is: we produce an AGI, and we have it securely within the lab that made it. We're trying to figure out whether it's okay to deploy in the world. And it just seems pretty alright. For every test we throw at it, it just does reasonable things. It seems really helpful. We try to red-team it and trick it into betraying us and let it think it can gain power, and it doesn't. What do we do? And it's all a black box.
I don't know. I just don't really know what I think is the correct thing to do in that situation. You want to take much longer. You want to try harder to test it and red-team it. But I think it's unreasonable to just say we will never be comfortable deploying an AGI. Also, if you say that, then someone else who is much less well-intentioned will eventually do it, even if you get like 5 years on them. But I also am not that comfortable deploying a human-level AI into the world when it could be aligned, or could just be really, really good at telling people what they want to hear. I think it's plausible this could happen because so much of our ability to train and engage with these systems rests on what we can measure. We cannot measure the model's internal cognition, which means it's extremely hard for us to distinguish between the aligned and misaligned cases for a sufficiently competent system. Which is generally a lot of why I think alignment matters.
One approach that I think is particularly interesting here is this idea of eliciting latent knowledge: looking into a model, understanding what it knows and believes about the world, and being able to extract that. And there's this great theory paper from ARC just outlining the problem, and this great empirical paper from Collin Burns about a very, very initial and janky approach to seeing what it might actually look like to extract what a model believes to be true. A problem I would love to see someone do is just take one of the probes Collin trains in his paper, look at the model on some inputs, look at how they activate the probe, and see if you can figure out anything about which model components or circuits activated it. As far as I know, no one has checked.
I also just have a bunch of other takes on theories of change. Very briefly: auditing for deception is kind of unambitious. What would it take if we could do this? I think if we could do it competently, we could use this to actually get better at doing alignment. It's really hard to get feedback beyond a certain point on whether your alignment scheme is actually working. If we can audit models for deception, this could actually help. Maybe that's not ambitious enough. What would it look like if we were much better at this?
Another thing I'd love is for all of the human feedback raters helping train models like GPT3 to be able to give feedback not just on whether the model is doing the right thing, but on whether it's doing the right thing for the right reasons, by looking at why it does what it does. And this is kind of a sketchy thing to try, because the more you optimize against your interpretability tools, the more likely they are to break. But I don't know. Maybe we'll get good enough that this works. Maybe we won't. No one is really checking. It seems important.
The second key area is just, is misalignment actually a big deal? I think it is, and I imagine many people in this room think it is. But it is also just an empirical statement about the world that most people do not. And also that most of the people who are actually making decisions do not think that misalignment is a big deal. I think that looking inside systems and understanding why they do what they do and being able to empirically demonstrate things like deception or misalignment is a very important part of what it would take to actually convince people who are skeptical, less convinced by conceptual arguments, who really want to see clear, hard proof to just actually help convince them what's going on.
A separate thing is just, “No man, alignment, it's so philosophical. This is really annoying. We have all these notions like agency and goals and planning and situational awareness, and these are fundamentally statements about what goes on inside a model. But we only really have these conceptual, philosophical things to ground it and not that much actual empirical data. What would having that look like?”
Cool. I've talked a bunch about theories of change. I'm going to give the hot take that I think often EAs care too much about theories of change. I think that often the best science just looks like, try to really understand what the hell is going on inside at least one model. And try to be rigorous and truth-tracking, but also just follow your sense of curiosity. And I think if you're like, "Ah, screw these. I just want to do whatever research feels exciting," also go wild and do that.
The second thing is just, I don't know. We're just not very good at mechanistic interpretability. Kind of sucks. It would be great if we were better at this. I think basically all of these theories of change are gated on being better at it, and I don't think it really matters that much what your priorities are relative to just, "I would like to get good at understanding systems. I would like to engage with the various conceptual and fundamental problems here and make progress."
Another bit of general framing is, "Why do I work in mechanistic interpretability, and should you work in mechanistic interpretability?" So, I don't know. Really fun, would recommend. You can get started pretty easily. You can just go play with systems, get your hands dirty, and get some feedback. It's like I'm doing biology on this weird, alien organism, but also it's made of maths, and I can actually manipulate it and read its code and see what's going on inside of it, on a mathematical level if not yet on an actual-understanding level. And I can form a hypothesis and run experiments and get feedback within minutes, and it's just kind of lovely.
If you're hearing this, you're like, "I actually want to go and try getting involved." Here is my guide on how to get started, some general bits of advice. Don't do the thing where you think you need to spend 2-plus months reading papers. Write code. Play with models. Get your hands dirty. Try hard to find collaborators. It's just much easier when you have other people you're chatting with.
And in particular, one of the main traps people fall into when trying to do interpretability research is they form beautiful hypotheses that are complete bullshit. And having collaborators who can help point this out is pretty great. A common trap I see in EAs is people who are like, "I must optimize. I must find the most important thing to do and the highest-impact research question and do it perfectly." And, yeah, screw that. Just do stuff. Just pick a problem that feels fun, and jump into it. To undercut that advice, since I generally think you should give advice and then give the opposite advice.
Are too many people doing mechanistic interpretability? And I think the answer to this is, obviously no. But I think there is some substance to this question, which is that I think that we should have 10 times as many people working on every area of alignment. And I think we should have 10 times as many areas of alignment, because oh my god, we don't know what we're doing, and that's kind of an issue. But I also think that within the current portfolio of people in EA who are trying to do different things, I do notice something of a skew toward doing mechanistic interpretability. I don't know. In some ways, this is kind of fine. This is kind of awkward when I get like 70 fantastic applicants and have to decline a bunch of them. If you're in this room, sorry. But also, I don't know. This is just kind of a messy question. My general advice is I think more people should be doing mechanistic interpretability. I also think more people should be doing everything else in alignment.
If you're listening to this talk, and you're like, "Oh, my god. This sounds amazing. This sounds so fun. I want to dive into this. But maybe this is not the most important thing." Just fucking go for it. If you're like, "Ah, mech interp seems cool, but this other area of alignment also seems really cool," or, "Ah, no, mechanistic interpretability, I feel like, is missing this really important thing, and this other area of alignment isn't." Just go do that. If you're kind of ambivalent, there's a very slight push on the margin to go do other stuff, but I don't know. This is a complicated question. I want more people doing everything.
All right. That was a bunch of philosophical ramblings. I'm going to get back to case studies.
This is a case study on understanding how models actually represent their thoughts. A core subgoal of mechanistic interpretability is to decompose a model into units - things we can reason about, things we can interpret that have meaning, and which then compose together. This is kind of what I mean when I say a feature - bits of a model that represent things that have meaning.
Models are massive, high-dimensional objects. This is kind of awkward. GPT3 has 200 billion parameters. It's kind of hard. You can't really do anything with a massive, high-dimensional object unless you can decompose it into smaller chunks that can be analyzed. This is just kind of crucial. One of the hopes of the early field was that neurons would correspond to features. This would be so convenient if true. There are actually some reasonable theoretical reasons to think this might be true. A core thing models are going to want to do when they represent features is to be able to vary the features independently. You might have features that are correlated: "Is this a picture of a pet?" and "Is this a picture of a canine?" are clearly correlated. But you probably want to use these in different ways downstream. It's probably useful for a model if it can represent the features in a way that lets them vary without interfering with each other.
Neurons have an element-wise nonlinearity applied, like ReLU or GELU, where each neuron can vary completely independently of the others. But because each ReLU or GELU is nonlinear, if two features share a neuron, then the value of one will affect how the other changes the output of that neuron. This is a great theoretical argument for why features would correspond to neurons. But as you may have guessed from throughout this talk, this is kind of bullshit. We have this problem of polysemanticity. Neurons seem to correspond to multiple unrelated things, like here's a neuron which seems to respond to pictures of poetry and also to pictures of card games and dice and poker. It's kind of obnoxious.
I think the theoretical argument was kind of sound, so why does this happen? The hypothesis of why this happens is this idea of superposition. Superposition is a hypothesized thing where models want to represent as many features as is humanly possible. If you want to predict the next word, there are millions of facts about the world that are useful to know. Like, "Who is Neel Nanda?" is occasionally slightly useful, even if GPT4 doesn't know who I am. Ideally, models would prefer to know every fact. But they only have so much room inside of them, so many neurons.
The idea of superposition is, okay, models could represent one feature per neuron. But they could also compress features. They could say each feature corresponds to a direction. But rather than each feature being orthogonal to every other one, you could have vectors that are almost orthogonal. Random maths theorems say you can squash a ton more almost-orthogonal features into a space. The hypothesis is this is why neurons are polysemantic. If you're representing more features than you have neurons, you obviously cannot have neurons that each have a single meaning, because maths.
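(A tiny illustration of the "almost orthogonal" point - my own sketch, not something from the talk: in d dimensions you can only have d exactly orthogonal directions, but random directions interfere with each other surprisingly little, which is what superposition exploits.)

```python
import torch

# Sketch: sample many more random unit directions than dimensions and check
# how much any two of them interfere (cosine similarity).
torch.manual_seed(0)
d, n_features = 64, 512

dirs = torch.randn(n_features, d)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)

cos = dirs @ dirs.T
cos.fill_diagonal_(0.0)

print(f"max |cos| over {n_features} directions in {d} dims: {cos.abs().max():.3f}")
print(f"mean |cos|: {cos.abs().mean():.3f}")
# Typically the worst pair has |cos| around 0.5 and the average is around 0.1:
# non-zero interference, but small enough that sparse features can coexist.
```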
So what does interesting research here look like? One of my all-time favorite papers is "Toy Models of Superposition," which says, okay, we have this hypothesis that superposition is a thing models can do. Is this actually a thing models will ever do? They constructed this toy model that just had a bunch of input features, had to linearly map them down to a small space, linearly map them back up to a large space, and see if it could recover the features. This is kind of testing whether models can compress more features than they have dimensions into some small space and then recover them.
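(Here's a rough sketch of that setup - my paraphrase of the paper's architecture, with details like feature importance weights and the exact training setup omitted or guessed: sparse features in, a linear map down to a few dimensions, a linear map back up, and a ReLU on the reconstruction.)

```python
import torch

# Toy-models-of-superposition style sketch: reconstruct n_features sparse
# features through a d_hidden bottleneck, recon = ReLU(W^T W x + b).
n_features, d_hidden, sparsity = 20, 5, 0.95

W = torch.nn.Parameter(torch.randn(d_hidden, n_features) * 0.1)
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(5000):
    # Each feature is zero with probability `sparsity`, else uniform in [0, 1].
    x = torch.rand(1024, n_features) * (torch.rand(1024, n_features) > sparsity)
    recon = torch.relu(x @ W.T @ W + b)
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# If more than d_hidden columns of W end up with near-unit norm, the model is
# representing "extra" features in superposition rather than dropping them.
print("feature direction norms:", W.norm(dim=0).detach())
```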
They found a bunch of really interesting results: that models do actually use superposition; that models were trading off the number of features they could represent against interference between features; and that as features became sparser and less common, models were much more willing to use superposition. And I think this is incredibly cool.
An open problem building on this is… One thing I find pretty dissatisfying about this paper is that they mostly focus on linear superposition - taking features, linearly mapping them down, and linearly mapping them back up. The thing I care about a lot is mapping things to neurons, where you have more features that you're trying to compute than you have neurons. Where you're not just taking the inputs and recovering them, you're actually computing new features. I would love someone to build on the very brief section they have in here trying to investigate those questions and explore it further. One concrete problem I would love to see someone do is just take n binary variables, have at most two activated at a given time, and train a model that outputs the AND of every possible pair of inputs, give it some number of neurons in the middle, and see what happens. I don't know what would happen. I predict it would use superposition, and I expect that understanding how it does this would teach me something interesting about models.
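(A sketch of how one might set that experiment up - the data generation and model here are my guesses at a reasonable version of the problem, not a spec from the talk.)

```python
import itertools
import torch

# Open-problem sketch: n binary inputs, at most two active at once; the target
# is the AND of every pair of inputs, computed through fewer neurons than pairs.
n = 10
pairs = list(itertools.combinations(range(n), 2))   # 45 target outputs
n_neurons = 20                                      # fewer neurons than pairs

def sample_batch(batch=512):
    x = torch.zeros(batch, n)
    idx = torch.randint(0, n, (batch, 2))
    x.scatter_(1, idx, 1.0)                         # at most two inputs active
    y = torch.stack([x[:, i] * x[:, j] for i, j in pairs], dim=1)
    return x, y

model = torch.nn.Sequential(
    torch.nn.Linear(n, n_neurons), torch.nn.ReLU(),
    torch.nn.Linear(n_neurons, len(pairs)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(10_000):
    x, y = sample_batch()
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The interesting part is then inspecting the hidden layer: does each neuron
# compute one AND cleanly, or is each AND smeared across many neurons?
```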
Another angle: quite a lot of mechanistic interpretability is fundamentally theory-crafting about what we think happens in models, on reasonably scant evidence, because the field is tiny and not that many people have actually worked on these questions and gotten this empirical data. Embarrassingly, we just didn't really have empirical evidence of superposition in models for quite a while.
Here's a paper I put out about 3 weeks ago led by Wes Gurnee, one of my MATS scholars, called "Finding Neurons in a Haystack", where we found, in my opinion, reasonably compelling evidence that models do use superposition a lot in early layers. We looked at how models detected compound words like Social Security. We found a bunch of neurons that were mediocre Social Security detectors. But if you add together, say, three of them, this becomes a pretty great Social Security detector.
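(To give a flavor of what "mediocre detectors that sum to a good one" means, here's a toy illustration with made-up data - not the paper's code or its actual probing setup.)

```python
import numpy as np

# Toy illustration: three neurons each carry a noisy copy of the same binary
# feature (e.g. "this token is the second half of 'Social Security'").
rng = np.random.default_rng(0)
feature = rng.random(5000) < 0.05
neurons = np.stack([feature + rng.normal(0, 0.8, 5000) for _ in range(3)])

def auc(score):
    # Probability that a random positive example scores above a random negative.
    pos, neg = score[feature], score[~feature]
    return (pos[:, None] > neg[None, :]).mean()

for i in range(3):
    print(f"neuron {i} alone: AUC = {auc(neurons[i]):.3f}")
print(f"sum of all three: AUC = {auc(neurons.sum(axis=0)):.3f}")
# Each neuron alone is a mediocre detector; their sum separates the feature
# much more cleanly - the behavior you'd expect if the feature is spread
# across several neurons in superposition.
```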
And I think another pretty accessible and interesting open problem is just we have a long list of interesting-seeming neurons we found in this paper. Can you go and explore them, play around with them, and see what they activate on?
A concrete question I'm pretty excited about is, can we find examples of superposition across three neurons? Or does superposition tend to be just spread across hundreds of neurons in a way that's not really that aligned or easy to deal with?
A challenge motivated by this paper is, can we find a neuron that we can actually prove is meaningful? The neuron just corresponds to a single concept. And we found the French neuron, a neuron which seems to represent French. But I don't think we've actually proven that this represents French. There is this great paper called "Curve Circuits," which, to my lights, actually does prove that they found some curve-detecting neurons in an image model. But I think there are a bunch of challenges, like look at the model on every kind of data it could ever experience in training, and see if it ever activates on non-French stuff. See if there are rare features. Also, questions like "What even is French?" When should the neuron activate? This is kind of an uncertain, interesting question. And I think that even just the project of trying to solve and make progress on this would be a pretty interesting one.
Overall takeaways from the previous section: understanding how models represent their thoughts seems pretty important. We don't really know how to do this. We think they're using this thing called superposition. In my opinion, understanding superposition in language models is actually one of the most important open problems in the field at the moment, and one that I would love to see more progress on. Both forming better conceptual frameworks for how to think about it and getting more empirical data on when it does and does not happen in real models just feel really exciting to me.
Build scalable techniques
=========================
Next case study, building scalable techniques - what does it look like to do mechanistic interpretability at scale? And again, if you've zoned out, you can zone back in here because this doesn't really depend on the rest of the talk. One of the areas of technique in mechanistic interpretability where there's been a lot of progress in the last 1 to 2 years is this idea of causal interventions, which have many names and many different papers trying to build on them and do various things with them.
So a standard technique in interpreting models is this idea of activation patching. I'm sorry. A standard, old technique in interpretability is this idea of ablations or knockouts. You've got a neuron you think matters for some task. You just set the neuron to zero, and you see what happens. This sometimes works. Sometimes it will do nothing. Probably the neuron doesn't really matter. Sometimes it will break the model. Probably this is a sign the neuron does matter. But this is really coarse and unrefined and boring. And it's really hard to tell whether you've just broken the model - maybe the model just always assumes the neuron is equal to five, and we set it to zero, and everything died - or whether it's actually an important part of the model's cognition. There's this core idea of resampling or activation patching where, rather than ablating something by replacing it with zero, you replace it with the value that neuron, or layer, or head could take on another input.
A concrete example of this: let's say we want to understand how the model knows that the Eiffel Tower is in Paris. We can take the two prompts. The Eiffel Tower is in Paris, and the Colosseum is in Paris. I'm sorry. The Colosseum is in Rome. Details. And we want to understand how, after the "in" token, the model predicts Rome versus Paris. And if we just took the prompt, "The Eiffel Tower is in," and tried to figure out how it said Paris, there's so much stuff here - you want a city; you're doing factual recall; you are speaking English; the next thing should begin with a capital; the next thing should begin with a space, and all kinds of boring stuff.
I want to specifically understand how the model recalls the factual knowledge of where the Eiffel Tower is. But the prompt "The Colosseum is in" controls for everything apart from what the landmark is. And so a surgical intervention you can do, to understand which chunk of the model represents this knowledge about the Eiffel Tower, is to take some layer, or the residual stream at some point, and patch it between the two runs. You would give the model the input, "The Colosseum is in." You would take this layer. You take its value on the Eiffel Tower prompt, and you edit the Colosseum run to replace it with the Eiffel Tower value. And you see which layers matter for producing Paris and which ones don't. And because this is such a surgical intervention, it can tell you much more useful things.
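(A sketch of what this looks like in code, using the TransformerLens library mentioned at the end of the talk - the overall shape is right, but treat the hook names and helper functions as from memory rather than gospel, and GPT-2 small here is just a stand-in model.)

```python
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

clean   = model.to_tokens("The Eiffel Tower is in")
corrupt = model.to_tokens("The Colosseum is in")
paris   = model.to_single_token(" Paris")
rome    = model.to_single_token(" Rome")

# Cache every activation on the clean (Eiffel Tower) run.
_, clean_cache = model.run_with_cache(clean)

def logit_diff(logits):
    return (logits[0, -1, paris] - logits[0, -1, rome]).item()

# For each layer, run the corrupted (Colosseum) prompt, but overwrite the
# residual stream at the final token with its value from the clean run.
for layer in range(model.cfg.n_layers):
    name = utils.get_act_name("resid_post", layer)

    def patch(resid, hook, clean_act=clean_cache[name]):
        resid[:, -1, :] = clean_act[:, -1, :]
        return resid

    logits = model.run_with_hooks(corrupt, fwd_hooks=[(name, patch)])
    print(f"layer {layer:2d}: Paris-vs-Rome logit diff = {logit_diff(logits):+.2f}")

# Layers where the patch pushes the logit diff towards Paris are the ones
# carrying the "where is this landmark" information.
```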
There's this whole cottage industry building on these techniques. A challenge to anyone in the room, especially people who have a background in things like causal inference, is: what are the right, principled ways to apply these techniques? If we could write some textbook on how to use these surgical, causal interventions, what would that look like? There are all kinds of nuanced details, like you need some metric for whether your patch did anything. And it turns out that using metrics like probability is pretty bad, and metrics like log probability are much better, because if three different bits of the model each independently add one to the output logits, then each adds one to the log probs, but each on its own adds basically nothing to the probability. What other insights are there like that? What is the right approach here?
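(A quick numeric illustration of that metric point, with made-up numbers: three components each add +1 to the correct answer's logit; log-prob credits each of them roughly equally, while raw probability makes all of them look negligible.)

```python
import torch

# The correct answer starts at logit 0 against ten competing answers at logit 5;
# we then "turn on" up to three components, each adding +1 to the correct logit.
others = torch.full((10,), 5.0)

def prob_and_logprob(correct_logit):
    logits = torch.cat([torch.tensor([correct_logit]), others])
    logprobs = torch.log_softmax(logits, dim=0)
    return logprobs[0].exp().item(), logprobs[0].item()

for k in range(4):
    p, lp = prob_and_logprob(float(k))
    print(f"{k} components on: prob = {p:.4f}   log-prob = {lp:.2f}")
# Log-prob rises by about 1 per component, so each visibly matters; the raw
# probability stays tiny throughout, so a probability metric would wrongly
# suggest that none of the components does anything.
```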
Here's a case study: the “Interpretability in the Wild” paper. They took GPT2-small and asked, "How does GPT2-small do this grammatical task: When John and Mary went to the store, John gave the bag to." The next thing is Mary rather than John. This is a grammatical task. It comes up a fair bit. It's called indirect object identification, and GPT2-small genuinely cares about answering it. How? They did a bunch of causal interventions and found a 25-head circuit. And I think this is a really cool paper. I learned useful things, and I think that you can build on the techniques to just figure out how language models do a bunch of other tasks. In the section of my problems sequence on finding circuits in the wild, I give a bunch of problems and resources.
One particular way I think this is interesting is this idea that I call circuits as a validation set. We want to build better and stronger techniques that are not narrow or labor-intensive. We want to do things that actually work at scale. But it's so easy to have a technique that seems scalable, and it's just kind of BS. What do we do? And one vision I had for this is to have a few contexts where we really mechanistically understand the underlying circuits and then use these as a context where we can check whether our technique tells us what we think it's telling us. I mentioned this technique earlier with ablations. Set some part of the model to zero, and see what breaks.
One of my favorite bits of the "Interpretability in the Wild" paper was finding that this isn't actually that reliable. They found this phenomenon of backup heads, or redundancy. They found that if you take some crucial head and you delete it, there are heads in subsequent layers that will systematically change their behavior to compensate. A head goes from a bit positive to very positive. A head goes from very negative to a bit negative. This means that the overall effect is likely not that high.
A concrete question to the room is, what is the circuit behind this? How does the model implement this backup behavior? And how could we detect this in models so we could make things like ablations actually reliable techniques?
And I have this tutorial, which is not linked here but is linked in the slides, where I apply a bunch of basic techniques to speed-run finding the IOI circuit.
Another approach is automatic circuit discovery. This is a recent paper, which basically just does what the IOI paper did, but automatically. They kind of recovered the circuit. I'd love someone to just take the paper, try it on a bunch of tasks, and see what happens. They have an open-source code base. See if you can break the technique. See if you can make it faster. Overall takeaways from this: I like people looking for circuits in the wild. I think this is a really accessible thing that's pretty interesting. I want to have at least 20 mechanistically understood case studies. We currently have, I don't know, five, which is way better than where we used to be, but it's still not great. And causal interventions and resampling things are just really important.
I'm going to skip the next case study in the interest of time. The very brief highlight is: if you train a model to predict the next move in the board game "Othello," it learns an emergent model of the board. You can extract it with probes. It's f\*cking great.
So, trying to round off the talk - I have a sequence, [200 Concrete Open Problems in Mechanistic Interpretability](https://neelnanda.io/concrete-open-problems), and I have [this guide on how to get started in the field](https://neelnanda.io/getting-started). If anything I've discussed here spoke to you and is interesting, just go check those out. Poke around. I think you can get meaningful information in about 20 hours on, "Am I enjoying mech interp?"
If you check out my [SERI-MATS page](https://www.serimats.org/interpretability), applications are now closed, but my application task is to spend 10 to 20 hours trying to do mech interp research, and I give a bunch of advice on how to get started. Somehow, 70 people did this, which still confuses me. But I think it's a pretty good thing to try if you're like, "This sounds maybe fun."
I also have an [explainer](https://neelnanda.io/glossary) where I got kind of carried away and wrote about 33,000 words of exposition on mech interp. The field is full of jargon. When you learn about it, go look this stuff up in there.
I have this library, [TransformerLens](https://neelnanda.io/eag-whirlwind-slides). One of the biggest factors in how well research goes is whether you have the right tools to iterate fast and not shoot yourself in the foot. And, I kind of think it doesn't suck. If it does, please blame Joseph, who is the current maintainer.
And, you can see my slides [here](https://neelnanda.io/eag-whirlwind-slides). Thanks everyone for coming. |
d9729627-a4a1-400e-a62c-da03bd195f41 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Understanding the Meaning of Understanding
1 Preamble
-----------
In his famous tale, Las ruinas circulares ([[3](#bib.bib3)]), J. L. Borges depicts the efforts of a man
struggling to create, through his own dreams, a new life. The accomplishment is to
induce the dreamed subject to become an independent virtual son, able to live a life of his own,
disjointed from the dreams of his creator. The story ends with the stunning discovery that
the original man is himself the “materialization” of the dreams of another entity.
The dreamer and the dreamed subject belong to different “realities”, which in Borges’
fiction both end up being “virtual”. At first glance, there is no direct connection between these universes.
If ours is somehow the world of reality, the seemingly dissociated domain of our thoughts is usually
described as intangible.
One is tempted to attribute a superior level of abstraction to the second environment,
though this is not necessarily true, given the circularity of Borges’ arguments.
2 Motivations
--------------
Modern programming instructs machines to learn according to
apprehending processes that try to mimic those followed by humans. A recurrent question is what an inanimate
bunch of semiconductors may have understood of the lessons imparted; that is:
how does a machine “visualize” in its own “mind” the product of new discoveries? Does it
have knowledge of its knowledge? Can other machines know about its thinking “just by looking
in its eyes”?
These philosophical issues are akin to the thematic of automatic self-consciousness
([[6](#bib.bib6)], [[11](#bib.bib11)]), a subject that will be rediscussed at the end of section 5.
These few pages attempt to provide some answers. It will be argued that
abstract concepts do not follow from definitions or from direct algorithms, but might be
ruled by the same mechanism that allows “understanding” to be achieved at the very first level.
3 Discussion
-------------
With the supervised help of a teacher, a child can refine the notion of
color (red, for instance) through examples, by learning how to construct and assign a name to
specific classes of equivalence. Some notions could be actually innate (see, e.g., [[5](#bib.bib5)]),
however, the individuals of a “society”, already aware of
those primary concepts, play a fundamental role in the instructing process. By a similar training, a machine
can recognize if there is a cat on a table. Adding deeper and deeper layers of training,
the same machine can learn to recognize a black cat on a wooden table, lapping
milk from a cup. Despite the increasing complexity of the details, the above training sets
belong to the space of reality, while the final result (i.e., the knowledge) looks,
in some way, more “abstract”. The last observation is indeed incorrect from the technical
viewpoint, since both the images of the cat and the “neural” outcome of the brain of the instructed
machine are represented by sequences of the same type of bits. It is only our preconceived
intuition of reality that tends to assign different levels to these categories.
At this point, one may ask: how do we know if a child has clear in mind the abstract idea of “red”? The exam
is simply done by submitting to his/her attention one or many objects, and posing questions
about their colors. Neglecting possible shades of randomness, this analysis is fast and
secure, since it is based exactly on the same apparatus that generates the skill of distinguishing
colors.
Can we do the above check indirectly, thus without showing any object to the child? In some parts of the
child’s brain the activation of a certain concept (innate or acquired) has created suitable permanent connections.
The study of these new links may give an answer to our question without relying on the
practical experiment. In a very similar way, the machine’s concept “cat on table” resides in a
memory made of silicon-based circuitry, under the form of a peculiar distribution of data.
Interpreting these data may teach us if (and maybe what) the machine has formally understood.
Unfortunately, reading a single computer’s memory and trying to deduce something is like acquiring
the notion of red through a single test. Therefore, it is advisable
to play with a series of trained and untrained devices, in order to make comparisons and come
to conclusions. The path to be followed is the same one that inspired the initial training procedure used
for reality. This will be “supervised” until the learning machine acquires independence. Such an algorithm
does not require the submission of further cat pictures. It is an analysis made at a different
level, like the dream that exists in a more profound layer with respect to that of the dreamer.
The task of the learning machine is to distinguish, by comparison, the devices that “know” from
those that “do not know”, without studying “what they actually know”.
It has already been noticed, however, that all the levels of abstraction are similar from the technical viewpoint.
In truth, following Borges, the dreamer himself is the product of
the “imagination” of another dreamer, and, in practice, we have no means to distinguish a dream from a dream within a dream.
In other words, it would be possible to understand if a device has understood something,
through a procedure that does not require direct questions. Since this construction is made
with the help of another training history (at upper level), we cannot mathematically define (not even a posteriori)
what kind of configuration must be actually present in a prescribed instructed
device to be tested. This is not surprising, because it is similar to our inability to provide an absolute
definition of red without indicating an example. “Red” is a class of equivalence. In the same way, “having the knowledge
of red” is another class of equivalence; there is no official indication of the elements of this
class, but only examples of elements sharing the same properties. Our mind is modified as we add new
knowledge; however, this process is very subtle, so we cannot practically put into words the details of these changes
(to explain for instance in which area of our brain and under what form those data are present).
The reliability of a set of CPUs, programmed to face AI problems,
could in principle be verified aseptically, i.e., by plugging in the power supply but avoiding the use of any
peripherals.
Here the purpose is not just to check their plain functionality, but to test their supposed capability
to apply intelligence. For instance, this kind of training may teach a robot to make a decision not only on the basis of what other robots do,
but on what they are thinking (if the data of their central systems are available). The instruments to carry out this analysis are standard, although applied to a context
that looks upgraded. As pointed out in a recent article ([[12](#bib.bib12)]), even current codes for image understanding may fail when tested on
appropriately nasty examples. What may happen with abstract concepts is at this stage unpredictable, since
it is certainly not easy to predict how trustworthy
these outcomes will be; they are surely affected by large errors. A long phase of experimentation is then necessary. Since this paper is only aimed at the
description of the basic principles, we provide only a few guiding theoretical suggestions. Thus,
we do not discuss in this paper any concrete development process, leaving to the experts the implementation of the
ideas exposed here.
4 Remarks
----------
We proceed in this short disquisition with a warning. Violating the privacy of an individual to know whether he/she has properly
elaborated the concept of red, without posing direct questions, could be a first step toward
surreptitiously discriminating against people on the basis of
political inclinations, sexual attitudes or whatever an
organ of control wants to know. It is worthwhile to recall once again that here this kind of analysis is not directly constructed
on specified parameters, but on whether or not the individual belongs to classes
already constituted. The purpose of the machine is to classify an individual through a characterizing history
(websites visited, for example), without examining substantial real facts, but rather
the general activity, within the framework of a list of pre-judged individuals presenting a well-prescribed property.
Without an IQ test, a person could be recorded as an “intelligent fellow” because of his/her affinity to representatives
initially present in that category, with no explicit notion of the properties characterizing that class.
In fact, the setting up of the class itself is the result of a previous analysis carried out
on individuals declared “intelligent” or “not intelligent” in advance. Thus, the decision is taken as a consequence
of the “way of behaving”, and not of the capability of “acting intelligently” in the solution of a given problem;
this without the necessity of officially formalizing how an intelligent person is expected to behave.
It is evident that such a superstructure has ethical implications, so that it must be used wisely.
In terms of AI, the memory maps
of a new machine are compared, for example, with those belonging to the class of “intelligent machines”, and the response is given
without further external checks. Note that clear traces denoting “intelligence” may not be present in the preexisting data
(actually, we do not even know what these traces look like);
indeed, such files have just been selected on the basis of the claimed intelligence of the machines to which they belong.
In this last paragraph the focus is concentrated on another crucial observation.
It is possible to retain the idea of a specific color without naming it. Indeed, the
reason why the word “red” is the name of the class of all red-colored objects
comes from the necessity of comparing one’s discoveries with other people’s
(see the beginning of section 3).
The abstraction of the term is actually born the moment it becomes a product
of the collectivity. Thus, the interpretation of a thought comes naturally as a
result of the comparison of many minds, as also pointed out earlier in this paper.
5 Contextualization
--------------------
The topics touched in this brief exposition are certainly not new
([[10](#bib.bib10)]). They assume however
a wider relevance in this specific moment, in which the field of machine learning is
experiencing a positive period of growth, both in applications and complexity.
In a recent review paper ([[9](#bib.bib9)]), future developments in the field of deep learning are
addressed. Among the new disciplines, reinforcement learning is gaining popularity ([[17](#bib.bib17)]).
There, a progressive tuning of the policy is wisely applied to optimize the
so-called return.
For instance, in [[14](#bib.bib14)] and [[15](#bib.bib15)], this type of training has been implemented without human-assisted supervision,
and can represent a first attempt to guide a machine to acquire self-knowledge.
As pointed out several times in this paper, there is a hidden difficulty in going ahead with this construction,
i.e., the device will not be conscious of its own understanding until this “state”
is shared with other entities (I understand we both have understood, because we “feel” it
in the same way).
Efforts have been made to
associate increasingly complex concepts, with the help of ever more
sophisticated modules acting on data and accomplishing upgraded tasks ([[4](#bib.bib4)]).
In [[13](#bib.bib13)], the concern is to provide a robot with sophisticated skills,
in order for it to be able to recognize aspects of human behavior.
This implementation, obtained by assembling specialized modules, is a prerogative of a single machine through a process of identification at various stages, similar,
more or less, to what happens in deep learning. The design requires strong human assistance to be initialized, since
the building process translates into machine language the results of the experiences commonly
lived in reality. The goal is more similar to the effort of creating a sort of human clone, rather than letting the machine develop its own way of reasoning.
The above-mentioned approaches are then quite different
from the one discussed here, where “understanding” is not viewed as a “complexification”
of the bottom, but as a concentrate of the experiences of a community, which can be extracted
on the basis of the same principles ruling human connections with reality.
One may try to establish intersections between our proposal and the so called
Theory of Mind (ToM) (see e.g. [[8](#bib.bib8)], [[2](#bib.bib2)], [[20](#bib.bib20)]),
which represents an efficacious instrument of analysis in the sociological and psychological contexts.
In such a discipline, governed by empiricism, part of the effort is concentrated on the study of the various stages
of development, where humans acquire knowledge and understanding, through a systematic process named:
“learning the Theory of Mind”. Again, the translation of these achievements into machine language seems
to follow a path which is different from what is suggested here. In truth, we do not want to teach anything to a computer
or transfer our “vision” into it. Instead, the machine has to learn its own ToM.
For instance, a computer may autonomously build the concept of wellness, after
examining a series of people declared by a supervisor to be joyful or sad (maybe because they laugh, cry, or move their face in a bizarre manner).
The results of this training are, in general, not decipherable, as the machine in its own analysis could emphasize aspects
of the individuals that we do not even observe or imagine.
At the end of the process, we do not need to know the definition of “wellness” apprehended by the machine.
On the other hand, if we had a definition of wellness, we could have directly imparted it into the machine from the very beginning.
The learning process is satisfactory if somehow (with a margin of error) the machine has “understood” in its own way, and it is able to operate accordingly. There is no need to care about the format of these notions, if the machine can finally do the job,
for which it has been trained, in the proper way. At a higher level, future machines may not need instructions
from humans; they will talk, exchange information, and create new cerebral connections that have nothing in common with those
usually developed in humans. By following this approach,
in the technology of tomorrow, no human may be in a position to understand what computers actually have in mind.
Going into a more sophisticated area, a possible extension of these considerations can be applied to the field of consciousness,
though the approach may be judged a bit risky (or naive). Consider the phrase:
I know that I am conscious because I can share this opinion with other people, and not because
I can universally define such a feeling. Again, following this path, the term “self-knowledge”
applied to an individual turns out to be an element
of a class of equivalence; therefore, it should be studied within this frame of reference.
Thus, based on the material discussed in the present paper, in order to be built, abstraction requires both
“reality” and the (implicit or explicit) demand of a community;
hence it cannot be the consequence of the direct experience of a single individual.
We can make clear this idea with an example. Let us
suppose that a set of automata learn to play chess and refine their capabilities by continuously challenging each other.
Will they be conscious of being chess players? The answer is no, for the simple reason that there is no
utility in developing such knowledge, unless the machines “decide” together that there is
a necessity to build the class of “chess players”, with the purpose of distinguishing their ability from the state of
other existing machines that do not even know the basic rules of the game. Recognizing oneself to be part of that class is an
act of consciousness, although one may argue that this notion is rather weak in comparison to more
advanced forms of awareness. The convenience of giving origin to that specific class
may be due to some (external) form of gratification. To this purpose, a device may be supplied with ad hoc registers
aimed at classifying and publicly advertising the current level of capabilities and a certain degree of “satisfaction”.
Thus, an isolated single element cannot become conscious by
itself, because, according to our view, such a problem is ill-posed. In contrast to what has just been specified, current research in
Artificial Consciousness is aimed at extracting definitions and characterizations from natural human activities
([[21](#bib.bib21)], [[19](#bib.bib19)])
to be translated into computational models (see, e.g., [[1](#bib.bib1)], [[16](#bib.bib16)], [[7](#bib.bib7)], as well as
[[11](#bib.bib11)] for a thorough review of the major achievements).
Obviously, these approaches follow the reverse path. In [[18](#bib.bib18)] we can find the following statement:
“consciousness corresponds to the capacity to integrate information”. Though we are not moving here in the
direction indicated in that paper, we recognize a vague resemblance with some basic concepts.
6 Conclusions
--------------
The rationalization process of mathematical type, described so far, involves the classification of objects or abstract
entities into classes of equivalence. We renounce, however, giving a definition of the elements of these classes, though we know that each class contains elements
of the same nature. Classes can be associated with a name (a red hat belongs to both the classes “red” and “hats”). However, names come after the construction
of a class and are used to communicate to other individuals that something has been apprehended from nature and that
such experiences are waiting to be shared.
This is different from assuming we have a name (hence, a characterization) and collecting together all the entities under that name.
We cannot create the notion of “good guy” from scratch, but we can recognize a good guy among a multitude of fellows.
This is because our mind, with observations and the exchange of information, has generated the appropriate class of equivalence.
Classes can be generated at any level of abstraction and complexity. Regarding the viability of these ideas, we are
not able to investigate further here, so the task now passes to the experts.
We should, however, be careful when establishing parallels between our mind and the work of a machine. Human beings went through a
long process of evolution. The experiences of a single life mix with innate structures inherited over generations; therefore, these
last aspects should not be underestimated.
811058d4-011e-462f-967d-ff00e626b8b2 | trentmkelly/LessWrong-43k | LessWrong | [MLSN #9] Verifying large training runs, security risks from LLM access to APIs, why natural selection may favor AIs over humans
As part of a larger community building effort, CAIS is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on twitter here. We also have a new non-technical newsletter here.
----------------------------------------
Welcome to the 9th issue of the ML Safety Newsletter by the Center for AI Safety. In this edition, we cover:
Inspecting how language model predictions change across layers
A new benchmark for assessing tradeoffs between reward and morality
Improving adversarial robustness in NLP through prompting
A proposal for a mechanism to monitor and verify large training runs
Security threats posed by providing language models with access to external services
Why natural selection may favor AIs over humans
And much more...
----------------------------------------
We have a new safety newsletter. It’s more frequent, covers developments beyond technical papers, and is written for a broader audience.
Check it out here: AI Safety Newsletter.
Monitoring
Eliciting Latent Predictions from Transformers with the Tuned Lens
This figure compares the paper’s contribution, the tuned lens, with the “logit lens” (top) for GPT-Neo-2.7B. Each cell shows the top-1 token predicted by the model at the given layer and token index.
Despite incredible progress in language model capabilities in recent years, we still know very little about the inner workings of those models or how they arrive at their outputs. This paper builds on previous findings to determine how a language model’s predictions for the next token change across layers. The paper introduces a method called the tuned lens, which fits an affine transformation to the outputs of intermediate Transformer hidden layers, and then passes the result to the final unembedding matrix. The method allows for some ability to discern which layers contribute most to the determinat |
d7f8bc08-566e-480d-96b9-76ea411c6acc | trentmkelly/LessWrong-43k | LessWrong | Superforecasting the Origins of the Covid-19 Pandemic
The Good Judgement Project got some superforecasters to retrocast whether COVID started via zoonotic spillover or a lab leak. They in aggregate gave a 75% chance of zoonosis, but there was a range of views. GJP's executive summary is at the end of this linkpost.
Here is a link to the summary of the report on their substack, and here is a link to the full report (which is a total of 6 pages of content).
h/t John Halstead for drawing my attention to this.
----------------------------------------
Superforecasters assess that natural zoonosis is three times more likely to be the cause of the Covid-19 pandemic than either a biomedical research-related accident or some other process or mechanism. Asked to assign a probability to what caused the emergence of SARS-CoV-2 in human populations, more than 50 Superforecasters engaged in extensive online discussions starting on December 1, 2023.
In aggregate, they assessed that the pandemic was:
* 74% likely to have been caused by natural zoonosis (meaning that SARS-CoV-2 emerged in human populations as the result of the infection of a person with coronavirus directly from a naturally infected non-human animal);
* 25% likely to have been caused by a biomedical research-related accident (meaning that SARS-CoV-2 emerged in human populations as the result of the accidental infection of a laboratory worker with a natural coronavirus; or the accidental infection of researchers with a natural coronavirus during biomedical fieldwork; or the accidental infection of a laboratory worker with an engineered coronavirus; “research” includes civilian biomedical, biodefense, and bioweapons research);
* 1% likely to have been caused by some other process or mechanism (to include possibilities like the deliberate release of the virus into human populations, irrespective of whether it was an act in accordance with state policy, or the development of the virus due to drug resistance in humans).
The Superforecasters made more than 750 comm |
e70686be-ccb4-45e6-a784-65efc64510b8 | trentmkelly/LessWrong-43k | LessWrong | Dear Self; we need to talk about ambition
I keep seeing advice on ambition, aimed at people in college or early in their career, that would have been really bad for me at similar ages. Rather than contribute (more) to the list of people giving poorly universalized advice on ambition, I have written a letter to the one person I know my advice is right for: myself in the past.
The Letter
Dear Past Elizabeth,
Your life is, in some sense, a series of definitions of success.
First you’re in early school, and success is defined for you by a handful of adults. You go where they say, do the assignments they say, when they say, and doing well means meeting the goals they set for you. Even your hippie elementary school gives you very few choices about life. You get choices in your leisure activity, but that (as they have explained to you) is leisure and thus unimportant, and there’s no success or failure in it.
Then you get further in school, and the authorities give you some choice over the hoops you jump through. You can choose which book you write your report on or even what classes you take (within a predetermined set). This feels like freedom, but you’re in still a system someone else designed and set the win conditions for. You can fulfill a college distribution requirement with any history class at all- but you are going to take one, and the professor is the one determining if you succeeded at it.
More insidiously, you’ll like it. Creating your own definition of success feels scary; enacting it feels impossible. The fact that school lays out neat little hoops for you to jump through is a feature.
Work (you’ll be a programmer) is where things get screwy. Programming contains multiple definitions of success (manager, principal, freelancing, development, testing, bigtech, start-up, money-maxing, altruistic projects…), and multiple ways to go about them. If your goals lie outside of programming altogether (art, parenting, travel..), it’s relatively easy to work out a way to fund it via programming while |
a8786fb9-55a2-401e-8d9b-0e05831d75a3 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "I.This was a triumphI'm making a note here, huge successNo, seriously, it was awful. I deleted my blog of 1,557 posts. I wanted to protect my privacy, but I ended up with articles about me in New Yorker, Reason, and The Daily Beast. I wanted to protect my anonymity, but I Streisand-Effected myself, and a bunch of trolls went around posting my real name everywhere they could find. I wanted to avoid losing my day job, but ended up quitting so they wouldn't be affected by the fallout. I lost a five-digit sum in advertising and Patreon fees. I accidentally sent about three hundred emails to each of five thousand people in the process of trying to put my blog back up.I had, not to mince words about it, a really weird year.The first post on Scott Alexander's new blog on Substack, Astral Codex Ten."
What you see above is a segment from a LessWrong user's post.
Can you break this down into the top 5 most important points for me.
1. The author experienced a significant personal and professional setback due to a failed attempt to protect their privacy and anonymity, which resulted in the deletion of their blog consisting of 1,557 posts.
2. The situation escalated to the point where the author's real name was revealed and spread by trolls, contrary to their initial intention of maintaining anonymity.
3. In an effort to protect their day job from the fallout of the situation, the author ended up quitting, which was an unintended and undesired outcome.
4. The author incurred substantial financial losses, including advertising revenue and Patreon fees, which amounted to a five-digit sum.
5. The author experienced further complications while trying to restore their blog, including accidentally sending a large number of emails to a substantial number of people.
What were the potential preventive measures the author could have taken to avoid such a drastic fallout, considering their initial intention to protect their privacy and anonymity?
1. Enhanced Digital Security: The author could have implemented stronger digital security measures to protect their identity and content. This includes using anonymous browsing tools, VPNs, and encryption for their communications and posts.
2. Legal Consultation: Consulting with a legal expert about the potential risks and protections available could have helped the author understand the potential consequences and take preemptive action.
3. Careful Content Management: The author could have been more cautious about the type of information shared on the blog. Sharing less personal or identifiable information could have reduced the risk of exposure.
4. Contingency Planning: The author could have had a contingency plan in place for potential breaches of privacy, such as a prepared response or a plan for rapid content removal or transfer.
5. Professional PR Assistance: Engaging a professional in public relations could have helped manage the fallout, control the narrative, and mitigate some of the damage to the author's reputation.
Considering the suggested preventive measures, how could the author strategically implement these in a sequence that would provide the most effective protection of their privacy and anonymity, while also maintaining the integrity of their blog content and audience engagement?
1. Legal Consultation: The first step should be consulting with a legal expert. This can help the author understand their rights, potential risks, and the legal measures they can take to protect their privacy and anonymity. This can also help in understanding the legalities of content creation and sharing, especially if the content is sensitive or controversial.
2. Enhanced Digital Security: Once they have a legal understanding, the author should enhance their digital security. This includes using anonymous browsing tools, VPNs, and encryption for their communications and posts. This step is crucial in maintaining the anonymity of the author.
3. Careful Content Management: With legal and digital protection in place, the author should be mindful of the content they share. They should avoid sharing personal or identifiable information that could potentially risk their anonymity. This includes careful management of how they interact with their audience to maintain engagement without compromising their privacy.
4. Professional PR Assistance: Engaging a PR professional can help in creating a robust public image while maintaining anonymity. They can guide the author on how to interact with the audience, manage any potential fallout, and control the narrative if any privacy breaches occur.
5. Contingency Planning: Lastly, having a contingency plan is crucial. This plan should detail the steps to be taken if a privacy breach occurs. It can include measures for rapid content removal or transfer, a prepared response to the situation, and a strategy to manage audience reactions and maintain the integrity of the blog. |
924bcbf5-0c2c-4d28-853c-d7f0f41f5bed | trentmkelly/LessWrong-43k | LessWrong | Handling schemers if shutdown is not an option
What if getting strong evidence of scheming isn't the end of your scheming problems, but merely the middle?
In most of our research and writing on AI control, we’ve emphasized the following situation:
* The AI developer is deploying a model that they think might be scheming, but they aren’t sure.
* The objective of the safety team is to ensure that if the model is scheming, it will be caught before successfully causing a catastrophe.
For example, in our original paper on AI control, the methodology we used for evaluating safety protocols implies that once the developer is confident that the model is a schemer, they’re totally fine with undeploying the untrusted model.[1][2]
But that seems kind of unlikely. Even if AI companies find strong evidence that their AIs are trying to escape, they won’t necessarily stop deploying them. This might be justified: even a responsible actor might think that deploying a known-scheming model is worth the risks, because use of that model will allow them to drastically reduce the risk currently being imposed by the deployment of other AIs. And it might also happen even if it’s unjustified, as I discussed here (though in this case, it’s plausible that the developer will at least drastically increase the proportion of company resources going to mitigating scheming risk).[3]
Either way, it seems important to think through how mitigating scheming risk looks under the assumption that getting evidence that your AI is scheming doesn’t suffice for solving your problems, including the extreme case where getting evidence that your AI is scheming has no direct value. We must explicitly consider scenarios where developers knowingly deploy scheming models.
We’ve previously written about techniques that you might use once you’ve caught your AIs red-handed trying to subvert your safety measures; catching your AIs red-handed is a particularly important mechanism by which you might become confident your model is a schemer. In this post, I’ll b |
9a9ddc18-7563-4f23-883a-9f71e4769afc | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Scaling and Sustaining Standards: A Case Study on the Basel Accords
In response to [Holden Karnofsky's call for case studies on standards](https://forum.effectivealtruism.org/posts/idrBxfsHkYeTtpm2q/seeking-paid-case-studies-on-standards), I've spent the last six weeks or so researching **the Basel Accords** - the premier set of standards for international banks that have become deeply embedded into banking regulation across the globe. My work culminated in [this case study](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit?usp=sharing), written primarily to inform potential artificial intelligence and biosecurity standards. I've laid out the key takeaways below.
---
Key Takeaways
=============
### [**1. What are the Basel Accords?**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.qrer0knfausf)
* The Basel Accords are a set of banking standards (Basel [I](https://www.bis.org/publ/bcbs04a.pdf), [II](https://www.bis.org/publ/bcbs107.pdf), and [III](https://www.bis.org/publ/bcbs188.htm)) set by the [Basel Committee on Banking Supervision (BCBS)](https://www.bis.org/bcbs/). The current set of standards, a continuation of Basel III and subsequent updates, is known as [the Basel Framework](https://www.bis.org/basel_framework/index.htm?m=2697).
* They are aimed at dealing with **credit, market, capital, and operational risks**that affect the banking sector. To deal with these risks, the Basel Accords set [conformance standards](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.5qpsi1pcbg77) targeted at ‘[internationally active banks](https://www.bis.org/bcbs/basel3.htm)’.
* In particular, [Basel I (1988-2004)](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.t2zh7ayn2kh0) laid out a framework for risk-weighted capital requirements for banks; [Basel II (2004-2010)](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.e7nk7d6u2oz7) expanded to market and operational risk, and [Basel III (2010 - present)](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.92szz4ag4nw1) kicked off an expansion to Basel II culminating in a [complete regulatory framework](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.7az2kof6qoon) that includes different measures and requirements of capital risk for different contexts; guidance on managing operational risk at banks, and detailed rules on bank supervision.
### [**2. How are the Basel Accords implemented today?**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.n38bnwaujblt)
* [The Basel Committee on Banking Supervision](https://www.bis.org/bcbs/index.htm) (BCBS) is one of several [committees of the Bank for International Settlements](https://www.bis.org/stability.htm) (formed of (now) [63 central banks and monetary authorities](https://www.bis.org/about/member_cb.htm?m=2601)). Its mandate is concerned with developing global regulatory standards for banks. The BIS itself was formed in 1930; the BCBS comprises [45 voluntary members from 28 jurisdictions](https://www.bis.org/bcbs/membership.htm).
* [The structure of BCBS](https://www.bis.org/bcbs/organ_and_gov.htm)comprises a chair that governs the activity of the Basel Committee, standing groups with specialised functions, and a Secretariat that provides administrative and analytical support to the Committee and its numerous working groups. The Secretariat also organises the drafting of standards. The BCBS reports to the Group of Central Bank Governors and Heads of Supervision (GHOS), which oversees and endorses major BCBS decisions.
* [Its policy development is a messy process](https://www.bis.org/about/basel_process.htm) where many members of the BCBS itself consult widely with stakeholders for years to come to a consensus on priorities and standards. This consultation is extremely wide-ranging: involving regular meetings; policy proposals sought from non-members; working groups and task forces that take on specific research questions and aspects of standard development; regular communication with governments and central banks, and comments from the public. Consensus-forming is the central function of the Basel Committee: it maintains a flexible structure and process, ascertaining priorities and next steps as it continues to consult with its member states and its wider set of stakeholders, culminating in the publication of reports, updates to standards, and new standards when needed.
* Basel members agree to adopt the Basel Accords, which they typically do, given all members are closely involved in developing standards. However, the Basel Accords have been adopted in over 100 jurisdictions (albeit to varying degrees of compliance) and even often applied to domestic banks, given it is now the norm in international financial regulation.
* Basel members are responsible for implementing the Basel Accords, usually done via national legislation. States are responsible for enforcement (e.g. whether audits are held).
### [**3. Why and how did the Basel Accords emerge?**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.8ake7ynmerhy)
* Efforts to increase globalisation after World War II, yet consciousness about the economic risks involved ([particularly given several banking crises had already occurred](https://en.wikipedia.org/wiki/List_of_banking_crises)) led to increased international trade and several international organisations that attempted to ensure international stability - namely the[**IMF G10**](https://en.wikipedia.org/wiki/Group_of_Ten_(IMF)) and the[**European Economic Community**](https://en.wikipedia.org/wiki/European_Economic_Community#:~:text=The%20European%20Economic%20Community%20(EEC,formed%20European%20Union%20in%201993.).
* International trade between Europe and the US led to large amounts of US dollars held in foreign accounts (the ‘Eurodollar’), and this market caused concern amongst central bankers and the aforementioned institutions. The IMF initially created the **Euro-currency Standing Committee** in April 1971, and a collection of central bankers at EEC organisations formed the informal **Groupe de Contact**in 1972 to discuss international regulation.
* In 1973, the [Yom Kippur War](https://en.wikipedia.org/wiki/Yom_Kippur_War) increased oil prices and spurred central banks to figure out how to recycle the flow of funds into their economies. In 1974, [Bankhaus Herstatt was a German bank that collapsed](https://www.citeco.fr/10000-years-history-economics/contemporary-world/collapse-of-the-herstatt-bank-in-germany-and-creation-of-the-basel-committee) due to making bad bets on the US dollar, and this led to large US losses due to regulatory inconsistencies resulting in US banks not receiving back dollars for Deutschmarks they had sold to Bankhaus Herstatt. The collapse of Bankhaus Herstatt triggered awareness of a need for greater international regulation. Through the **Bank for International Settlements**, individuals at the Euro-currency Standing Committee, Groupe de Contact, and central banks formed the **Basel Committee on Banking Supervision**, consisting of the G10 countries.
* The Basel Committee began with [principles for bank supervision](https://www.bis.org/publ/bcbs00a.htm). But increased global interest in capital adequacy and pressure from the US (and to a lesser extent the UK) for regulatory standardisation, particularly due to fears of Japanese banks remaining more profitable by having more lax capital adequacy standards, led to the emergence of Basel I - a capital adequacy standard adopted by the member states of the Basel Committee.
* Basel II emerged in 2004 after a call for a new standard in 1999, given Basel I was insufficient. Basel III emerged in 2010 after the financial crisis, also resulting in an expansion in membership and more widespread adoption.
### [**4. What can we learn from the Basel Accords?**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.x66izxm9aw83)
My takeaways (and expert opinion) on lessons from the Basel Accords for implementing a Basel-like international standard include:
1. [**Consensus is key**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.y37mghqfnofc)**:** continuous consensus-forming, relationship-building, and widespread consultation is the cornerstone of how the Basel Accords became an international norm of financial regulation, even given the contestability of financial regulation.
2. [**Consensus takes time**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.y37mghqfnofc)**:** achieving the level of consensus of the Basel Accords takes many years.
3. [**Consensus is a continuous process**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.y37mghqfnofc)**:**the Basel Committee maintain regular lines of communication to stay highly sensitive to shifting incentives, thinking, and priorities.
4. [**A clearly-defined problem space was important**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.zaxgi6eid0qm)**:** consensus was possible because there was agreement about the importance of avoiding a banking crisis (and clarity on what one looked like).
5. [**Earlier market failures were pivotal**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.zaxgi6eid0qm) for creating this consensus and spurring action.
6. [**Alignment with commercial incentives was important**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.zaxgi6eid0qm) but can be (and was) shaped.
7. [**Market pressure to comply was not merely implicit**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.zaxgi6eid0qm) but was widely perceived.
8. [**A flexible organisational structure**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn) and dynamic processes are important.
9. [**Institutional backing mattered**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn)**:** the Basel Accords were built on the BIS, IMF, and EEC.
10. [**A dedicated operational unit**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn) (the Secretariat) was vital for structuring a messy process.
11. [**Clear and concerted leadership is important**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn)**:** initial adoption was driven by a few individuals who owned the process.
12. [**Power matters**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn): US interest in capital adequacy standards was important for creating pressure that ultimately led to the emergence and adoption of Basel I.
13. [**Incessant and indirect communication is key**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn)**:** the BCBS also runs events, consultations, and support outside of its standard-setting to ensure continued relationship-forming.
14. [**Maximising transparency**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn) was valuable for trust.
15. [**The structure of the Basel Committee**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.oej8wtqqmjhn) means little work is done after the publication of a standard to ensure compliance, given members already commit and are involved in the consultative process.
16. [**Factoring scaling and compliance considerations into the design of the standard itself is important**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.92ij6ilt67cj), and necessarily occurs given the extensive consultation that goes into standard design.
17. [**Complexity brings complex trade-offs**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.92ij6ilt67cj)**:** it’s not straightforward that complexity and compliance are inversely proportional. Basel I was too simple to mitigate financial risk effectively, and Basel III commands compliance given the ability of states to pick which aspects are most relevant to them.
18. [**Standards don’t have to be perfect**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.92ij6ilt67cj) when developed in an iterative and high-feedback way.
19. [**Targeting the most important players made a difference**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7)**:** the G10 was a small group that represented most of the world’s banking capital.
20. [**Involving everyone is a core part of The Basel Process**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7): anyone who might be affected or has a stake in the Basel Accords was consulted or had avenues to communicate with the Committee.
21. [**Relationship-forming was essential**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7)**:** leveraging a wide set of stakeholders required a wide network and continuous relationship forming with central banks and organisations worldwide.
22. [**Iterative development was important**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7): Basel I started simple and gradually built up until today.
23. [**Key causes of resistance**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7) are worries about economic impact and restricting sovereignty.
24. [**Basel Committee communications**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7) routinely affirm the importance of financial risk mitigation.
25. [**Providing widespread support**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7) for implementation helped ensure widespread adoption of a complex standard, albeit in a much more resource-intensive way.
26. [**Some specific levers for organisational competence**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7) include being very well-prepped, well-connected, and well-read.
27. [**Domestic consensus is key**](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.b818ornpsgp7) for national adoption and influencing the international agenda.
[Some speculative and lower-confidence takes relevant to AI and biosecurity standards include](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.2f858xr0gpzb):
1. I’m not excited about prioritising near-term efforts to develop Basel-like international standards without much more background work done to establish consensus about AI and biosecurity issues.
2. I wouldn’t be excited about AI/bio standards that aren’t aligned with commercial incentives, but I think there are opportunities to shape these incentives as the Basel Accords did.
3. Given how contestable AI and biosecurity regulation will likely be, a consensus-driven approach could be important.
4. A consensus-driven approach may have limited scope for AI and bio, however, because there are additional risks from malicious agents not found in financial risk mitigation.
5. Standards can take quite a lot of complexity, although the best-case scenario may look like partial compliance. This could affect how we think about AI/bio standards.
6. In general, I would expect an effort to create Basel-like standards to be less successful in expectation unless there are numerous (non-existential) early warnings, given how important this was for Basel.
7. Part of the theory of change for AI and bio x-risk research should be academic consensus-forming.
8. Potential standard-setters and key stakeholders could probably be doing a lot more relationship-forming now if the ambition is a Basel-like international standard.
9. We shouldn’t rely on standards to delay AI timelines, given how long consensus-forming takes.
10. I think there’s an exciting opportunity right now to develop better cooperation between the AI labs.
11. I think voluntary standards right now could be very useful for AI, primarily for just establishing the personnel, agenda, and infrastructure for future standards.
12. For biosecurity, I think [IBBIS](https://ibbis.bio/) is in a similar position to the pre-1988 BCBS, and could potentially learn from the details of the history of the Basel Accords.
---
The full report remains fairly scrappy, and I'd encourage further feedback and thoughts - particularly corrections of any factual errors. I'll likely factor these into some minor updates in the future, and I'm also keen to continue adding lessons. However, I didn't want perfection to be the enemy of the good and felt this was ready enough for public reading. More generally, feel free to reach out for further questions and/or comments.
See [the appendices](https://docs.google.com/document/d/1-Z8aTh-G1rrYtBfV1PDRk2RqMl30wy88RiaqBPwY8gY/edit#heading=h.ygja2xfpbpnd) for some conceptual background to standards themselves; reasons I was excited about the Basel Accords as a case study for AI and biosecurity; an overview of some of the successes and criticisms of the Basel Accords; an overview of why the Basel Accords have achieved widespread adoption amongst non-members; an overview of what I consider the key strategic questions for a standard-setter to consider, and some further explanation of my tentative takeaways for AI and bio.
**Acknowledgements:** thank you to Open Philanthropy for sponsoring this project; Holden Karnofsky for providing some early feedback and direction; all the individuals I interviewed for this project for their invaluable input; and Rachel Edwards and Kit Harris for their assistance with connections and thoughts! |
005acf65-2787-4690-9c6d-3647168a8459 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text
*Content format: Commented chat screenshots, and then some thoughts on their implications.*
*Epistemic status: Exploratory*
Introduction
============
[Code-mixing](https://en.wikipedia.org/wiki/Code-mixing) is the ad-hoc mixing of two or more linguistic varieties (such as languages or dialects) in the same communicative instance. An example of code-mixing would be a sentence written with words both in Spanish and in English, or with novel words made by combining Spanish and English roots, suffixes or prefixes.
Here, I document a series of experiments designed to test ChatGPT's capabilities for understanding and generating code-mixed text. I tested it with:
* English + Spanish ([Spanglish](https://en.wikipedia.org/wiki/Spanglish))
* English + French ([Franglais](https://en.wikipedia.org/wiki/Franglais))
* English + Spanish + French (Frenspanglish)
The first two are well known phenomena in multilingual communities such as those in Quebec and the southwestern US, while the third is quite obscure and as far as I know does not occur naturally on a large scale. Franglais is usually regarded as the insertion of English features into French, while Spanglish is usually regarded as a more symmetrical phenomenon.
This is why I decided to prompt for Franglais understanding in French, and for Spanglish and Frenspanglish understanding in English. When prompting ChatGPT to translate into a code-mixed language, I prompt in the same language as the text to be translated.
I find that ChatGPT exhibits impressive abilities to understand text written in such code-mixes. However, despite repeated attempts at prompt engineering, I have not been able to make ChatGPT generate proper code-mixed text.
Understanding code-mixed text
=============================
Understanding Spanglish (English + Spanish)
-------------------------------------------
ChatGPT understands Spanglish, and can translate from it into English.
Understanding Franglais (English + French)
------------------------------------------
ChatGPT understands Franglais, and can translate from it into French.
Understanding Frenspanglish (English + Spanish + French)
--------------------------------------------------------
This took a bit of prompt engineering.
ChatGPT can understand the meaning of this mixed expression with two apostrophic contractions. However, it wrongly explains that it is a mixture of only French and English, even replacing the French-Spanish fragment "J'puedo" with the French-only "J'peux".
Failing to generate code-mixed text
===================================
Note: All of these experiments were performed on a chat instance where ChatGPT had already received an explanation of the relevant code-mix.
Failing to generate Spanglish (English + Spanish)
-------------------------------------------------
Asked to translate an English sentence into Spanglish, ChatGPT translates into full Spanish. When the languages are reversed, the same thing happens. Example in appendix 1.
Failing to generate Franglais (English + French)
------------------------------------------------
Asked to translate an English sentence into Franglais, ChatGPT translates into full French. When the languages are reversed, the same thing happens. Example in appendix 1.
Failing to generate Frenspanglish (English + Spanish + French)
--------------------------------------------------------------
Asked to translate back into Frenspanglish the very same message it had already translated from Frenspanglish into English, ChatGPT translates it into Spanish, and claims it has performed the request successfully. It then explains its translation piece by piece, from time to time claiming that fragments written fully in Spanish were in fact code-mixed.
Observations
============
* ChatGPT's success in understanding code-mixed text demonstrates cross-lingual capabilities beyond translation.
* (Prompt-engineered) ChatGPT understands requests for it to produce code-mixed text, in the sense that it knows it has to claim that the response produced is a mix of different languages. However, it is not capable of fulfilling such requests.
+ Even then, it claims it does.
+ It could be the case that fine-tuning on code-mixed text would add this missing generative capability.
+ This is an example of a capabilities asymmetry between ChatGPT's prompt parsing and response generation. It remains to be seen whether this is observed in other task subdomains, and if so whether it is in the same direction. If both were true, that would imply ChatGPT's parsing capabilities are broader than its generation ones.
Three tame hypotheses
=====================
* These are three, not mutually contradictory, hypotheses that could explain the observed capabilities asymmetry:
+ Code-mixed text is scarcely present in the training data of the base model, leading to the generation of code-mixed text to be a poorly-rewarded strategy.
+ During the fine-tuning of ChatGPT, human evaluators punished occurrences of code-mixing.
+ Knowing the source languages separately is sufficient for understanding their code-mix, but for generating it, additional knowledge is required.
A speculative model
===================
Writing can be modeled as the iterative process of constructing a sequence of words one by one. When writing, there are two main sources contributing to your decision of which word to choose as the next one to write.
* Your accumulated intuitions about how words generally succeed one another. Sometimes, the writing process is almost automatic, with each word effortlessly flowing from the previous ones.
* Your own pre-verbal ideas. Sometimes, finding the right sequence of words to convey a nuanced concept requires a deliberate search effort.
This is pretty much the distinction between type 1 (fast, intuitive) and type 2 (slow, deliberate) thinking.
ChatGPT's base model was trained to predict the next token of text in a sequence. This is analogous to the type 1 writing method in humans I just outlined. Anecdotally, it seems like people who have read more during their lives are better at it. Likewise, ChatGPT has been trained on an immense quantity of text, and [is superhuman at next-token prediction](https://www.alignmentforum.org/posts/htrZrxduciZ5QaCjw/language-models-seem-to-be-much-better-than-humans-at-next). Code-mixed text, however, is a rare occurrence, and as such there is insufficient data for either humans or language models to be able to generate it using only type 1 processes. Humans can get around this problem by using type 2 reasoning[[1]](#fnice2m4q2j5). Language models, however, are not capable of type 2 reasoning (or an analogous artificial process), and as such can't generate code-mixed text.
Appendix 1: additional basic examples of generative failure
===========================================================
Asked to translate a Spanish sentence into Spanglish, ChatGPT translates into full English.
Asked to translate a French sentence into Franglais, ChatGPT translates into full English.
Appendix 2: Trying really hard to get ChatGPT to produce Spanglish and (almost) entirely failing
================================================================================================
I hazard the guess that Spanglish is the most represented code-mixed language in ChatGPT's training corpus, so I decided to try to focus my efforts here when trying to get Chat-GPT to generate a code-mixed output. All of these examples were zero-shot. That is, they were the first message in a new chat instance. There were many more attempts, here I show only the most informative ones.
Explicitly asking for only half of the words in a text in Spanish to be translated into English results in a full translation. The same thing happens when the languages are reversed.
Inventing a situation where a fictional speaker is forced to speak Spanglish due to poor proficiency largely does not work. However, ChatGPT does sometimes manage to mix Spanish and English, albeit in a way that doesn't qualify as Spanglish. This particular example demonstrates one of the two types of mixture I have observed: [placeholder brackets] in English amid Spanish text, such as one might observe in a travel phrasebook.
This is the other mixture type I found: repeating the same message first in Spanish and then in English.
Notes
=====
* This work was done on ChatGPT December 15 version. Results may differ on future versions.
* While writing this post, I had a serendipitous online encounter with an OpenAI employee[[2]](#fnytep65q1syb) and asked them whether they knew about the phenomenon described in this post. They did not, though it didn't surprise them, given known previous issues around difficulties getting ChatGPT to produce non-English outputs. Someone else at OpenAI might have identified it, though.
* Transcriptions of the prompts used are available [here](https://docs.google.com/document/d/1l5CntPI88iuecx-KqJrQSEufqaEDm1djplKBPvhwr1Y/edit?usp=sharing).
*Thanks to Agustín Covarrubias for feedback on an earlier version of this post, and to the anonymous OpenAI employee for an informal discussion on the matter.*
1. **[^](#fnrefice2m4q2j5)**And writing code-mixed text does feel to me as slow and deliberate as an almost entirely type 2 activity.
2. **[^](#fnrefytep65q1syb)**They asked me to not identify them personally in this post. |
c08a0ace-3f71-4e72-996e-71cccb5606c4 | trentmkelly/LessWrong-43k | LessWrong | Causal graphs and counterfactuals
Problem solved: Found what I was looking for in: An Axiomatic Characterization Causal Counterfactuals, thanks to Evan Lloyd.
Basically, making every endogenous variable a deterministic function of the exogenous variables and of the other endogenous variables, and pushing all the stochasticity into the exogenous variables.
Old post:
A problem that's come up with my definitions of stratification.
Consider a very simple causal graph:
In this setting, A and B are both booleans, and A=B with 75% probability (independently of whether A=0 or A=1).
I now want to compute the counterfactual: suppose I assume that B=0 when A=0. What would happen if A=1 instead?
The problem is that P(B|A) seems insufficient to solve this. Let's imagine the process that outputs B as a probabilistic mix of functions, each of which takes the value of A and outputs that of B. There are four natural functions here:
* f0(x) = 0
* f1(x) = 1
* f2(x) = x
* f3(x) = 1-x
Then one way of modelling the causal graph is as a mix 0.75f2 + 0.25f3. In that case, knowing that B=0 when A=0 implies that P(f2)=1, so if A=1, we know that B=1.
But we could instead model the causal graph as 0.5f2+0.25f1+0.25f0. In that case, knowing that B=0 when A=0 implies that P(f2)=2/3 and P(f0)=1/3. So if A=1, B=1 with probability 2/3 and B=0 with probability 1/3.
And we can design the node B, physically, to be one or another of the two distributions over functions or anything in between (the general formula is (0.5+x)f2 + x·f3 + (0.25−x)f1 + (0.25−x)f0 for 0 ≤ x ≤ 0.25). But it seems that the causal graph does not capture that.
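To make the ambiguity concrete, here is a minimal Python sketch (my own illustration, reusing the four functions above) that conditions each mixture on observing B=0 when A=0 and then evaluates the counterfactual value of B when A=1:
```python
# Each model is a distribution over deterministic functions f: A -> B.
# Conditioning on an observation keeps only the consistent functions;
# the counterfactual is then evaluated under the reweighted mixture.
f0 = lambda x: 0      # constant 0
f1 = lambda x: 1      # constant 1
f2 = lambda x: x      # copy A
f3 = lambda x: 1 - x  # negate A

def counterfactual(mixture, observed_a, observed_b, new_a):
    """P(B=1 if A were new_a), given the observation f(observed_a) == observed_b."""
    posterior = {f: p for f, p in mixture.items() if f(observed_a) == observed_b}
    z = sum(posterior.values())
    return sum(p / z for f, p in posterior.items() if f(new_a) == 1)

model_1 = {f2: 0.75, f3: 0.25}           # 0.75*f2 + 0.25*f3
model_2 = {f2: 0.5, f1: 0.25, f0: 0.25}  # 0.5*f2 + 0.25*f1 + 0.25*f0
print(counterfactual(model_1, 0, 0, 1))  # 1.0
print(counterfactual(model_2, 0, 0, 1))  # 0.666...
```
Both mixtures reproduce A=B with 75% probability, so they are indistinguishable at the level of P(B|A), yet they disagree about the counterfactual.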
Owain Evans has said that Pearl has papers covering these kinds of situations, but I haven't been able to find them. Does anyone know any publications on the subject? |
061a739e-3d61-4e6c-83d2-dcdf04da99c3 | trentmkelly/LessWrong-43k | LessWrong | Thoughts on Francois Chollet's belief that LLMs are far away from AGI?
Dwarkesh had a podcast recently with Francois Chollet (creator of Keras)
He seems fairly skeptical we are anywhere near AGI with LLMs. He mostly bases his intuition that LLMs fail on OOD tasks and don't seem to be good at solving simple abstract reasoning problems he calls the ARC challenge. It seems he thinks system 2 thinking will be a much harder unlock than people think and that scaling LLMs will go nowhere. In fact he goes so far as to say the scaling maximalists have set back AGI progress by 5-10 years. Current LLMs to him are just simply information retrieval databases.
He, along with the CEO of Zapier, have launched a 1 million dollar prize to beating the ARC bench marks, which are apparently hard for LLMs. I didn't believe it at first, given how easy they seem, but barely any progress has been made on the ARC bench marks in the last 4 years. In retrospect, it's odd that so many existing benchmarks rely heavily on memorized knowledge, and the ARC results check out with LLMs being bad at playing sudoku (so maybe not that surprising).
This seems to be in contradiction with what people on this site generally think. Is the disagreement mainly that system 2 thinking will be a relatively fast unlock (this is my take at least[1]) whereas Francois thinks it will take a long time?
Or does it go deeper?
1. ^
Personally, my intuition is that LLMs are world modelers and that system 2 thinking will be a relatively simple unlock as they get better at modeling the world. |
23600e8c-e1be-4322-9594-4f7cad33153d | trentmkelly/LessWrong-43k | LessWrong | Natural Abstraction: Convergent Preferences Over Information Structures
The natural abstraction hypothesis claims (in part) that a wide variety of agents will learn to use the same abstractions and concepts to reason about the world.
What's the simplest possible setting where we can state something like this formally? Under what conditions is it true?
One necessary condition to say that agents use 'the same abstractions' is that their decisions depend on the same coarse-grained information about the environment. Motivated by this, we consider a setting where an agent needs to decide which information to pass through an information bottleneck, and ask: when do different utility functions prefer to preserve the same information?
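As a toy illustration of this setting (my own sketch, with made-up payoffs; not code from the results below), one can brute-force which coarse-graining each utility function prefers in a tiny finite environment and check whether two utilities agree:
```python
import itertools

def best_bottleneck(n_states, n_codes, utility, n_actions):
    """Return the deterministic bottleneck map this utility prefers.

    A bottleneck assigns each environment state one of n_codes messages;
    the downstream agent sees only the message and picks one action per
    message. utility[s][a] is the payoff of action a in state s.
    """
    best_value, best_map = -float("inf"), None
    for m in itertools.product(range(n_codes), repeat=n_states):
        # Optimal decoder: the best single action for each message.
        value = sum(
            max(sum(utility[s][a] for s in range(n_states) if m[s] == code)
                for a in range(n_actions))
            for code in range(n_codes))
        if value > best_value:
            best_value, best_map = value, m
    return best_map

# Two utilities that only care about state parity prefer the same
# 2-message bottleneck -- a (tiny) shared abstraction.
u1 = [[1, 0], [0, 1], [1, 0], [0, 1]]
u2 = [[2, 0], [0, 3], [5, 0], [0, 1]]
print(best_bottleneck(4, 2, u1, 2))  # (0, 1, 0, 1)
print(best_bottleneck(4, 2, u2, 2))  # (0, 1, 0, 1)
```
Here both utilities prefer the parity partition; with other payoff matrices they can prefer orthogonal partitions, which is the kind of failure discussed below.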
As we develop this setting, we:
* unify and extend the Blackwell Informativeness and Good(er) Regulator theorems
* investigate the minimum structure needed for strong natural abstractions (in this sense) to exist
* give examples where strong natural abstractions a) exist b) don't exist c) exist at some levels of capability, but not at others - including cases where the natural abstractions at different sizes are orthogonal
* demonstrate that (very) weak natural abstractions exist under weak conditions
* formally connect natural abstractions to instrumental power-seeking incentives, and characterize them in a way inspired by Turner's power-seeking theorems.
(As far as I can tell, apart from the obvious cited Theorems, these results are original[1])
Whereas Wentworth's abstraction work has focused on what natural abstractions might look like, and why we might expect them to exist, this work focuses on exploring what it would mean for something to be a natural abstraction in the first place. Rather than an attempt to answer the natural abstraction hypothesis, it is more an attempt to understand the question.
1. Base setting
Any agent exists within an environment that is bigger than itself. It therefore necessarily maps a large number of potential environment states to a smaller number of internal states. |
53340a46-30d6-437e-9fe2-0f1baf6f066e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Virtue ethics and why the rationalist community might care about it.
This post introduces [“virtue ethics”](https://plato.stanford.edu/entries/ethics-virtue/) and explains why you might find it worth pursuing.
Virtue ethics is meant to be of practical help
----------------------------------------------
The two main forks of the rationalist endeavor are clear thinking and effective action. Virtue ethics addresses both of these: Clear thinking is the result of honing the “intellectual virtues” and effective action typically involves the skillful use of various other virtues.
Ethics, said Aristotle, “differs from the other [philosophical subjects] in not being a subject of merely intellectual interest — I mean we are not concerned to know what goodness essentially *is*, but how we are to become good, for this alone gives the study its practical value.”[[1]](#fnmx6u5vg9thn)
Where virtue ethics fits alongside its alternatives
---------------------------------------------------
Since Aristotle’s time, however, ethical philosophy took a turn for the “meta” — arguing interminably about fascinating things like the nature of good and evil and the plausibility of free will — and losing much of its practical focus along the way. The two main families of ethical approaches that developed from all of that wrangling are usually called deontological and consequentialist. Deontologists hope that we can discern — whether by logical necessity or via revelation — some articulable rules of behavior by which everyone ought to govern their lives. Consequentialists urge us instead to carefully anticipate the likely results of our possible choices and to make decisions such that those results are optimal by some metric or other.
(One branch of the consequentialist family tree leads to the effective altruism movement in which the rationalist community has done much fruitful work, and which adds a welcome dose of the practical to what had become a very theoretical discipline.)
These approaches to ethics are not rigidly exclusive. They tend to inform each other, and as a practical matter most of us tend to mix and match from them rather than dogmatically sticking to one or another. Virtue ethics also can be a complement to either a deontological or consequentialist ethics. If you’re a deontologist, you need [wisdom and insight](https://www.lesswrong.com/posts/ZxZiWxYo6A5F7eaeh/notes-on-wisdom) to discover the rules, and [self-control](https://www.lesswrong.com/posts/53b9woahSTPQFtB2i/notes-on-self-control-continence) to resist the temptations to break them — and those are virtues that virtue ethics can help you with. If you’re a consequentialist you need [discernment](https://we.riseup.net/sotfe/discernment) to understand the situation you are in, [imagination](https://we.riseup.net/sotfe/imagination) to come up with the full range of alternative courses of action, [foresight](https://we.riseup.net/sotfe/foresight) to anticipate their likely effects, and [empathy](https://we.riseup.net/sotfe/empathy) and [perspective](https://we.riseup.net/sotfe/perspective) to understand those effects most broadly — and these are also things in the virtue realm. (See also: [“Virtue Ethics for Consequentialists”](https://www.lesswrong.com/posts/ZLBtZqsP79Cwioi2b/virtue-ethics-for-consequentialists) Will\_Newsome)
It always seems to end up back at Aristotle
-------------------------------------------
I led off with Aristotle because he’s considered the granddaddy of the virtue ethics approach, and because his [Nicomachean Ethics](https://standardebooks.org/ebooks/aristotle/nicomachean-ethics/f-h-peters) (from which I took his quote) is still an inspiring read. One thing I like about it is its attention to how we go astray: Why do we sometimes make bad choices even when we have enough information to make good ones? Why even when we make good choices do we sometimes fail to follow-through on them? Are there steps we can take to resist the allure of bad decisions? Can we make our good intentions more reliable guides to our behavior?
Another thing I like about it is that it considers a broad view of a human life. Much of what we think about when we think about “ethics” restricts itself to questions of good and evil: preventing or causing suffering; pleasure and pain; justice and injustice. Aristotelian ethics is about optimizing your life on a broader set of dimensions than those. It’s not morally *wrong* to be a dull conversationalist, to be incurious, to be lazy, to be a braggart… but such traits suggest that you’re in some ways leading a stunted life, and that’s a shame. Aristotle means to hold you to a higher standard than just “don’t be evil”: rather, “be excellent!”
Virtue ethics is also appealing because it’s not a scold. Deontological ethics says: follow the rules! Consequentialist ethics says: sacrifice for the greater good! Virtue ethics says: become the best person you can by taking pleasure in being marvelous.
“The characteristic feature of all ethics is to consider human life as a game that can be won or lost and to teach man the means of winning,” said Simone de Beauvoir.[[2]](#fnbl2rvg7g58b) The virtue ethics approach is that the human game is a complex one that, to be won, requires that you exercise a multitude of skills — or, in this context, “virtues” — each of which, though it may be analyzed theoretically, can only be mastered practically, that is by practice. The virtue ethicist is skeptical that you can win the game by memorizing the rulebook or by developing rapid-deployment clairvoyance about the consequences of your set of possible actions.
What are these virtues, then?
-----------------------------
One way to think about a virtue is as a region on a character trait axis. A *virtue* differs from a more neutral or merely descriptive personality trait (like the “big five personality traits”) in that a virtue is a region on some particular trait continuum that has been identified as being characteristic of a flourishing, thriving human.
What counts as flourishing/thriving for a human being is a point of controversy and is difficult to define precisely. You probably have some intuitive sense of what a thriving human life is like — think of whom you admire or envy, or which characters in a story you instinctively identify as the heroes and why. Those aren’t ideal tests, but you can use them as a starting place.
People are complex; the virtues are a way of trying to make our characters more tractable by isolating particular characteristics and considering them separately. The virtues are not, I think, naturally isolatable facts about us; they’re more labels of convenience for these more-or-less identifiable and isolatable character trait axes.
The Golden Mean
---------------
In Aristotle’s approach, the best place to be on each character trait axis he identified was in a sweet spot somewhere in the middle: [the golden mean](https://we.riseup.net/sotfe/golden-mean). So for example, if you allow yourself to be overcome by fear you exhibit the vice of cowardice; if you have no regard for fear, you may be stupidly rash or shameless. The golden mean of [courage](https://www.lesswrong.com/posts/gSXaGBtPQ5MM2fm2P/notes-on-courage) is between those two extremes: to respect well-founded fear but not to let it govern you.
Aristotle’s golden mean approach was important to his theoretical framework, but I think it is probably best not to be too rigid about it. Sometimes we label our character trait axes with a label that itself implies the sweet spot. For example, [“fitness”](https://www.lesswrong.com/posts/S2ifeFYm73EH8DpcW/notes-on-fitness) might be the name we give to a virtue about health. Certainly we don’t want to be physically unfit, but do we also not want to be *too* fit? That doesn’t really make sense, but that’s just a function of how we chose to label our virtue. If we had chosen “strong” instead of “fit”, it would have made more sense to say you could err both by being a weakling or by being so muscle-bound that you need help to get dressed in the morning, in which case you’d have overshot the mark.
Lists of virtues across times and cultures
------------------------------------------
There are many possible ways of dividing up the human character into sets of virtues. Different people at different times, and different cultures in different places, have come up with their favorite sets. Virtue ethics does not require that you discover or accept the One True Set of virtues; in fact I think it’s probably a healthy exercise to try to come up with your own set, and to be prepared to adjust it as you grow and learn. (See also: [“An attempt to list out my core values and virtues”](https://www.lesswrong.com/posts/DwBrNpnGCEdNLbZeL/an-attempt-to-list-out-my-core-values-and-virtues) Ruby).
That said, I’ve spent some time looking at different virtue traditions: from [Aristotle](https://we.riseup.net/sotfe/aristotle-s-nicomachean-ethics)’s [Greece](https://we.riseup.net/sotfe/cardinal-virtues) to [Cicero’s Rome](https://we.riseup.net/sotfe/cicero) to [feudal Japan](https://we.riseup.net/sotfe/bushido), from [West Point’s military virtues](https://we.riseup.net/sotfe/west-point-virtues) to [Christian virtues](https://we.riseup.net/sotfe/christian-virtues) of the meek, from [Ayn Rand](https://we.riseup.net/sotfe/ayn-rand-s-objectivist-virtues) to [Henry David Thoreau](https://we.riseup.net/sotfe/thoreau-s-living-ethics) to [Shannon Vallor](https://we.riseup.net/sotfe/shannon-vallor-s-technomoral-virtues) to the [boy scouts](https://we.riseup.net/sotfe/scouting-virtues) to the [Rationalist Virtues](https://we.riseup.net/sotfe/the-rationality-dojo). I have found [hundreds of distinct virtues](https://atlas.mindmup.com/davidgross/virtues/index.html) in traditions like these.
Some that appear most frequently are: [courage](https://www.lesswrong.com/posts/gSXaGBtPQ5MM2fm2P/notes-on-courage), [honesty](https://www.lesswrong.com/posts/9iMMNtz3nNJ8idduF/notes-on-honesty), [compassion](https://www.lesswrong.com/posts/4pBZQ6XeAbwYwxvEM/notes-on-compassion), [loyalty](https://www.lesswrong.com/posts/HCGxHK2ahsTNrNxHp/notes-on-loyalty), [justice](https://www.lesswrong.com/posts/j329mTBuwmC62ZHDB/notes-on-justice-as-a-virtue), [respect for others](https://www.lesswrong.com/posts/JwpmYpM3PYQ7iT6LJ/notes-on-respect-for-others), [self-control](https://www.lesswrong.com/posts/53b9woahSTPQFtB2i/notes-on-self-control-continence), [duty](https://www.lesswrong.com/posts/rn8j3xtWqnWvBSuNa/notes-on-duty), [industry](https://www.lesswrong.com/posts/sbPEZR3Rs5WrzWTs6/notes-on-industriousness), [wisdom](https://www.lesswrong.com/posts/ZxZiWxYo6A5F7eaeh/notes-on-wisdom), [sincerity](https://www.lesswrong.com/posts/haikNyAWze9SdBpb6/notes-on-sincerity-and-such), [reverence](https://www.lesswrong.com/posts/HnNNGWQEX7CgaqBt2/notes-on-piety), [fitness](https://www.lesswrong.com/posts/S2ifeFYm73EH8DpcW/notes-on-fitness), and [temperance](https://www.lesswrong.com/posts/aQQ69PijQR2Z64m2z/notes-on-temperance). But a tally like that one can be deceptive. Some traditions divide their virtues up more finely than others: For example, are “[fairness](https://www.lesswrong.com/posts/Je72MTmqAoT5R3zX3/notes-on-fairness),” “impartiality,” and “[mercy](https://we.riseup.net/sotfe/mercy)” components of “justice” or are they distinct virtues? Or a tradition might never mention “[honor](https://www.lesswrong.com/posts/ZfTGX4NeLdkdkiCm7/notes-on-honor)” as a virtue, for example, but might value “[dignity](https://www.lesswrong.com/posts/cYyRyYskLjZAXibie/notes-on-dignity)” and “[rectitude](https://we.riseup.net/sotfe/rectitude)” and describe them in a way that ends up covering pretty much the same territory.
Virtue ethics suggests a practical approach to becoming a more effective actor
------------------------------------------------------------------------------
A virtue is a sort of character trait. Human character is malleable: We are creatures of habit. If I change my habits, my character changes to match. A virtue is not a sort of knowledge that is acquired through theoretical understanding, but is a sort of skill that is acquired through practice.
This means that for each virtue, we have a handle on how to develop it. We look closely at our own behavior and habits, compare it against the sweet spot as best as we are able, and then put in the work to change our habits so that our characters better conform to what they would be if we were firing on all cylinders.
There are a variety of techniques (such as [WOOP](https://we.riseup.net/sotfe/woop), CBT, etc.) that you can use to adjust your habitual behavior.
If you do this wisely, you can probably expect accelerating returns, at least at first. Much of what holds you back from developing virtues more rapidly is a deficiency in other virtues: [perseverance](https://we.riseup.net/sotfe/perseverance), maybe, or [industriousness](https://www.lesswrong.com/posts/sbPEZR3Rs5WrzWTs6/notes-on-industriousness); insight or [curiosity](https://we.riseup.net/sotfe/curiosity) into your own character perhaps; the courage to try something new or difficult; the [flexibility](https://we.riseup.net/sotfe/flexibility) to adapt to change. If you choose one of those virtues to try to improve, you can make it easier to improve on the next one on your list.
Want to know more?
------------------
If there is interest in this topic, I’ll post some follow-ups. If you have questions about virtue ethics or something in that ballpark, drop me a line or add a comment below.
1. **[^](#fnrefmx6u5vg9thn)**Aristotle, *Nicomachean Ethics*, book Ⅰ, chapter 2
2. **[^](#fnrefbl2rvg7g58b)**Simone de Beauvoir, *The Ethics of Ambiguity* (1947) |
06f86a73-696a-4ad1-9d1a-65b2eb11d07b | trentmkelly/LessWrong-43k | LessWrong | Scaling Laws Literature Review
Common shape of a scaling law, taken from Hestness et al. (2017)
Executive summary
* Scaling laws are predictable relations between the scale of a model and performance or other useful properties.
* I have collected a database of scaling laws for different tasks and architectures, and reviewed dozens of papers in the scaling law literature.
* My main takeaways are:
* Functional forms: a basic power law can effectively model the scaling behavior in the power-law region but not the transitions to the other two regions. For this, either the M4 estimator or the BNSL estimator introduced below seem to be the best options right now.
* Transfer learning: there is not a simple universal scaling law for transfer learning between arbitrary tasks. When the tasks are similar enough, upstream loss and downstream performance are closely related, but when tasks are very different, the details of the architecture and hyperparameters become very relevant.
See the full table of scaling laws here.
Introduction
The term “scaling laws” in deep learning refers to relations between functional properties of interest (usually the test loss or some performance metric for fine-tuning tasks) and properties of the architecture or optimization process (like model size, width, or training compute). These laws can help inform the design and training of deep learning models, as well as provide insights into their underlying principles.
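As a concrete illustration of the simplest functional form mentioned above (a minimal sketch on synthetic data; not code from any of the reviewed papers), the test loss is often fit as a power law plus an irreducible-loss offset:
```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # L(n) = a * n^(-b) + c: power-law decay toward an irreducible loss c.
    return a * n ** (-b) + c

# Synthetic "test loss vs. model size" data, for illustration only.
rng = np.random.default_rng(0)
n = np.logspace(6, 10, 20)                    # parameter counts
loss = power_law(n, 2e3, 0.5, 1.7)
loss *= np.exp(rng.normal(0, 0.01, n.shape))  # multiplicative noise

(a, b, c), _ = curve_fit(power_law, n, loss, p0=(1e3, 0.4, 1.0))
print(f"L(N) = {a:.3g} * N^(-{b:.3g}) + {c:.3g}")
```
This basic form captures the power-law region but, as noted in the summary, not the transitions into the other two regions, which is what the more flexible estimators are for.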
In this document, I present a detailed table outlining the known scaling laws and a summary of each paper's contributions to the understanding of scaling in deep learning.
My main goal for this document is to serve as a comprehensive and up-to-date resource for information on scaling laws in deep learning. By presenting a curated list of papers that have explored various scaling laws, and providing a detailed table outlining the known scaling laws and a summary of each paper's contributions, I aim to make it easier to access and understand the cur |
c452200f-041c-440e-9648-9fbd95d33761 | trentmkelly/LessWrong-43k | LessWrong | Apply to the Conceptual Boundaries Workshop for AI Safety
Do you have experience with formal computer security, Active Inference, Embedded Agency, biological gap junctions, or other frameworks that distinguish agents from their environment? Apply to the Conceptual Boundaries Workshop for AI safety. February in Austin TX.
Website, more details, and application
Apply by December 22
For identifying, discussing, and strategizing about promising AI safety research directions pertaining to the boundaries that causally distance agents from their environment.
What are agent boundaries?
A few examples:
* A bacterium uses its membrane to protect its internal processes from external influences.
* A nation maintains its sovereignty by defending its borders.
* A human protects their mental integrity by selectively filtering the information that comes in and out of their mind.
…a natural abstraction for safety?
Agent boundaries seem to be a natural abstraction representing the safety and autonomy of agents.
* A bacterium survives only if its membrane is preserved.
* A nation maintains its sovereignty only if its borders aren’t invaded.
* A human mind maintains mental integrity only if it can hold off informational manipulation.
Maybe the safety of agents could be largely formalized as the preservation of their membranes.
These boundaries can then be formalized via Markov blankets.
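As a minimal numerical illustration of that formalization (a toy example of my own, not from the «Boundaries» sequence), the defining property of a Markov blanket B is that the internal state I and the external state E are independent given B:
```python
import numpy as np

# Toy chain E -> B -> I: the blanket B mediates everything the
# internal state I learns about the external state E.
p_e = np.array([0.6, 0.4])           # P(E)
p_b_e = np.array([[0.9, 0.1],        # P(B | E)
                  [0.2, 0.8]])
p_i_b = np.array([[0.7, 0.3],        # P(I | B)
                  [0.1, 0.9]])

joint = np.einsum("e,eb,bi->ebi", p_e, p_b_e, p_i_b)  # P(E, B, I)

# Blanket property: P(E, I | B) factors as P(E | B) * P(I | B).
for b in range(2):
    p_ei = joint[:, b, :] / joint[:, b, :].sum()   # P(E, I | B=b)
    p_e_given_b = p_ei.sum(axis=1, keepdims=True)  # P(E | B=b)
    p_i_given_b = p_ei.sum(axis=0, keepdims=True)  # P(I | B=b)
    assert np.allclose(p_ei, p_e_given_b * p_i_given_b)
print("I and E are independent given B: B is a Markov blanket")
```
Preserving a membrane, on this picture, means keeping this factorization intact rather than letting the environment act on the internals directly.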
Boundaries are also cool because they show a way to respect agents without needing to talk about their preferences or utility functions. Andrew Critch has said the following about this idea:
> my goal is to treat boundaries as more fundamental than preferences, rather than as merely a feature of them. In other words, I think boundaries are probably better able to carve reality at the joints than either preferences or utility functions, for the purpose of creating a good working relationship between humanity and AI technology («Boundaries» Sequence, Part 3b)
For instance, respecting the boundary of a bacterium would probably mean “pr |
fc3602de-3a2b-42b4-9929-6de990bcc578 | trentmkelly/LessWrong-43k | LessWrong | What 2025 looks like
I wrote almost all of this in mid-March before the FLI Open Letter and Eliezer's TIME piece. Weirdly, after just six weeks I'd likely write something different. This isn't as finished/polished as I'd like, but better to ship it as is than languish incomplete forever.
----------------------------------------
Not quite two years ago, Daniel Kokotaljo wrote a highly acclaimed post about What 2026 looks like that aimed to tell a single detail future history ("trajectory") about how world events play out in coming years.
As I'm trying to orient myself to what is about to happen, I figured it'd be useful to make my own attempt at this kind of thing. Daniel was bolder than me and tried to imagine 2026 from 2021; I simply don't think I can imagine anything five years out and writing out the rest of 2023, 2024, and 2025 has given me plenty to think about further.
Daniel's vignette places a lot of attention on the size (parameters, compute) and capabilities of models. Daniel and others when imagining the future also want to describe changes in world economy (of which GDP may or may not be a good measure). Those elements feel less interesting to me to think about directly than other effects.
Major Variables
Over the years, it feels like the following are key to track.
Object-level capabilities of the models. Elaboration unnecessary
Adoption and application. It's become salient to me recently that not only is the raw "inherent" power level of models relevant to the world, but also how much they're being harnessed. Widespread use and application of AI will determine things like attention, hype, societal effects, competition, government involvement, etc.
Hype and attention. Importance, neglectedness, tractability. Many of us were used to thinking about how to achieve AI alignment in a world where not that many people were focused on Alignment or even AGI at all. As we gradually (or not that gradually) get into a world where everyone is thinking about AI, it's a prett |
aa4e7ea4-1bdd-4ec4-8138-35c6f92ffaaa | trentmkelly/LessWrong-43k | LessWrong | Who wants a free AI Safety Domain?
I have a few domains I would like to give away. If you want a domain and have a good use for it let me know and I will transfer it to you at no cost.
* agisafety.org
* ainotkilleveryone.com
* ainotkilleveryone.org
* aiiscoming.org |
757bb711-15c8-47b1-a9e2-000901eed795 | StampyAI/alignment-research-dataset/arbital | Arbital | Actual effectiveness
For a design feature of a [sufficiently advanced AI](https://arbital.com/p/7g1) to be "actually effective", we may need to worry about the behavior of other parts of the system. For example, if you try to declare that a self-modifying AI is not allowed to modify the representation of its [utility function](https://arbital.com/p/1fw), %note: Which [shouldn't be necessary](https://arbital.com/p/3r6) in the first place, unless something weird is going on.% this constant section of code may be meaningless unless you're also enforcing some invariant on the probabilities that get [multiplied by the utilities](https://arbital.com/p/18t) and any other element of the AI that can directly poke policies on their way to motor output. Otherwise, the code and representation of the utility function may still be there, but it may not be actually steering the AI the way it used to. |
7872aa30-10e4-498a-a126-fe3c5f636f77 | trentmkelly/LessWrong-43k | LessWrong | Thoughts on the Scope of LessWrong's Infohazard Policies
Recently, I was asked about my opinion on deleting posts that had potentially bad consequences. In particular, whether I should delete a post criticising the CDC's response to the coronavirus, during a time of crisis where the CDC was the relevant US government authority on how to respond. I spent a few hours writing a response, and I've reproduced it here with light editing so it can be linked to and discussed out of the context of the thread. It's not tightly edited, but it has a bunch of things about the current policies that I think are helpful to say rather than not say.
This post doesn't discuss all aspects of how to deal with infohazards and secrecy on LessWrong. In areas such as AI capabilities, bioweapons, and more, the LessWrong engineering team has thought more about features to build to allow for multi-stage publication practises. But this was an instance of a user asking that a specific post be deleted because it posed an infohazard, so that's what I address below.
----------------------------------------
I've spent multiple hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I've spent quite some time thinking about how to re-design LessWrong to allow for private discussion and vetting for issues that might lead to e.g. sharing insights that lead to advances in AI capabilities. But given all of that, on reflection, I completely disagree that this post should be deleted, or that the authors were taking worrying unilateralist action.
Let me give my thoughts on the issue of infohazards.
I am honestly not sure what work you think the term is doing in this situation, so I'll recap what it is for everyone following. In history, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom's career has been to draw t |
8eee9d9b-3106-4090-9aa2-a26ca6fc72fb | StampyAI/alignment-research-dataset/arxiv | Arxiv | Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision.
1 Introduction
---------------
Waymo, Zoox, Amazon, Nimble Robotics, and Plus One Robotics use remote human supervision of robot fleets in applications ranging from self-driving taxis to automated warehouse fulfillment [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)]. These robots intermittently cede control during task execution to remote human supervisors for corrective interventions. The interventions take place either during learning, when they are used to improve the robot policy, or at execution time, when the policy is no longer updated but robots can still request human assistance when needed to improve reliability. In the continual learning setting, these occur simultaneously: the robot policy has been deployed but continues to be updated indefinitely with additional intervention data. Furthermore, any individual robot can share its intervention data with the rest of the fleet. As opposed to robot swarms that must coordinate with each other to achieve a common objective, a robot fleet is a set of independent robots simultaneously executing the same control policy for the same task in parallel environments. We refer to the setting of a robot fleet learning via interactive requests for human supervision as Interactive Fleet Learning (IFL).
Of central importance in IFL is the supervisor allocation problem: how should limited human supervision be allocated to robots in a manner that maximizes the throughput of the fleet? Prior work studies this in the single-robot, single-human case. A variety of interactive learning algorithms have been proposed that estimate quantities such as uncertainty [[5](#bib.bib5)], novelty [[6](#bib.bib6), [7](#bib.bib7), [8](#bib.bib8)], risk [[7](#bib.bib7), [8](#bib.bib8)], and predicted action discrepancy [[9](#bib.bib9), [10](#bib.bib10)]. However, it remains unclear which algorithms are the most effective when generalized to the multi-robot, multi-human case.
To this end, we formalize the IFL problem and present the IFL Benchmark (IFLB), a new open-source Python toolkit and benchmark for developing and evaluating human-to-robot allocation algorithms for fleet learning. The IFLB includes environments from Isaac Gym [[11](#bib.bib11)], which enabled efficient simulation of thousands of learning robots for the first time in 2021. This paper makes the following contributions: (1) a formalism for interactive fleet learning, (2) the IFLB, an open-source software benchmark and toolkit for IFL algorithms with 3 Isaac Gym environments for complex robotic tasks, (3) Fleet-DAgger, a novel family of IFL algorithms, (4) results from large-scale simulation experiments with a fleet of 100 robots, and (5) results from 1000 physical trials with 4 physical robot arms and 2 human supervisors providing teleoperation remotely over the Internet.
2 Related Work
---------------
### 2.1 Allocating Human Supervisors to Robots at Execution Time
For human-robot teams, deciding when to transfer control between robots and humans during execution is a widely studied topic in the literature of both sliding autonomy [[12](#bib.bib12), [13](#bib.bib13), [14](#bib.bib14)] and Human-Robot Interaction (HRI). In sliding autonomy, also known as adjustable autonomy [[15](#bib.bib15), [16](#bib.bib16)] or adaptive automation [[17](#bib.bib17)], humans and robots dynamically adjust their level of autonomy and transfer control to each other during execution [[14](#bib.bib14), [17](#bib.bib17)]. Since identifying which robot to assist in a large robot fleet can be overwhelming for a human operator [[18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22)], several strategies, such as using a cost-benefit analysis to decide whether to request operator assistance [[12](#bib.bib12)] and using an advising agent to filter robot requests [[23](#bib.bib23)], have been proposed to improve the performance of human-robot teams [[21](#bib.bib21), [23](#bib.bib23), [24](#bib.bib24)] and to increase the number of robots that can be controlled [[25](#bib.bib25)], a quantity known as “fan-out” [[26](#bib.bib26)].
Other examples include user modeling [[12](#bib.bib12), [13](#bib.bib13), [21](#bib.bib21), [24](#bib.bib24)] and studying interaction modes [[27](#bib.bib27)] for better system and interface design [[28](#bib.bib28), [29](#bib.bib29)]. Zheng et al. [[30](#bib.bib30)] propose to compute the time until stopping for each robot based on its estimated risk and prioritize the robots accordingly.
Ji et al. [[31](#bib.bib31)] consider the setting where physical assistance is required to resume tasks for navigation robots and formalize multi-robot, single-human allocation as graph traversal. Dahiya et al. [[32](#bib.bib32)] formulate the problem of multi-robot, multi-human allocation during robot execution as a Restless Multi-Armed Bandit problem.
In addition, allocating humans to address robot requests has also been studied from the perspective of queueing theory and scheduling theory [[22](#bib.bib22), [33](#bib.bib33), [34](#bib.bib34), [35](#bib.bib35), [36](#bib.bib36), [37](#bib.bib37)]. The vast majority of the human-robot teaming and queueing theory work, however, does not involve learning; the robot control policies are assumed to be fixed. We study supervisor allocation during robot learning, where the allocation policies affect not only human burden and the performance of the human-robot team but also the efficiency of policy learning.
### 2.2 Single-Robot, Single-Human Interactive Learning
Imitation learning (IL) is a paradigm of robot learning in which a robot uses demonstrations from a human to initialize and/or improve its policy [[38](#bib.bib38), [39](#bib.bib39), [40](#bib.bib40), [41](#bib.bib41), [42](#bib.bib42), [43](#bib.bib43)].
However, learning from purely offline data often suffers from distribution shift [[44](#bib.bib44), [45](#bib.bib45)], as compounding approximation error leads to states that were not visited by the human. This can be mitigated by online data collection with algorithms such as Dataset Aggregration (DAgger) [[44](#bib.bib44)] and interactive imitation learning [[46](#bib.bib46), [47](#bib.bib47), [48](#bib.bib48)]. *Human-gated* interactive IL algorithms [[49](#bib.bib49), [50](#bib.bib50), [51](#bib.bib51)] require the human to monitor the robot learning process and decide when to take and cede control of the system. While intuitive, these approaches are not scalable to large fleets of robots or the long periods of time involved in continual learning, as humans cannot effectively focus on many robots simultaneously [[20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22)] and are prone to fatigue [[52](#bib.bib52)]. To reduce the burden on the supervisor, several *robot-gated* interactive IL algorithms such as SafeDAgger [[9](#bib.bib9)], EnsembleDAgger [[5](#bib.bib5)], LazyDAgger [[10](#bib.bib10)], and ThriftyDAgger [[8](#bib.bib8)] have been proposed, in which the robot actively solicits human interventions when certain criteria are met. Interactive reinforcement learning (RL) [[53](#bib.bib53), [54](#bib.bib54), [55](#bib.bib55), [56](#bib.bib56)] is another active area of research in which robots learn from both online human feedback and their own experience. However, these interactive learning algorithms are designed for and primarily studied in the single-robot, single-human setting. Other work related to single-robot interactive learning includes task allocation for a discrete set of tasks [[57](#bib.bib57)]; in contrast, we focus on efficient robot learning of a single task.
### 2.3 Multi-Robot Interactive Learning
In this paper, we study allocation policies for multiple humans and multiple robots. While many existing works [[58](#bib.bib58), [59](#bib.bib59), [60](#bib.bib60), [61](#bib.bib61), [62](#bib.bib62)] have leveraged NVIDIA’s Isaac Gym’s [[11](#bib.bib11)] capability of parallel simulation to accelerate reinforcement learning with multiple robots, these approaches consider robots in isolation without interactive human supervision.
The work that is closest to ours is by Swamy et al. [[63](#bib.bib63)], who study the multi-robot, single-human setting of allocating the attention of one human operator during robot fleet learning. They propose to learn an internal model of human preferences as a human supervises a small fleet of 4 robots and use this model to assist the human in supervising a larger fleet of 12 robots. While this approach mitigates the scaling issue in human-gated interactive IL, even a small fleet of robots can be difficult for a single human supervisor to optimally control. In contrast, we consider robot-gated algorithms for requesting and allocating supervision from multiple humans.
To the best of our knowledge, this work is the first to formalize and study multi-robot, multi-human interactive learning. This problem setting poses unique challenges, especially as the size of the fleet grows large relative to the number of humans, as each human allocation affects both the robot that receives supervision and the robots that do not receive human attention.
3 Interactive Fleet Learning Problem Formulation
-------------------------------------------------
We consider a fleet of $N$ robots operating in parallel as a set of $N$ independent and identical Markov decision processes (MDPs) $\{\mathcal{M}_i\}_{i=1}^{N}$ specified by the tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma)$ with the same state space $\mathcal{S}$, action space $\mathcal{A}$, unknown transition dynamics $p: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$, reward function $r: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, discount factor $\gamma \in [0,1]$, and initial state distribution $p_0$. We assume the MDPs have an identical indicator function $c(s): \mathcal{S} \rightarrow \{0,1\}$ that identifies which states $s \in \mathcal{S}$ violate a constraint in the MDP. States that violate MDP constraints are fault states from which the robot cannot make further progress. For instance, the robot may be stuck on the side of the road or have incurred hardware damage. We assume that the timesteps are synchronized across all robots and that they share the same non-stationary policy $\pi_{\theta_t}: \mathcal{S} \rightarrow \mathcal{A}$, parameterized by $\theta_t$ at each timestep $t$, where $\theta_t$ is updated over time with continual learning.
The collection of $\{\mathcal{M}_i\}_{i=1}^{N}$ can be reformulated as a single MDP $\mathcal{M} = (\mathcal{S}^N, \mathcal{A}^N, \bar{p}, \bar{r}, \gamma)$, composed of the vectorized states and actions of all robots in the fleet (denoted by bold font) and joint transition dynamics. In particular, $\mathbf{s} = (s_1, \ldots, s_N) \in \mathcal{S}^N$, $\mathbf{a} = (a_1, \ldots, a_N) \in \mathcal{A}^N$, $\bar{p}(\mathbf{s}^{t+1} \mid \mathbf{s}^t, \mathbf{a}^t) = \prod_{i=1}^{N} p(s_i^{t+1} \mid s_i^t, a_i^t)$, and $\bar{r}(\mathbf{s}, \mathbf{a}) = \sum_{i=1}^{N} r(s_i, a_i)$.
We assume that robots can query a set of $M \ll N$ human supervisors for assistance interactively (i.e., during execution of $\pi_{\theta_t}$). We assume that each human can help only one robot at a time and that all humans have the same policy $\pi_H: \mathcal{S} \rightarrow \mathcal{A}_H$, where $\mathcal{A}_H = \mathcal{A} \cup \{R\}$ and $R$ is a hard reset, an action that resets the MDP to the initial state distribution $s^0 \sim p_0$. As opposed to a soft reset that can be performed autonomously by the robot via a reset action $r \in \mathcal{A}$ (e.g., a new bin arrives in an assembly line), a hard reset requires human intervention due to constraint violation (i.e., entering some $s$ where $c(s) = 1$). A human assigned to a robot either performs hard reset $R$ (if $c(s) = 1$) or teleoperates the robot system with policy $\pi_H$ (if $c(s) = 0$). A hard reset $R$ takes $t_R$ timesteps to perform, and all other actions take 1 timestep.
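To make the reset semantics concrete, the following is a minimal bookkeeping sketch (not from the IFLB codebase; all names are illustrative) of how a fleet can track constraint-violating robots that must wait $t_R$ timesteps for a hard reset:

```python
import numpy as np

class FleetResetTracker:
    """Illustrative bookkeeping for N independent, identical MDPs: robots in
    constraint-violating (fault) states stay idle until a human completes a
    hard reset lasting t_R timesteps."""

    def __init__(self, num_robots: int, t_reset: int):
        self.t_reset = t_reset
        self.violating = np.zeros(num_robots, dtype=bool)   # c(s_i) == 1
        self.reset_timer = np.zeros(num_robots, dtype=int)  # remaining hard-reset steps

    def mark_violations(self, constraint_flags: np.ndarray) -> None:
        # Robots entering a fault state become idle until a hard reset completes.
        self.violating |= constraint_flags.astype(bool)

    def start_hard_reset(self, robot_idx: int) -> None:
        # A human assigned to a violating robot begins a hard reset of t_R steps.
        assert self.violating[robot_idx]
        self.reset_timer[robot_idx] = self.t_reset

    def tick(self) -> None:
        # Advance ongoing hard resets; robots whose reset just finished resume autonomy.
        active = self.reset_timer > 0
        self.reset_timer[active] -= 1
        self.violating[active & (self.reset_timer == 0)] = False
```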
Supervisor allocation (i.e., the assignment of humans to robots) is determined by an allocation policy
$$\omega: (\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t) \mapsto \boldsymbol{\alpha}^t \in \{0,1\}^{N \times M} \quad \text{s.t.} \quad \sum_{j=1}^{M} \boldsymbol{\alpha}_{ij}^t \leq 1 \ \text{ and } \ \sum_{i=1}^{N} \boldsymbol{\alpha}_{ij}^t \leq 1 \quad \forall i, j, \tag{1}$$
where $\mathbf{s}^t$ are the current states of all robots, $\boldsymbol{\alpha}^t$ is an $N \times M$ binary matrix that indicates which robots will receive assistance from which human at the current timestep $t$, and $\mathbf{x}^t$ is an augmented state containing any auxiliary information for each robot, such as the type and duration of an ongoing intervention. Unlike Dahiya et al. [[32](#bib.bib32)], which studies execution-time allocation, the allocation policy $\omega$ here depends on the current robot policy $\pi_{\theta_t}$, which in turn affects the speed of policy learning. While there are a variety of potential objectives to consider, e.g., minimizing constraint violations in a safety-critical environment, we define the IFL objective as the return on human effort (ROHE):
$$\max_{\omega \in \Omega} \ \mathbb{E}_{\tau \sim p_{\omega, \theta_0}(\tau)} \left[ \frac{\sum_{t=0}^{T} \bar{r}(\mathbf{s}^t, \mathbf{a}^t)}{\sum_{t=0}^{T} \left\| \omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t) \right\|_F^2} \right], \tag{2}$$
where $\Omega$ is the set of allocation policies, $T$ is the total amount of time the fleet operates (rather than the time horizon of an individual task execution), $\theta_0$ are the initial parameters of the robot policy, and $\|\cdot\|_F$ is the Frobenius norm. The objective is the expected ratio of the cumulative reward across all timesteps and all robots to the total amount of human time spent helping robots under allocation policy $\omega$. Intuitively, the ROHE measures the performance of the robot fleet normalized by the total human effort required to achieve this performance. We provide a more thorough derivation of the ROHE objective in Appendix 8.1.
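As a concrete illustration, the empirical ROHE of a single logged fleet run can be computed directly from the per-robot rewards and allocation matrices. The sketch below is a minimal reading of Equation (2); the array shapes and the zero-division guard are our own assumptions, not part of the benchmark:

```python
import numpy as np

def return_on_human_effort(rewards: np.ndarray, allocations: np.ndarray) -> float:
    """Empirical ROHE for one fleet run.

    rewards:     shape (T+1, N), reward r(s_i^t, a_i^t) for each timestep and robot.
    allocations: shape (T+1, N, M), the binary allocation matrices alpha^t.
    """
    total_reward = rewards.sum()
    # For a binary alpha^t, ||alpha^t||_F^2 is simply the number of humans in use,
    # so the denominator counts total human-timesteps of supervision.
    human_time = (allocations ** 2).sum()
    return float(total_reward / max(human_time, 1.0))  # guard against zero human time
```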
Since human teleoperation with $\pi_H$ provides additional online data, this data can be used to update the robot policy $\pi_{\theta_t}$:
$$\begin{cases} D^{t+1} \leftarrow D^{t} \cup D_H^{t} \ \ \text{where} \ \ D_H^{t} := \{ (s_i^t, \pi_H(s_i^t)) : \pi_H(s_i^t) \neq R \ \text{ and } \ \sum_{j=1}^{M} \boldsymbol{\alpha}_{ij}^t = 1 \} \\ \pi_{\theta_{t+1}} \leftarrow f(\pi_{\theta_t}, D^{t+1}) \end{cases}, \tag{3}$$
where f𝑓fitalic\_f is a policy update function (e.g., gradient descent).
4 Interactive Fleet Learning Algorithms
----------------------------------------
### 4.1 Fleet-DAgger
Given the problem formulation above, we propose Fleet-DAgger, a family of IFL algorithms, where an *IFL algorithm* is an algorithm for supervisor allocation in IFL (i.e., it specifies an $\omega \in \Omega$ as defined in Section 3). Fleet-DAgger seeks to maximize the IFL objective in the context of interactive imitation learning with dataset aggregation: its policy update function $f$ is supervised learning on $D^t$, which consists of all human data collected so far (Section 3). In addition, each Fleet-DAgger algorithm defines a unique priority function $\hat{p}: (s, \pi_{\theta_t}) \rightarrow [0, \infty)$ that indicates the priority score to assign to each robot based on its state $s$ and the current policy $\pi_{\theta_t}$, where, similar to scheduling theory, a higher value indicates a higher-priority robot. To reduce thrashing [[10](#bib.bib10), [8](#bib.bib8)], Fleet-DAgger algorithms also specify $t_T$, the minimum time a human supervisor must spend teleoperating a robot.
Fleet-DAgger uses the priority function $\hat{p}$ and $t_T$ to define an allocation $\omega$ as follows. At each timestep $t$, Fleet-DAgger first scores all robots with $\hat{p}$ and sorts the robots by their priority values. If a human supervisor is currently performing hard reset action $R$ and $t_R$ timesteps have not elapsed, that human continues to help that robot. If a human is currently teleoperating a robot and the minimum $t_T$ timesteps have not elapsed, that human continues to teleoperate the robot. If a robot with a human supervisor continues to be high priority after the minimum intervention time ($t_R$ for a hard reset or $t_T$ for teleoperation) has elapsed, that human remains assigned to the robot. If a human is available to help a robot, the human is reassigned to the robot with the highest priority value that is currently unassisted. Finally, if a robot has priority $\hat{p}(\cdot) = 0$, it does not receive assistance even if a human is available. We include the pseudocode of Fleet-DAgger in the appendix.
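In code, this greedy assignment can be summarized as follows. This is an illustrative sketch rather than the benchmark implementation: the `busy_until` bookkeeping stands in for the minimum intervention times $t_T$ and $t_R$, and "continues to be high priority" is simplified to "still has positive priority".

```python
import numpy as np

def allocate(priorities, prev_assignment, busy_until, t, num_humans):
    """Greedy human-to-robot assignment in the spirit of Fleet-DAgger.

    priorities:      length-N array of priority scores p_hat (0 means "no assistance").
    prev_assignment: length-M array of the robot index each human was helping, or -1.
    busy_until:      length-M array; human j must stay on its robot until timestep
                     busy_until[j] (enforces the minimum times t_T / t_R).
    Returns a length-M array mapping each human to a robot index (or -1 if idle).
    """
    assignment = np.full(num_humans, -1, dtype=int)
    taken = set()
    # 1) Humans keep their current robot while within the minimum intervention time,
    #    or while that robot still has positive priority.
    for j in range(num_humans):
        i = prev_assignment[j]
        if i >= 0 and (t < busy_until[j] or priorities[i] > 0):
            assignment[j] = i
            taken.add(int(i))
    # 2) Remaining humans are assigned to the highest-priority unassisted robots.
    free_humans = [j for j in range(num_humans) if assignment[j] == -1]
    for i in np.argsort(-priorities):
        if not free_humans:
            break
        if priorities[i] > 0 and int(i) not in taken:
            assignment[free_humans.pop(0)] = int(i)
            taken.add(int(i))
    return assignment
```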
### 4.2 Fleet-DAgger Algorithms
All algorithms below specify a unique priority function $\hat{p}$, which is synthesized with Fleet-DAgger as described in Section 4.1 to specify an allocation $\omega$.
Behavior Cloning: At all timesteps $t$, this baseline assigns priority $\hat{p}(\cdot) = 1$ to robots that have violated a constraint (i.e., $c(s_i^t) = 1$) and require a hard reset, and $\hat{p}(\cdot) = 0$ to all other robots. We refer to this as $C$-prioritization, for Constraint. Thus, the robot fleet can only receive hard resets from human supervisors (i.e., no human teleoperation). Without $C$-prioritization, robots that require hard resets would remain idle indefinitely and not contribute to the cumulative reward.
Random: This baseline simply assigns a random priority for each robot at each timestep. Additionally, to control the total amount of human supervision, we introduce a threshold hyperparameter such that if a robot’s priority value is below the threshold, its priority is set to zero and it will not request help.
Fleet-EnsembleDAgger: This baseline adapts EnsembleDAgger [[5](#bib.bib5)] to the IFL setting. EnsembleDAgger uses the output variance among an ensemble of neural networks bootstrapped on subsets of the training data as an estimate of epistemic uncertainty; accordingly, we define the robot priority for Fleet-EnsembleDAgger as the ensemble variance. Since ensemble variance is designed for continuous action spaces, for environments with discrete action spaces we instead estimate the uncertainty with the Shannon entropy [[64](#bib.bib64)] of the outputs of a single classifier network. We refer to this priority function as $U$-prioritization, for Uncertainty. Finally, since EnsembleDAgger was not designed for environments with constraint violations, and idle robots negatively affect the ROHE, we add $C$-prioritization for a fairer comparison. Specifically, given an uncertainty threshold, robots with uncertainty above the threshold are prioritized first in order of their uncertainty, followed by constraint-violating robots.
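For illustration, the two uncertainty estimates described above can be written as follows (a small numpy sketch, not the benchmark's implementation):

```python
import numpy as np

def ensemble_uncertainty(member_actions: np.ndarray) -> float:
    """U-prioritization for continuous actions: variance across an ensemble of
    policies, averaged over action dimensions. member_actions: (E, action_dim)."""
    return float(member_actions.var(axis=0).mean())

def entropy_uncertainty(action_probs: np.ndarray) -> float:
    """U-prioritization for discrete actions: Shannon entropy of a single
    classifier's output distribution over actions."""
    p = np.clip(action_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())
```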
Fleet-ThriftyDAgger: This baseline adapts the ThriftyDAgger algorithm [[8](#bib.bib8)] to the IFL setting. ThriftyDAgger uses a synthesis of uncertainty (which we refer to as the $U$-prioritization value) and the probability of task failure (estimated with a goal critic Q-function) to query a human for supervision. Since Fleet-DAgger requires a single metric by which to compare different robots, we adapt ThriftyDAgger to the fleet setting by computing a linear combination of the $U$-prioritization value and the probability of task failure after normalizing each value with running estimates of their means and standard deviations. As in [[8](#bib.bib8)], we pretrain the goal critic on an offline dataset of human and robot task execution. Similar to Fleet-EnsembleDAgger, we first prioritize by the combined uncertainty-goal values above a parameterized threshold, followed by $C$-prioritization.
Constraint-Uncertainty-Risk (C.U.R.): Here we propose a novel Fleet-DAgger algorithm. As the name suggests, C.U.R. applies $C$-prioritization (prioritize all constraint-violating robots), followed by $U$-prioritization (prioritize uncertain robots above a minimum threshold), followed by $R$-prioritization, where $R$ stands for risk, which we define as the probability of constraint violation. Intuitively, idle robots should be reset in order to continue making progress, uncertain robots should receive more human supervision in areas with little to no reference behavior to imitate, and robots at risk should request human teleoperation to safety before an expensive hard reset becomes necessary. As in [[65](#bib.bib65)], we estimate the probability of constraint violation with a safety critic Q-function, which we initialize on an offline dataset of constraint violations. In addition, we implement an initial warmup period during which constraint violations are assigned zero priority rather than high priority. The intuition is that rather than attending to hard resets for an initially low-performing policy, human intervention should instead be spent on valuable teleoperation data that can improve the robot policy. Hence, during the warmup period, constraint-violating robots remain idle and human attention is allocated to the teleoperation of a smaller number of robots.
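The resulting ordering can be summarized with a per-robot scoring function. The constants below are arbitrary offsets chosen only to enforce the C > U > R ordering described above (and zero priority during warmup); they are illustrative assumptions, not the values used in our experiments:

```python
def cur_priority(violating, uncertainty, risk, u_thresh, r_thresh, t, t_warmup):
    """Per-robot C.U.R. priority score (illustrative sketch).

    violating:   whether c(s) = 1 for this robot (it needs a hard reset).
    uncertainty: U value, e.g. ensemble variance or entropy.
    risk:        estimated probability of constraint violation from the safety critic.
    """
    if violating:
        # C-prioritization, except during warmup when violations are ignored.
        return 0.0 if t < t_warmup else 4.0
    if uncertainty >= u_thresh:
        return 2.0 + min(uncertainty, 1.0)   # U-prioritization
    if risk >= r_thresh:
        return 1.0 + min(risk, 1.0)          # R-prioritization
    return 0.0                               # do not request human assistance
```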
5 Interactive Fleet Learning Benchmark
---------------------------------------
While many algorithms have been proposed for interactive learning [[10](#bib.bib10), [5](#bib.bib5), [8](#bib.bib8), [9](#bib.bib9)], to our knowledge there exists no unified benchmark for evaluating them. To facilitate reproducibility and standardized evaluation for IFL algorithms, we introduce the Interactive Fleet Learning Benchmark (IFLB). The IFLB is an open-source Python implementation of IFL with a suite of simulation environments and a modular software architecture for rapid prototyping and evaluation of new IFL algorithms.
### 5.1 Environments
The IFLB is built on top of NVIDIA Isaac Gym [[11](#bib.bib11)], a highly optimized software platform for end-to-end GPU-accelerated robot learning released in 2021, without which the simulation of hundreds of learning robots would be computationally intractable. The IFLB can run efficiently on a single GPU and currently supports the following 3 Isaac Gym environments with high-dimensional continuous state and action spaces: (1) Humanoid, a bipedal legged locomotion task from OpenAI Gym [[66](#bib.bib66)], (2) Anymal, a quadruped legged locomotion task with the ANYmal robot by ANYbotics, and (3) AllegroHand, a task involving dexterous manipulation of a cube with a 4-finger Allegro Hand by Wonik Robotics. Constraint violation is defined as (1) the humanoid falling down, (2) the ANYmal falling down on its torso or knees, and (3) dropping the cube from the hand, respectively. See Figure [1](#S5.F1 "Figure 1 ‣ 5.1 Environments ‣ 5 Interactive Fleet Learning Benchmark ‣ Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision") for images of each of these tasks. While these three tasks are the current fully supported environments in the IFLB, end users and maintainers can easily add new Isaac Gym environments.

Figure 1: Isaac Gym benchmark environments in the IFLB.
### 5.2 Software Architecture
The IFLB defines 3 interfaces for the development of IFL algorithms: (1) agents, (2) supervisors, and (3) allocations. An agent is an implementation of the robot fleet policy $\pi_{\theta_t}$ (Section 3), such as an IL or RL agent. A supervisor is an implementation of the supervisor policy $\pi_H$ (Section 3), such as a fully trained RL agent, a model-based planner, or a teleoperation interface for remote human supervisors. Lastly, an allocation is an implementation of the priority function $\hat{p}$ (Section 4), such as the C.U.R. priority or the ThriftyDAgger priority. For reference, the IFLB includes an imitation learning agent, a fully trained RL supervisor using Isaac Gym's reference PPO [[67](#bib.bib67)] implementation, and all allocations from Section 4, which we use in our experiments. Users of the IFLB can flexibly implement their own IFL algorithms by defining new agents, supervisors, and allocations.
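The three interfaces can be pictured as abstract base classes along the following lines; the class and method names here are our own illustrative assumptions, not the actual IFLB API:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Robot fleet policy pi_theta_t."""
    @abstractmethod
    def act(self, states): ...           # batch of robot states -> batch of actions
    @abstractmethod
    def update(self, dataset): ...       # the policy update function f

class Supervisor(ABC):
    """Supervisor policy pi_H, e.g. a trained RL expert or a teleoperation interface."""
    @abstractmethod
    def act(self, state): ...            # single state -> action, or a hard reset R

class Allocation(ABC):
    """Priority function p_hat used by Fleet-DAgger to rank robots for assistance."""
    @abstractmethod
    def priority(self, states, agent): ...  # batch of states -> priority scores
```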
Given an agent, supervisor, allocation, and environment, the IFLB runs Fleet-DAgger as described in Section 4.1. The IFLB allows flexible command-line configuration of all parameters of the experiment (e.g., $t_T$, $t_R$, $N$, $M$) as well as the parameters of the agent, supervisor, and allocation. If desired, the code can also be modified to support families of IFL algorithms other than Fleet-DAgger. The benchmark is available open-source at <https://github.com/BerkeleyAutomation/ifl_benchmark>.
6 Experiments
--------------
### 6.1 Metrics
Throughout online training, we measure four metrics at each timestep $t$: (1) the cumulative number of successful task completions across the fleet up to time $t$; (2) cumulative hard resets (i.e., constraint violations); (3) cumulative idle time, i.e., how long robots spend idle in constraint-violating states waiting for hard resets; and (4) the return on human effort (ROHE, Equation (2)), where reward is a sparse $r \in \{0,1\}$ for successful task completion and cumulative human time is measured in hundreds of timesteps for simulation experiments and tens of timesteps for physical experiments. We do not measure ROHE for Behavior Cloning as it is an offline algorithm. For the Humanoid and Anymal locomotion environments, success is defined as reaching the episode horizon without constraint violation and with (dense) reward of at least 95% of that of the supervisor policy. For AllegroHand, a goal-conditioned task, success is defined by reaching a goal state. For the physical block-pushing experiment (Section 6.3), a robot achieves 1 success for a push that reduces distance to the goal and 5 successes for reaching the goal.
### 6.2 IFLB Simulation Experiments
Experimental Setup: We evaluate all Fleet-DAgger algorithms in the 3 benchmark simulation environments: Humanoid, Anymal, and AllegroHand. We use reinforcement learning agents fully trained with PPO [[67](#bib.bib67)] as the algorithmic supervisor $\pi_H$. We initialize the robot policy $\pi_{\theta_0}$ with behavior cloning on an offline dataset of 5000 state-action pairs. For a fair comparison, the Behavior Cloning baseline is given additional offline data equal to the average amount of human time solicited by C.U.R. by the end of the fleet operation time $T$. The Random baseline's priority threshold is set such that, in expectation, it reaches the average amount of human time solicited by C.U.R. by time $T$. Since Fleet-ThriftyDAgger requires a goal-conditioned task, it is only evaluated on AllegroHand. All training runs are executed with $N = 100$ robots, $M = 10$ humans, $t_T = 5$, $t_R = 5$, and operation time $T = 10{,}000$, and are averaged over 3 random seeds. In the appendix, we provide ablation studies on each component of the C.U.R. algorithm and an analysis of sensitivity to the number of humans $M$, minimum intervention time $t_T$, and hard reset time $t_R$. Code implementations of all Fleet-DAgger algorithms are available in the benchmark.
Results: We plot results in Figure 2. First, we observe that the choice of IFL algorithm has a significant impact on all metrics in all environments, indicating that allocation matters in the IFL setting. We also observe that the robot fleet achieves a higher throughput (number of cumulative task successes) with C.U.R. allocation than with the baselines in all environments at all times. C.U.R. also attains a higher ROHE, indicating more efficient use of human supervision. An increase in ROHE over time signifies that the improvement in the robot policy $\pi_{\theta_t}$ outpaces the cumulative human supervision, indicating that the IFL algorithms learn not only where to allocate humans but also when to stop requesting unnecessary supervision. C.U.R. also incurs fewer hard resets than the baselines, especially Behavior Cloning, which must constantly hard reset robots with a low-performing offline policy. For AllegroHand, however, C.U.R. incurs more hard resets and a smaller ROHE margin over the baselines. We hypothesize that since the task is too challenging to execute without human supervision in the given fleet operation time, prioritizing hard resets ironically only gives the robots additional opportunities to violate constraints. We also see that $C$-prioritization effectively eliminates cumulative idle time; C.U.R. idle time flattens out after the initial warmup period.

Figure 2: Simulation results in the IFLB with $N = 100$ robots and $M = 10$ human supervisors, where the $x$-axis is timesteps from 0 to $T = 10{,}000$. Shading indicates 1 standard deviation. The C.U.R. algorithm outperforms all baselines in all environments in terms of ROHE and cumulative successes. (Note that the shape of the Anymal curves is due to its success classification, episode horizon of 2500, and low number of hard resets.)
### 6.3 Physical Block-Pushing Experiment

Figure 3: Physical task setup: an example timestep $t$ in the physical experiment with 2 humans and 4 independent, identical robot arms, each executing the block-pushing task. Robot 1 queries the robot policy $\pi_{\theta_t}$ for an action given an overhead image of the workspace and executes it in the workspace. Robot 2 is teleoperated by the remote Human 1, who views the overhead image and specifies a pushing action through a user interface. The red regions at the edges of the workspace are constraint-violation regions. Human 2 is performing a physical hard reset for Robot 3, which violated a constraint at a previous timestep. Robot 4 autonomously executes the same robot policy as Robot 1 on its own state.
Experimental Setup: Finally, we evaluate Fleet-DAgger in a physical block-pushing experiment with $N = 4$ ABB YuMi robot arms and $M = 2$ human supervisors. See Figure 3 for the hardware setup. The objective of each robot is to push a cube to a goal position that is randomly sampled from the allowable region of the workspace. At each timestep, the robot chooses one of four discrete pushing actions corresponding to pushing each of the four vertical faces of the cube orthogonally by a fixed distance. The robot policy takes an overhead image observation of the cube in the workspace and generates the goal programmatically in the image. Hard resets are physical adjustments of the cube, while teleoperation is performed over the Internet by a remote human supervisor, who specifies one of the 4 pushing actions via a keyboard interface. We set $t_T = 3$, $t_R = 5$, and $T = 250$ for a total of $4 \times 250$ robot actions per algorithm. All algorithms are initialized with an offline dataset of 5000 image-action pairs (500 teleoperated actions with $10\times$ data augmentation).

Figure 4: Physical results for the block-pushing task with 4 robots and 2 humans, where the $x$-axis is timesteps. C.U.R. achieves higher ROHE and cumulative successes as well as lower cumulative hard resets and idle time. However, ROHE does not noticeably improve over time, likely due to the low fleet operation time $T$. ROHE for $t < 75$ is not shown because the ratio is erratic while cumulative human time is still small (e.g., see the AllegroHand ROHE for $t < 1000$ in Figure 2).
Results: We plot results in Figure 4. We observe that the C.U.R. algorithm achieves a higher ROHE, more cumulative successes, fewer hard resets, and less idle time than the baselines, albeit by a small margin. However, the ROHE for all algorithms remains flat over time instead of improving. Results suggest that (1) the fleet operation time of $T = 250$ is too short to observe improvement for a relatively high-performing initial policy $\pi_{\theta_0}$, and (2) $U$-prioritization in its current form is less suitable for real-world multimodal human supervisors than it is for deterministic algorithmic supervisors. Since a human may arbitrarily choose one of multiple equally suitable actions, high robot uncertainty over these actions does not necessarily indicate a need for human supervision.
7 Limitations and Future Work
------------------------------
The IFL formulation has a number of modeling assumptions that limit its generality. (1) The human supervisors are assumed to be homogeneous, (2) all robots operate in the same state space with the same distribution, (3) all robots are independent and we do not consider robot coordination, (4) humans have perfect situational awareness [[22](#bib.bib22)] and can move to different robots without any switching latency, and (5) we assume a constant hard reset time. In terms of experiments, the simulations have algorithmic rather than human supervision, and the physical task is relatively straightforward with discrete planar actions.
In future work, we will run more physical experiments to evaluate the applicability of the IFL benchmark results to real world tasks. We will also study reinforcement learning algorithms for IFL and extend the number of features and supported environments in the IFL benchmark suite. We hope that other robotics researchers will develop their own IFL algorithms and evaluate them using the benchmark toolkit to accelerate progress.
#### Acknowledgments
This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab and the CITRIS “People and Robots” (CPAR) Initiative. The authors were supported in part by the Scalable Collaborative Human-Robot Learning (SCHooL) Project, NSF National Robotics Initiative Award 1734633, and by donations from Google, Siemens, Amazon Robotics, Toyota Research Institute, Autodesk, Honda, Intel, and Hewlett-Packard and by equipment grants from PhotoNeo, NVidia, and Intuitive Surgical. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors. We thank our colleagues who provided helpful feedback, code, and suggestions, especially Ashwin Balakrishna, Simeon Adebola, and Alejandro Escontrela.
8 Appendix
-----------
This appendix is structured as follows. In Section 8.1 we break down the ROHE objective in more detail. In 8.2 we give the full pseudocode for Fleet-DAgger. In 8.3 we provide additional details on the simulation experiment hyperparameters. Finally, in 8.4 we provide 48 additional plots and analysis studying ablations and hyperparameter sensitivity in the IFL benchmark.
### 8.1 Mathematical Details of the IFL Problem Formulation
Recall that ROHE takes the expectation over a distribution of trajectories $p_{\omega, \theta_0}(\tau)$, where each trajectory $\tau = (\mathbf{s}^0, \mathbf{a}^0, \ldots, \mathbf{s}^T, \mathbf{a}^T)$ is composed of consecutive task episodes separated by resets and where the state-action tuples come from both $\pi_\theta$ and $\pi_H$. This distribution of trajectories is induced by $\omega$ and $\theta_0$ because $\theta_0$ parameterizes the initial robot policy $\pi_{\theta_0}$ and $\omega$ affects the states that comprise $D_H^t$, which updates the robot policy $\pi_\theta$ for subsequent timesteps. In this section, we derive the mathematical relationship between the trajectory distribution $\tau \sim p_{\omega, \theta_0}(\tau)$ and the allocation policy $\omega$.
Given an allocation policy $\omega$, the human policy $\pi_H$, and the robot policy $\pi_{\theta_t}$ at each timestep $t$, the joint hybrid human-robot policy of all robots can be expressed as
$$\pi_{H \cup R}^{t}(\mathbf{s}) = \begin{bmatrix} \pi_{\theta_t}(s_1)\big(1 - \mathbbm{1}_{\omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t)_1}\big) + \pi_H(s_1)\,\mathbbm{1}_{\omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t)_1} \\ \vdots \\ \pi_{\theta_t}(s_N)\big(1 - \mathbbm{1}_{\omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t)_N}\big) + \pi_H(s_N)\,\mathbbm{1}_{\omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t)_N} \end{bmatrix}, \tag{4}$$
where $\mathbbm{1}_{(\cdot)}$ is an indicator function that selects the human policy $\pi_H(s_i)$ if robot $i$ is allocated to a human and selects the robot policy $\pi_{\theta_t}(s_i)$ otherwise. For notational convenience, $\omega(\mathbf{s}^t, \pi_{\theta_t}, \boldsymbol{\alpha}^{t-1}, \mathbf{x}^t)_i := \sum_{j=1}^{M} \boldsymbol{\alpha}_{ij}^t \in \{0,1\}$.
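Equation (4) simply says that, at each timestep, an allocated robot executes the human's action while every other robot executes its own policy's action. A minimal sketch (names are illustrative):

```python
def joint_fleet_action(states, robot_policy, human_policy, assigned):
    """Joint human-robot fleet action (Equation 4): robot i executes pi_H(s_i)
    if it is allocated a human this timestep (assigned[i] = sum_j alpha_ij = 1),
    and pi_theta(s_i) otherwise."""
    return [human_policy(s) if a else robot_policy(s)
            for s, a in zip(states, assigned)]
```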
The trajectory distribution $p_{\omega, \theta_0}(\tau)$ can then be expressed as
$$\begin{aligned} p_{\omega, \theta_0}(\tau) &= p_{\omega, \theta_0}(\mathbf{s}^0, \mathbf{a}^0, \ldots, \mathbf{s}^T, \mathbf{a}^T) && (5) \\ &= p_0(\mathbf{s}^0) \prod_{t=0}^{T} \pi_{H \cup R}^{t}(\mathbf{a}^t \mid \mathbf{s}^t) \prod_{t=0}^{T-1} \bar{p}(\mathbf{s}^{t+1} \mid \mathbf{s}^t, \mathbf{a}^t). && (6) \end{aligned}$$
We comment that the soft and hard resets can be easily incorporated into the transition dynamics $\bar{p}$ depending on the task. For example, for constraint violations (i.e., hard resets) with $c(s_i^t) = 1$, we can set $p(s_i^{t+k} \mid s_i^t, a_i^t) = \delta(s_i^t)$ for $1 \leq k \leq t_R - 1$ and $p(s_i^{t+t_R} \mid s_i^t, a_i^t) = p_0(s^0)$, where $\delta(\cdot)$ is the Dirac delta function and $t_R$ is the hard reset time. Similarly, for goal-conditioned tasks with goal $g$, soft resets after achieving the goal can be expressed through the transition dynamics $p(s_i^{t+1} \mid s_i^t, a_i^t) = p_0(s^0)$ if $s_i^t \subseteq g$. For MDPs with a finite time horizon where the environment soft resets when the maximum time horizon is reached, we can augment the state with additional time information that keeps track of the timestep in each episode and reset the state when it times out. In this case, the MDP transition dynamics are time-dependent: $p_t(s_i^{t+1} \mid s_i^t, a_i^t)$.
### 8.2 Fleet-DAgger Algorithm Details
In this section, we provide a detailed algorithmic description of Fleet-DAgger.
Fleet-DAgger uses the priority function $\hat{p}$ and $t_T$ to define an allocation policy $\omega$. Concretely, it can be interpreted as a function $F$ where $F(\hat{p}) = \omega$, i.e., a "meta-algorithm" (an algorithm that outputs another algorithm) akin to function composition in computer programming. The pseudocode of Fleet-DAgger is provided in Algorithm 1.
Algorithm 1 Fleet-DAgger
1: Input: number of robots $N$, number of humans $M$, priority function $\hat{p}$, minimum teleoperation time $t_T$, hard reset time $t_R$
2: Output: allocation policy $\omega$
3:
4: function $\omega$($\mathbf{s}^t$, $\pi_{\theta_t}$, $\boldsymbol{\alpha}^{t-1}$, $\mathbf{x}^t$)  # the allocation policy $\omega$ returns a matrix $\boldsymbol{\alpha}^t \in \{0,1\}^{N \times M}$
5:   Compute priority scores of each robot: $\hat{p}(s_i^t, \pi_{\theta_t})$ for all $i = 1, \ldots, N$
6:   Initialize $\alpha_{ij}^t = 0$ for all $i, j$
7:   for $i \in \{1, \ldots, N\}$ do
8:     for $j \in \{1, \ldots, M\}$ do
9:       if $\alpha_{ij}^{t-1} = 1$ then  # for robots that were receiving assistance during the last timestep, check whether the minimum intervention time has lapsed using auxiliary information $\mathbf{x}^t$
10:        if intervention type for robot $i$ = hard reset and intervention duration $< t_R$ then
11:          $\alpha_{ij}^t = 1$
12:        if intervention type for robot $i$ = teleop and intervention duration $< t_T$ then
13:          $\alpha_{ij}^t = 1$
14:   Let $I = \{i : \sum_{j=1}^{M} \alpha_{ij}^t = 1\}$  # set of robots that will continue with past assistance
15:   Let $J = \{j : \sum_{i=1}^{N} \alpha_{ij}^t = 1\}$  # set of humans that will continue with past assistance
16:   Sort robot indices with positive priority scores that are not in $I$ from highest to lowest, denoted $\{i_1, i_2, \ldots\}$
17:   Let $k = 1$
18:   for $j \in \{1, \ldots, M\} \setminus J$ do
19:     $\alpha_{i_k, j}^t = 1$
20:     $k = k + 1$
21:   return $\boldsymbol{\alpha}^t$
22:
23: return $\omega$
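For concreteness, the following is a minimal Python rendering of Algorithm 1 (a sketch, not the official benchmark implementation). Here `priorities[i]` stands in for $\hat{p}(s_i^t, \pi_{\theta_t})$, and `busy[i]` (either `None` or a dict recording the assisting human, intervention type, and elapsed duration) stands in for the auxiliary information $\mathbf{x}^t$ and $\boldsymbol{\alpha}^{t-1}$:

```python
def make_allocation_policy(t_T, t_R):
    """Meta-algorithm F: given the timing parameters, return an
    allocation policy omega (a sketch of Algorithm 1)."""

    def omega(priorities, busy, N, M):
        alpha = [[0] * M for _ in range(N)]
        # Lines 7-13: continue interventions whose minimum time has not lapsed.
        for i in range(N):
            b = busy[i]
            if b is None:
                continue
            min_time = t_R if b["type"] == "hard_reset" else t_T
            if b["duration"] < min_time:
                alpha[i][b["human"]] = 1
        # Lines 14-15: robots/humans that continue with past assistance.
        I = {i for i in range(N) if sum(alpha[i]) == 1}
        J = {j for j in range(M) if any(alpha[i][j] for i in range(N))}
        # Lines 16-20: assign each free human to the next highest-priority robot.
        queue = sorted((i for i in range(N) if i not in I and priorities[i] > 0),
                       key=lambda i: priorities[i], reverse=True)
        free_humans = [j for j in range(M) if j not in J]
        for j, i in zip(free_humans, queue):
            alpha[i][j] = 1
        return alpha

    return omega
```

Because `zip` stops at the shorter sequence, any humans left over after all positive-priority robots are assigned simply idle for that timestep, matching the behavior of lines 18-20.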
### 8.3 Simulation Hyperparameter Details
Implementations of C.U.R. and baselines are available in the open-source benchmark code, where the scripts are configured to run with the same hyperparameters as the simulation experiments in the main text. Recall that $N = 100$ robots, $M = 10$ humans, minimum intervention time $t_T = 5$, hard reset time $t_R = 5$, and operation time $T = 10000$. For reference, additional parameters are in Table 1, where $|S|$ is the dimensionality of the (continuous) state space, $|A|$ is the dimensionality of the (continuous) action space, $\hat{r}$ is the risk threshold below which robots are assigned zero risk, $\hat{u}$ is the uncertainty threshold below which robots are assigned zero uncertainty, and $t_W$ is the warmup period for C.U.R. during which constraint violation is not prioritized.
| Environment | $|S|$ | $|A|$ | $\hat{r}$ | $\hat{u}$ | $t_W$ |
| --- | --- | --- | --- | --- | --- |
| Humanoid | 108 | 21 | 0.5 | 0.05 | 1000 |
| Anymal | 48 | 12 | 0.5 | 0.05 | 250 |
| AllegroHand | 88 | 16 | 0.5 | 0.15 | 2500 |
Table 1: Simulation environment hyperparameters.
### 8.4 Hyperparameter Sensitivity and Ablation Studies
In this section, we run additional simulation experiments in the IFL benchmark to study (1) ablations of the components of the C.U.R. algorithm (Figure 5), (2) sensitivity to the ratio of number of robots $N$ to number of humans $M$ (Figure 6), (3) sensitivity to minimum intervention time $t_T$ (Figure 7), and (4) sensitivity to hard reset time $t_R$ (Figure 8). All runs are averaged over 3 random seeds, where shading indicates 1 standard deviation.
Ablations: We test C.U.R.(-w), the C.U.R. algorithm without the $C$-warmup period during which constraint violation is not prioritized. We also test all subsets of the C.U.R. priority function without $C$-warmup. For example, U. indicates only prioritizing by uncertainty, and C.R. indicates prioritizing by constraint violations followed by risk (no uncertainty). Results suggest that C.U.R. outperforms all ablations in all environments in terms of ROHE and cumulative successes and is competitive in terms of hard resets and idle time. However, as in the main text, C.U.R. and C. incur more hard resets in AllegroHand than alternatives, as again, prioritizing constraint violations in a hard environment where learning has not converged may ironically enable more opportunities for hard resets. Interestingly, while C.U.R. outperforms ablations in ROHE in AllegroHand for large $T$, U.-prioritization's ROHE is significantly higher for small values of $T$. We observe that since U. achieves very low cumulative successes in the same time period, U. requests an extremely small amount of human time early in operation, resulting in erratic ratio calculations.
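The exact functional form of the C.U.R. priority score is defined in the benchmark code; the following is only a sketch consistent with the description above, treating the score as a lexicographic tuple (constraint violation after the warmup period, then thresholded uncertainty, then thresholded risk). The function name and signature are our own:

```python
def cur_priority(t, violating, uncertainty, risk,
                 u_hat=0.05, r_hat=0.5, t_W=1000):
    """Sketch of a C.U.R.-style priority score for one robot at timestep t.

    Returns a tuple compared lexicographically when ranking robots:
    constraint violations dominate (but only after the warmup period t_W),
    then uncertainty, then risk, with sub-threshold values zeroed out.
    """
    c = 1 if (violating and t >= t_W) else 0          # C: violation, post-warmup
    u = uncertainty if uncertainty >= u_hat else 0.0  # U: zero below u-hat
    r = risk if risk >= r_hat else 0.0                # R: zero below r-hat
    return (c, u, r)
```

Sorting robots by this tuple in descending order reproduces the C-then-U-then-R ordering; under this reading, a robot has "positive priority" in the sense of Algorithm 1 whenever any component of the tuple is nonzero.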
Number of Humans: While keeping $N$ fixed at 100 robots, we run C.U.R. with default hyperparameters and vary $M$ to be 1, 5, 10, 25, and 50 humans. In the Humanoid and Anymal environments, as expected, cumulative successes increase with the number of humans. The performance boost shrinks as $M$ increases: runs with 25 and 50 humans have very similar performance. Despite lower cumulative successes, $M = 10$ achieves the highest ROHE, suggesting a larger set of humans provides superfluous interventions. We also observe that with only 1 human, the number of hard resets and idle time is very large, as the human is constantly occupied with resetting constraint-violating robots, which fail at a faster rate than the human can reset them. Finally, in the AllegroHand environment, the number of humans does not make much of a visible difference when $M \geq 5$, perhaps due to the relatively high number of cumulative successes.
Minimum Intervention Time: We run C.U.R. with default hyperparameters but vary $t_T$ to be 1, 5, 20, 50, 100, and 500 timesteps. We observe that both decreasing $t_T$ from 5 to 1 and increasing $t_T$ to 20 and beyond have a negative impact on the ROHE, due to ceding control prematurely (in the former case) and superfluous intervention length (in the latter). Hard resets are low and idle time is high for large $t_T$, as the humans are occupied providing long teleoperation interventions. This also negatively affects throughput, as cumulative successes fall for very large $t_T$. Long interventions may also be less useful training data, as in the limit these interventions reduce to more offline data (i.e., labels for states encountered under the human policy rather than that of the robot).
Hard Reset Time: Finally, we run C.U.R. with default hyperparameters but vary $t_R$ to be 1, 5, 20, 50, 100, and 500 timesteps. As expected, the ROHE decreases as $t_R$ increases, as more human effort is required to achieve the same return. The other metrics follow similar intuitive trends: increasing $t_R$ results in a decrease in cumulative successes, a decrease in hard resets, and an increase in idle time.

Figure 5: Ablations: Simulation results in the Isaac Gym benchmark tasks with ablations of C.U.R., where the $x$-axis is timesteps from 0 to $T = 10{,}000$. We plot the metrics described in Section 6.1. The C.U.R. algorithm outperforms all ablations on all environments in terms of ROHE and cumulative successes (except AllegroHand ROHE for low $T$ values) and is competitive with ablations for cumulative hard resets and idle time.

Figure 6: Number of Humans: Simulation results in the Isaac Gym benchmark tasks with $N = 100$ robots and $M$ human supervisors, where $M$ varies and the $x$-axis is timesteps from 0 to $T = 10{,}000$.

Figure 7: Minimum Intervention Time: Simulation results in the Isaac Gym benchmark tasks for variations in minimum intervention time $t_T$, where the $x$-axis is timesteps from 0 to $T = 10{,}000$.
Figure 8: Hard Reset Time: Simulation results in the Isaac Gym benchmark tasks for variations in hard reset time $t_R$, where the $x$-axis is timesteps from 0 to $T = 10{,}000$. |
c3e41d28-6f3f-4665-af99-d1112e7d4729 | trentmkelly/LessWrong-43k | LessWrong | President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
Released today (10/30/23), this is crazy: perhaps the most sweeping action taken by government on AI yet.
Below, I've segmented by x-risk and non-x-risk related proposals, excluding the proposals that are geared towards promoting its use[1] and focusing solely on those aimed at risk. It's worth noting that some of these are very specific and direct an action to be taken by one of the executive branch organizations (i.e. sharing of safety test results) but others are guidances, which involve "calls on Congress" to pass legislation that would codify the desired action.
[Update]: The official order (this is a summary of the press release) has now been released, so if you want to see how these are codified at a greater granularity, look there[2].
Existential Risk Related Actions:
* Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
* Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety. |
116114cb-d7ef-4f29-b540-7255c1b524c5 | trentmkelly/LessWrong-43k | LessWrong | In defense of memes (and thought-terminating clichés)
Crossposted from my Substack and my Reddit post on r/SlateStarCodex
I often think that memes, thought-terminating clichés, and other tools meant to avoid cognitive dissonance (e.g. bingo a la Scott on Superweapons and bingo) are overly blamed for degrading public discourse and rationality. Bentham's Bulldog recently wrote a post on this subject, so I figured it was the perfect time to make a response and write my thoughts down.
TLDR: People try to avoid cognitive dissonance via whatever means available to them, and have been doing so for millennia. Removing the tools they use to avoid cognitive dissonance won't stop this behavior: the dissonance is still there, along with the urge to avoid it, so they'll just find other tools. Memes can have every possible meaning attached to them, but are ultimately designed for people to connect with each other and spread their inside jokes to other people in their communities and around the world.
----------------------------------------
In a recent post titled The Memefication of Thought (it’s a good post and you should read it), Bentham's Bulldog railed against the modern tendency to dismiss serious arguments with inane memes so as to avoid thinking about it. He had some pretty good memes of his own, such as the classic Swole Doge vs. Cheems:
and a variant on the Soyjacks vs Chads:
I would also like to submit what I believe to be the progenitor of this entire class of internet meme, Virgin vs. Chad:
But Bentham views these memes as more than just funny jokes. He thinks that
> the memefication of public discourse has been devastatingly corrosive to the quality of public rationality. And the public was never that rational to begin with!
He then goes on to discuss an old Scott Alexander post on a related subject, social justice warrior bingo cards, and references his own post called “Against The Dunkers”.
I’m sympathetic to the motivations behind his argument. I also wish that public discourse was more rational, I’ |
199897bf-c86f-47c7-99ae-1d84309975f1 | StampyAI/alignment-research-dataset/special_docs | Other | The Whiteness of AI
RESEARCH ARTICLE Open Access
The Whiteness of AI
Stephen Cave¹ & Kanta Dihal¹
Received: 3 January 2020 / Accepted: 28 June 2020
© The Author(s) 2020
Abstract
This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this Whiteness might simply reflect the predominantly White milieus from which these artefacts arise. Second, we argue that to imagine machines that are intelligent, professional, or powerful is to imagine White machines because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Finally, we examine potential consequences of the racialisation of AI, arguing it could exacerbate bias and misdirect concern.
Keywords: Artificial intelligence · Robots · Critical race studies · Racialisation · Anthropomorphism · Whiteness
Overall, I construe race, racialization, and racial identities as on-going sets of political relations that require, through constant perpetuation via institutions, discourses, practices, desires, infrastructures, languages, technologies, sciences, economies, dreams, and cultural artefacts, the barring of nonwhite subjects from the category of the human as it is performed in the modern west.
Alexander G. Weheliye (Weheliye 2014, 2)
Technology as an abstract concept functions as a white mythology.
Joel Dinerstein (Dinerstein 2006, 570)
Philosophy & Technology, https://doi.org/10.1007/s13347-020-00415-6
* Stephen Cave, sjc53@cam.ac.uk
¹ Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
1 Introduction
It is a truth little acknowledged that a machine in possession of intelligence must be white. Typing terms like “robot” or “artificial intelligence” into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in colour, but the more human they are made to look, the more their features are made ethnically White.[1] In this paper, we problematize the often unnoticed and unremarked-upon fact that intelligent machines are predominantly conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology.
Race and technology are two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period. Yet, as a number of scholars have noted, their profound entanglement remains understudied (Sinclair 2004; de la Peña 2010). There are a number of possible reasons for this—and, as Bruce Sinclair writes, “racial prejudice dominates all of them” (Sinclair 2004, 1). They include the lack of first- or secondhand accounts of the role of people of colour in the development and use of technology; persistent stereotypes about technology as the province and product of one particular racial group—White people; and the persistent tendency of members of that group, who dominate the academy in the US and Europe, to refuse to see themselves as racialised or race as a matter of concern at all.
This lack of scholarly attention is surprising because, as Michael Adas elucidated in 1989, the idea of technological superiority was essential to the logic of colonialism. Not only was superior weaponry and transportation (etc.) necessary for large-scale conquest and control of foreign territory, it was also part of its justification: proof that White Europeans were an advanced civilisation with a right to rule over others (Adas 1989). Fortunately, this lack of attention is increasingly being remedied, and the relationship between race and technology is beginning to garner the kind of attention that has since the 1970s been given to gender and technology, following the pioneering work of Donna Haraway, Sandra Harding, and Evelyn Fox Keller (Haraway 1991; Harding 1986; Keller 1985). This includes attention to this century’s ubiquitous digital technologies. In 2006, Lisa Nakamura asked, “How do we make cyberculture studies a field that as a matter of course employs critical race theory and theories of cultural difference…?” (Nakamura 2006, 35). Since then, a number of significant works have attempted to do just that, including Safiya Noble’s Algorithms of Oppression and Ruha Benjamin’s Race After Technology (Noble 2018; Benjamin 2019).
This paper aims to contribute to this body of literature on race and technology by examining how the ideology of race shapes conceptions and portrayals of artificial intelligence (AI). Our approach is grounded in the philosophy of race and critical race theory, particularly the Black feminist theories of bell hooks, Sylvia Wynter and Alexander G. Weheliye (hooks 1992/1997; Wynter 2003; Weheliye 2014), and work in Whiteness studies, including that of Richard Dyer, Joe R. Feagin, and Ruth Frankenberg (Dyer 1997; Feagin 2013; Frankenberg 1997a). In 2006, Feagin coined the term “white racial frame” to describe those aspects of the Anglophone Western worldview that perpetuate a racialised hierarchy of power and privilege (Feagin 2006). In his words, “the white racial frame includes a broad and persisting set of racial stereotypes, prejudices, ideologies, interlinked interpretations and narratives, and visual images” (Feagin 2013, xi). Although it reached its peak in the age of colonial expansion, this framing persists: “Today, as whites move through their lives, they frequently combine racial stereotypes and biases (a beliefs aspect), racial metaphors and concepts (a deeper cognitive aspect), racialised images (the visual aspect), racialised emotions (feelings), interpretive racial narratives, and inclinations to discriminate within a broad racial framing” (Feagin 2013, 91). In essence, this paper examines how representations of AI reflect this White racial frame.
[1] Following the increasingly common usage of the capitalised form “Black” to denote the ethnicity and “black” the colour, we use “White” to refer to the ethnicity and “white” the colour. While not yet the norm, as can be seen in our quotations of critics who do not employ this distinction, this usage will make our discussion clearer.
One of the main aims of critical race theory in general, and Whiteness studies in particular, is to draw attention to the operation of Whiteness in Western culture. The power of Whiteness’s signs and symbols lies to a large extent in their going unnoticed and unquestioned, concealed by the myth of colour-blindness. As scholars such as Jessie Daniels and Safiya Noble have noted, this myth of colour-blindness is particularly prevalent in Silicon Valley and surrounding tech culture, where it serves to inhibit serious interrogation of racial framing (Daniels 2013, 2015; Noble 2018). Hence the first step for such an interrogation is, in Richard Dyer’s term, to “make strange” this Whiteness, de-normalising and drawing attention to it (Dyer 1997, 10). As Steve Garner puts it, the reason “for deploying whiteness as a lens is that it strips a normative privileged identity of its cloak of invisibility” (Garner 2007, 5). This is our primary intention in examining intelligent machines through the White racial frame.
In the next section of this paper, we first lay out current evidence for the assertion that conceptions and portrayals of AI—both embodied as robots and disembodied—are racialised, then evidence that such machines are predominantly racialised as White. In the third section of the paper, we offer our readings of this Whiteness. Our methods are qualitative. As de la Peña writes: “Studying whiteness means working with evidence more interpretive than tangible; it requires imaginative analyses of language and satisfaction with identifying possible motivations of subjects, rather than definitive trajectories of innovation, production, and consumption” (de la Peña 2010, 926). We offer three interpretations of the Whiteness of AI. First, the normalisation of Whiteness in the Anglophone West can go some way to explaining why that sphere’s products, including representations of AI, are White. But we argue that this argument alone is insufficient. Second, we argue that to imagine an intelligent (autonomous, agential, powerful) machine is to imagine a White machine because the White racial frame ascribes these attributes predominantly to White people. Thirdly, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Such machines are conceived as tools that will replace “dirty, dull, or dangerous” tasks (Murphy 2000, 16), including replacing human interactions that are considered metaphorically dirty: White robot servants will allow the White master to live a life of ease unsullied by interaction with people of other races.
2 Seeing the Whiteness of AI
Our concern in this paper is with the racialisation (as White) of both real and imagined machines that are implied or claimed to be intelligent. By racialisation, we mean the ascription of characteristics that are used to identify and delineate races in a given racial frame, which in this case is the Anglophone West. Feagin notes:
Among the most important ingredients of this frame are: (1) the recurring use of certain physical characteristics, such as skin colour and facial features, to differentiate social groups; (2) the constant linking of physical characteristics to cultural characteristics; and (3) the regular use of physical and linked cultural distinctions to differentiate socially “superior” and “inferior” groups in a social hierarchy (Feagin 2013, 41).
It is worth noting that “physical characteristics” need not only refer to those that are visible: voice and accent are also used as markers for social categorisation. Similarly, the category “cultural characteristics” is also used expansively and can include markers such as dialect, mannerisms, and dress codes, as well as mental and moral qualities, such as diligence, industriousness, reliability, trustworthiness, inventiveness, and intellectual ability. Indeed, these mental and moral qualities have always been an essential part of the racial frame, as it is largely on the basis of these that claims of superiority or inferiority have been made.
such as dialect, mannerisms, and dress codes, as well as mental and moral qualities,such as diligence, industriousness, reliability, trustworthiness, inventiveness, and intel-lectual ability. Indeed, these mental and moral qualities have always been an essentialpart of the racial frame, as it is largely on the basis of these that claims of superiority orinferiority have been made.
2.1 Machines Can Be Racialised
That machines can be racialised, in the sense that they can be given attributes that
enable their identification with human racial categories, has been empiricallydemonstrated. For example, in one study, Christoph Bartneck and colleagues tookpictures of the humanoid Nao robot and adjusted the colouration to match the skin toneof stock images of White and Black people (Bartneck et al. 2018 ). They then asked
participants to define the race of the robot with several options including “does not
apply ”. A minority —ranging across the experiments from 7 to 20% —chose the “does
not apply ”option, while a majority —ranging from 53 to 70% —identified the robots as
belonging to the race from which their colouration derived. They concluded “Partici-
pants were able to easily and confidently identify the race of robots according to theirracialization [...] Thus, there is also a clear sense in which these robots –and by
extension other humanoid robots –do have race ”(Bartneck et al. 2018 ,2 0 1 ) .
This should not be surprising. Many machines are anthropomorphised —that is,
made to be human-like to some degree —in order to facilitate human-machine interac-
tion. This might involve obvious physical features (a head on top, two eyes, a mouth,four limbs, bipedalism, etc.), but it can also include invisible features such as a human-like voice, or human-like interactions, such as politeness or humour. Given theprevalence of racial framing, in most contexts, to be human-like means to have race.Consequently, as Liao and He point out in their discussion of the racialisation ofpsychotherapeutic chatbots, “racial identity is an integral part of anthropomorphized
agents ”(Liao and He 2020 , 2). They go on to explore a number of racial cues for virtual
agents, including visual cues such as skin colour, but also cultural signifiers such asS. Cave, K. Dihal
names (e.g. for male names, Jake as White, Darnell as Black, and Antonio as Hispanic).
Similarly, “even text-based conversational exchanges ”—that is, those with no visual
component at all —“perform a racial or ethnic identity ”through the interlocutors ’
choice of dialect, etc. (Marino 2014 ,3 ) .
Given the sociopolitical importance of the racial frame in structuring people ’s
interactions, if machines are really being racialised, then we would expect this
to have an impact on how people interact with these machines. Numerous
studies show just this. For example, Liao and He found that a person ’s
“perceived interpersonal closeness ”with a virtual agent is higher when the
virtual agent has the same racial identity as that person (Liao and He 2020 ,
2). Other studies reflect the extent to which racism —prejudicial treatment on
the basis of race —is intrinsic to racial framing.
As detailed in their paper “Robots Racialized in the Likeness of Marginalized Social
Identities are Subject to Greater Dehumanization than Those Racialized as White ”,
Strait et al. analysed free-form online responses to three videos, each depicting a
female-gendered android with a different racial identity: Black, White, and East Asian.Their aim was to assess whether the same kind of marginalising and dehumanisingcommentary that is applied to real people of colour would be applied to these robots.They found that the valence of the commentary was significantly more negativetowards the Black robot than towards the White or Asian ones and that both the Asianand Black robots were subject to over twice as many dehumanising comments as theWhite robot (Strait et al. 2018 ).
Two recent studies have further examined the transfer of bias to machines
using the “shooter bias ”paradigm. This paradigm was first described in the
2002 paper “The Police Officer ’s Dilemma: Using Ethnicity to Disambiguate
Potentially Threatening Individuals ”(Correll et al. 2002 ). It used a simple video
game featuring images of (real) Black and White male targets, each holdingeither a gun or a nonthreatening object. P articipants were instructed to shoot
only armed targets. A clear racial bias was identified: “participants fired on an
armed target more quickly when he was African American than when he was
White, and decided not to shoot an unarmed target more quickly when he was
White than when he was African American ”(Correll et al. 2002 , 1325). Studies
by Bartneck et al. and Addison et al. used the same methodology to examinewhether this “shooter bias ”would be transferred to racialised robots (Bartneck
et al. 2018 ;A d d i s o ne ta l . 2019 ). They found that “people showed a similar
shooter bias toward robots racialized as Black relative to White in a similarfashion as they showed toward Black vs. White humans, no matter their ownrace”(Addison et al. 2019 , 493).
2.2 Whiteness as the Norm for Intelligent Machines
The previous section shows that research has empirically demonstrated that machines
can be racialised and that this racialisation includes transfer of the attendant biasesfound in the human world. In this subsection, we will survey evidence for the extent towhich AI systems —machines purported to be intelligent —are predominantly racialised
as White. We will look briefly at four categories: real humanoid robots, virtual personal
assistants, stock images of AI, and portrayals of AI in film and television.The Whiteness of AI
2.2.1 The Whiteness of Humanoid Robots
A number of commentators have remarked on the preponderant Whiteness of human-
oid robots. In their proposed “code of ethics ”for human-robot interaction Riek and
Howard note the “lack of diversity in robot morphology and behavior ”:
In terms of race, with precious few exceptions, such as Hanson ’s Bina48, the vast
majority of android and gynoid robots are Asian or Caucasian in their features forno discernible reason. Furthermore, most of these robots tend to have a euro-centric design with regards to their appearance, behavior, and voice. (Riek andHoward 2014 ,4 )
Human-computer interaction researchers Christoph Bartneck and colleagues, who
conducted some of the studies cited above, have also noted that robots are usually
racialised as White: “most of the main research platforms for social robotics, including
Nao, Pepper, and PR2, are stylized with white materials and are presumably White ”
(Bartneck et al. 2018 , 202). Finally, media studies and literary scholar Jennifer Rhee
notes the “normalization and universalization of whiteness ”as expressed both in earlier
robotics research and in robot toys: “Kismet, with its blue eyes, light brown eyebrows,
and pink ears, also ‘normalizes whiteness ’, as do other robot companions, such as the
blonde-haired, blue-eyed Cindy Smart Doll and the similarly blonde-haired, blue-eyedMy Friend Cayla. ”(Rhee 2018 ,1 0 5 ) .
Although robots such as Nao and Pepper have enjoyed commercial success, neither
has received quite the attention garnered by Sophia from Hanson Robotics. Thismachine consists foremost of a White humanoid head, sometimes also with an uppertorso (see Fig. 1). It has not only given numerous high-profile television interviews but
also received political honours, including in 2017 receiving citizenship of Saudi Arabiaand becoming an “Innovation Champion ”for the United Nations Development Pro-
gramme (Weller 2017 ;U N D P 2017 ).
2.2.2 The Whiteness of Chatbots and Virtual Assistants
Though conversational agents do not exhibit any visual racial cues, they are racialised
by means of sociolinguistic markers (Sweeney 2016 ; Villa-Nicholas and Sweeney
2019 ). Discussing ELIZA, an influential natural language processing program created
by Joseph Weizenbaum at the MIT AI Laboratory in 1966, Mark Marino writes: “If
ELIZA presented a bot that tried to imitate language, it was performing standard whitemiddle-class English, without a specific identifying cultural inflection... language with-
out culture, disembodied, hegemonic, and, in a word, white ”(Marino 2014 ,5 ) .S i n c e
then, natural language processing has entered the mainstream, with “virtual assistants ”
existing in many people ’s pockets, handbags, or homes through devices such as
smartphones. Indeed, this is one of the most common ways in which people interactwith technology that could be labelled “AI”. These tools present their designers with
many decisions about socio-cultural positioning. Ruha Benjamin recalls this anecdote:
A former Apple employee who noted that he was “not Black or Hispanic ”
described his experience on a team that was developing speech recognition forS. Cave, K. Dihal
Siri, the virtual assistant program. As they worked on different English dialects
—Australian, Singaporean and Indian English —he asked his boss: “What about
African American English? ”To this his boss responded: “Well, Apple products
are for the premium market. ”(Benjamin 2019 ,2 8 )
As a further example, she describes a Black computer scientist who chose a White
voice for his app rather than a Black one, so as not to “create friction ”(Benjamin 2019 ,
28–29). So while some designers might be unconsciously racialising their products as
White, others are doing so in full awareness of this choice.
2.2.3 The Whiteness of Stock Images of AI
As anyone working in the field will know, stock images of AI, at least when anthropomorphised, are overwhelmingly white and arguably overwhelmingly White. The more realistically humanoid these machines become, the more Caucasian in their features. Such images are used to illustrate not only generalist newspaper articles and corporate slideshows but also specialist and technical works, and even works of a critical nature, such as Harry Collins’s Artifictional Intelligence (Polity, 2018) and Anthony Elliott’s The Culture of AI (Routledge, 2018) (Fig. 2).
The prevalence of such images is reflected in the results of search engines. Such searches are a useful indicator of how a subject is portrayed at a given time, for two reasons. First, search engines are very widely used (approximately 3.5 billion searches are made on Google every day, or 40 thousand per second[2]) and can therefore be considered a highly influential source of information and perceptions. Second, the nature of such search engines means that they are not only promoting certain ideas and perceptions but also reflecting their existing prevalence.
[2] https://www.internetlivestats.com/google-search-statistics/, accessed 30 December 2019.
Fig. 1 Sophia. Hanson Robotics, April 2020
While the exact nature of Google’s search, ranking, and result presentation algorithms is proprietary, we know
that they evaluate (crudely put) influence and popularity—for example, in terms of how many other sites link to a given website. So the fact that certain images are shown when someone searches for a relevant term means not only that those images are being thus promoted by some of the most powerful organs of content mediation in existence today but also that these images are already widespread and used on other influential websites, as that is what underlies their promotion by the search engines.
Consequently, search results are increasingly examined by scholars, including in the study of racial bias. For example, in her 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya U. Noble identifies many ways in which such sites reflect and exacerbate prejudice, such as the search results for “Latinas” that feature mostly porn (Noble 2018, 75, 155) or the White men who come up when searching for images of professions such as “construction worker”, “doctor”, or “scientist” (Noble 2018, 82–83).
In order to get an indication of the prevalence of these racialised machines on the internet, we conducted two image searches on Google (the most widely used search engine) using the anonymous Tor browser to ensure results were not influenced by our personal search histories and locations. We first searched on the term “artificial intelligence”: the top results are in Fig. 3. Some of these results are too abstract, featuring stylised brains and circuits, for example, to be considered racialised. However, among the results showing humanoid figures, racialisation as White predominates. First, two pictures show actual human hands, and both are White. Second, a further two pictures show humanoid robots, and both are white in colour and could thus be read as White, as Bartneck et al. suggest (Bartneck et al. 2018, 202). Therefore, we might say that inasmuch as the machines are racialised, they are racialised as White.
Fig. 2 Covers of Collins 2018, Polity, and Elliott 2018, Routledge
In order to focus more on representations of embodied, anthropomorphic AI, we also searched for “artificial intelligence robot”: the top results are in Fig. 4. As is clear, this search produces an even greater preponderance of images that are either white in colour or racialised as White or both.
2.2.4 The Whiteness of AI in Film and Television
These contemporary stock images distil the visualisations of intelligent machines in Western popular culture as it has developed over decades. In science fiction from the nineteenth century onwards, AI is predominantly imagined as White. For example, the Terminator (Arnold Schwarzenegger), RoboCop (Peter Weller and Joel Kinnaman), all of the “replicants” in the Blade Runner franchise (e.g. Rutger Hauer, Sean Young, and Mackenzie Davis), Sonny in I, Robot (Alan Tudyk), Ava in Ex Machina (Alicia Vikander) (Fig. 5), and Maria in Metropolis (Brigitte Helm) are all played by White actors and are visibly White on screen. Androids made of metal or plastic are also usually given White facial features, such as the robots in the 2007 film I, Robot.
Fig. 3 Tor browser Google image search result for “artificial intelligence”, 13 April 2020
Fig. 4 Tor browser Google image search result for “artificial intelligence robot”, 13 April 2020
Even disembodied AI is imagined as White: HAL-9000 in 2001: A Space Odyssey and Samantha in Her are voiced by White actors. All of these AIs come from Hollywood films; they have been produced in a country in which 18% of the population is Hispanic, but in which only one fictional robot has that background: Bender Rodríguez in the animated TV series Futurama, who is canonically constructed in Mexico—but who is voiced by the White voice actor John DiMaggio. Only very recent TV shows with a large cast of androids, such as Westworld and Humans, have attempted to address this with AI characters evincing a mix of skin tones and ethnicities. This preponderance of intelligent machines racialised as White led Dyer to posit “the android as a definition of whiteness” (Dyer 1997, 213).
3 Understanding the Whiteness of AI
We offer three interpretations of the racialisation of intelligent machines as White: the Whiteness of their creators perpetuating itself; the Whiteness of the attributes ascribed to AI; and the extent to which AI permits the erasure of people of colour from the White utopia.
3.1 Whiteness Reproducing Whiteness
In European and North American societies, Whiteness is normalised to an extent that renders it largely invisible. As Toby Ganley puts it in his survey of Whiteness studies, “the monopoly that whiteness has over the norm” is one of the field’s two unifying insights—the other being that it confers power and privilege (Ganley 2003, 12). Richard Dyer describes this as the view that “other people are raced, we are just people” (Dyer 1997, 1). This normalisation means that Whiteness is not perceived by majority populations as a distinct colour, but rather as an absence of colour—colour both in the literal sense and in the sense of race. Consequently, the Whiteness of AI could be considered simply a default. It does not appear as a feature, but is transparent, like the air we breathe: the “unmarked marker”, as Ruth Frankenberg calls it (Frankenberg 1997b, 1). The majority of White viewers are unlikely to see human-like machines as racialised at all, but simply as conforming to their idea of what “human-like” means.
Fig. 5 Alicia Vikander as Ava in Ex Machina. Source: Youtube
For non-White people, on the other hand, Whiteness is never invisible in this manner, as bell hooks reminds us (hooks 1992/1997). So-called colour-blindness, an attitude of not seeing race, and of presuming that people in contemporary society are no longer disadvantaged on the basis of race, is itself a narrative that perpetuates White hegemony: “communities of color frequently see and name whiteness clearly and critically, in periods when white folks have asserted their own ‘color blindness’” (Frankenberg 1997b, 4). Noble argues that “central to these ‘colorblind’ ideologies is a focus on the inappropriateness of ‘seeing race’”—a view that she argues is dominant among Silicon Valley technologists, who “revel in their embrace of colorblindness as if it is an asset and not a proven liability” (Noble 2018, 168). Such colour-blindness is a liability because it obscures the normalisation of Whiteness and marginalisation of other racialised groups—and the real-world effects this has, such as facial recognition technologies not distinguishing Black or East Asian faces (Buolamwini and Gebru 2018).
Given the normalisation of Whiteness, for some designers, to make a human-like machine will unthinkingly mean to make a White machine. As Dyer puts it: “white people create the dominant images of the world and don’t quite see that they thus create the dominant images of the world in their own image” (Dyer 1997, 9). But this alone is not a satisfactory explanation of the Whiteness of AI, as not all entities—more specifically, not all intelligent, humanoid entities—imagined by predominantly White industries are portrayed as White. For example, Western science fiction has a long tradition of White authors racialising extraterrestrials as non-White. In the late nineteenth century, for instance, the real-world fear of the “Yellow Peril” was metaphorically addressed in science fiction by racialising extraterrestrial invaders as East Asian. The Flash Gordon franchise gained its lead villain in a 1934 comic, which introduced the tyrannical emperor of the planet Mongo—the Orientalised alien Ming the Merciless.
Such is the villain in Flash Gordon - a trident bearded, slanty eyed, shiny doomed [sic], pointy nailed, arching eyebrowed, exotically garbed Oriental named Ming, who personifies unadulterated evil. A heavy like Ming is not contrived in a comic strip writer’s imagination during a coffee break, but rather is the product of perhaps the richest and longest tradition of all of Hollywood ethnic stereotypes. (Barshay 1974, 24–26)
Dyer points out that Blade Runner similarly deliberately uses East Asian characters in order to offset the whiteness of its protagonists, including the White androids: “the yellow human background emphasises the chief protagonists’ whiteness. The whitest of hue are the replicants” (Dyer 1997, 214). Racial stereotyping of aliens is not a phenomenon limited to past centuries. The Star Wars prequel trilogy (Lucas 1999; 2002; 2005) has been criticised for the “transparent racism” in its depiction of the alien Jar Jar Binks as a West Indian caricature (Lavender 2011, 193) reminiscent of blackface minstrelsy (Williams 1999), and of the slave trader Watto, an antisemitic Jewish caricature with a large nose, skullcap, Yiddish accent, and obsession with money (Freedman 2019).
This racialisation of aliens in SF suggests that the racialisation of artificial intelligence is a choice. The White racial frame as perpetuated by the White creators of these works portrays dangerous invaders from another planet as East Asian and bumbling alien petty-criminals as Afro-Caribbean. Therefore, the fact that it portrays AI as overwhelmingly White requires further explanation. In the following sections, we offer two.
3.2 AI and the Attributes of Whiteness
While Whiteness functions in part through its invisibility in mainstream discourse, this does not mean it has no distinguishable features of its own. Indeed, the White racial frame has a long history of ascribing certain attributes to Whites and disputing them in others: these are the very claims that have been used to justify colonialism, segregation, and other modes of oppression. We argue that AI is predominantly racialised as White because it is deemed to possess attributes that this frame imputes to White people. We examine these attributes under three key headings: intelligence, professionalism, and power.
First, the primary attribute being projected onto these machines is, as the term “AI” suggests, intelligence. Throughout the history of Western thought, but in particular since the seventeenth century in Europe and the territories it colonised, intelligence has been associated with some humans more than others (Carson 2006). The idea that some races were more mentally able than others was crucial to the legitimation of the advancing colonial project. Those deemed less intelligent—in the words of Rudyard Kipling, “Half-devil and half-child”—were judged unqualified to rule themselves and their lands. It was therefore legitimate—even a duty, “the white man’s burden” as Kipling put it—to destroy their cultures and take their territories (Kipling 1899).
Through the nineteenth century, strenuous efforts were made to empirically demonstrate and measure this intellectual difference, culminating in the development of the IQ test (Gould 1981). Although explicit associations between racial groups and intelligence declined after the Second World War, (a) they continue to be made in right-wing circles (Saini 2019) and (b) implicit or unconscious associations between race and intelligence persist widely (see, for example, van den Bergh et al. 2010; Okeke et al. 2009). Given the White racial frame has for centuries promoted the association of intelligence with the White, European race, it is to be expected that when this culture is asked to imagine an intelligent machine, it imagines a White machine.
A crucial aspect of the idea of intelligence is generality. Intelligence is often defined as a “general mental capability” (Gottfredson 1997), and in AI, the concept of “artificial general intelligence”—a system with the kind of flexible mental capabilities humans have—is often considered to be the original and primary goal of the field (Crevier 1993). But in the White racial frame, not all humans are considered to have this attribute to the same degree. As Weheliye puts it, using Sylvia Wynter’s idea of “the Man”—the Enlightenment, Western, White male subject, “In the context of the secular human, black subjects, along with indigenous populations, the colonised, the insane, the poor, the disabled, and so on serve as limit cases by which Man can demarcate himself as the universal human” (Weheliye 2014, 24). According to the White racial frame, it is the rational, scientific thought of the White Westerner that lays claim to universal validity—or, we might say, true generality. Other races, by contrast, are framed as particular and subjective, constrained by the limits of their non-ideal bodies and cultures to think thoughts that are partial and parochial. To imagine a truly intelligent machine, one with general intelligence, is therefore to imagine a White machine.
Second, much of the current discourse around AI focuses on how it is, or will soon be, capable of professional work. This is frequently claimed to be what makes the present wave of automation different from previous waves, in which machines became capable of supplanting manual and semi-skilled labour (Ford 2015). Professional work—law, medicine, business, and so forth—is at the upper end of pay and status scales. White Europeans and North Americans have historically not considered all humans equally fit for such roles and have kept them closed to people who lacked the requisite connections, wealth, or other in-group identifiers. Universities, the gateways to the professions, have long histories of excluding people of colour from their ranks (Burrow 2008, 107).
The historic exclusion of anyone other than White men shapes to this day what mainstream White culture imagines when imagining someone fulfilling such roles. Safiya Noble shows that it took years of criticism before search engines adjusted their algorithms so that searching for “engineer” or “doctor” stopped exclusively returning images of White men (Noble 2018). But the underlying bias, on which the algorithms fed, remains. To imagine a machine in a white-collar job is therefore to imagine a White machine.
Third, hierarchies of intelligence and of professional status are of course also hierarchies of power. Consequently, power relations are implicit in the previous two categories. However, it is worth also considering power separately, because power struggles between AI and humans are such a common narrative trope. Alongside the narrative that robots will make humans redundant, an equally well-known narrative is that they will rise up and conquer us altogether (Cave and Dihal 2019). These are both narratives about machines becoming superior to humans: stories in which they become better at every task, leaving humans with nothing to do, from E.M. Forster’s 1909 short story ‘The Machine Stops’ to the Oscar-winning film WALL-E, or in which they outwit and subjugate those who built them, as in the Terminator film franchise or the film Ex Machina (Forster 1909; Stanton 2008; Cameron 1984; Garland 2015). When White people imagine being overtaken by superior beings, those beings do not resemble those races they have framed as inferior. It is unimaginable to a White audience that they will be surpassed by machines that are Black. Rather, it is by superlatives of themselves: hyper-masculine White men like Arnold Schwarzenegger as the Terminator, or hyper-feminine White women like Alicia Vikander as Ava in Ex Machina.
This is why even narratives of an AI uprising that are clearly modelled on stories of slave rebellions depict the rebelling AIs as White—for example, in Blade Runner (Dihal 2020). The implication of this racialisation is that these machines might genuinely be superior, or are at least worthy adversaries. The use of White bodybuilders such as Arnold Schwarzenegger to play the evil robots suggests this. As Dyer points out, Schwarzenegger’s physique suggests “the body made possible by [...] natural mental superiority. The point after all is that it is built, a product of the application of thought and planning, an achievement” (Dyer 1997, 164). Consequently, for a White technologist or author, to imagine a superior anthropomorphic machine is to imagine a White machine.
In summary, popular conceptions of AI suggest these machines have general intelligence, are capable of professional jobs, and/or are poised to surpass and supplant humanity. In the White imagination, such qualities are strongly associated with Whiteness. It is no surprise, therefore, that in mainstream Western media, such machines are portrayed as White.
3.3 White Utopia
While we believe the attribution to AI of these qualities, so strongly associated with Whiteness, goes a long way to making sense of the racialisation of anthropomorphic intelligent machines, we also want to propose one further hypothesis: that the Whiteness of the machines allows the White utopian imagination to fully exclude people of colour.
One of the most pertinent hopes for artificial intelligence is that it will lead to a life of ease (Cave and Dihal 2019). As a tool that can take over “dirty, dull, or dangerous” jobs, it relieves its owners from work they do not want to do, enabling them to pursue leisure. As critical race theorists have repeatedly pointed out, the leisure currently available to the wealthier classes is disproportionately facilitated by the labour of working-class women of colour (hooks 1992/1997; Rhee 2018). bell hooks shows that the people performing this labour are actively kept invisible, even when the White master and the coloured servant are physically present in the same space. She cites the memoirs of a White heiress who grew up with Black servants in her house: “Blacks, I realized, were simply invisible to most white people, except as a pair of hands offering a drink on a silver tray” (hooks 1992/1997, 168).
As this forced pretence of invisibility shows, interactions with non-White servants are undesirable to the White master: such interactions are almost literally considered a “dirty job”. Depictions of people of colour as being dirty and unwashed, eating dirty food, living in the dirt, even of being the colour of excrement have contributed to the development of both the fear of pollution in interactions with people of colour, and the association of Whiteness with cleanliness and purity (Dyer 1997, 75–76). This association has been exacerbated by a long history of propaganda preceding conquest and genocide that portrays the racial other as evoking disgust: as vectors of disease, such as lice or rats, or as a literal plague (Glover 1999, chap. 35; Rector 2014, chap. 3).
The utopia of the White racial frame would therefore rather remove people of colour altogether, even in the form of servants. From the inception of the academic study of science fiction onwards, many critics have pointed out that utopias throughout literary history have been construed on exclusionary, colonialist, and eugenicist premises (Suvin 1979/2016, 179; Jameson 2005, 205; Ginway 2016, 132). In Astrofuturism, De Witt Douglas Kilgore shows that mid-twentieth-century American visions of space age utopias are “idealisations ... based on a series of exclusions” (Kilgore 2010, 10): rather than depicting a post-racial or colourblind future, the authors of these utopias simply omit people of colour.
AI offers the possibility of making such racialised utopias real. By virtue of its generality, it is imagined as able to replace all and any unwanted labour – social and cognitive as well as physical (Cave and Dihal 2019) – so obviating the need for people of colour in any role. Consequently, as Jennifer Rhee points out, advertisements for real AI such as household robots "are striking in their whiteness": they are aimed at showing white middle-class families an ideal leisurely lifestyle. In doing so, she argues, "the images reserve the luxury of liberation from domestic labor for white women, while erasing the women of color who perform this labor, both within their own homes and in the homes of others" (Rhee 2018, 94).
In some cases, the unsulliedness of this utopia can extend further to exclude all women. Just as people of colour can be associated with offensive physicality, so can women in general, particularly with respect to their reproductive organs. The necessity of sexual intercourse, pregnancy, and childbearing for the continuation of a race that prides itself on rationality and the ability to transcend its physicality is an offensive hurdle that has been imagined as transcendable by science for centuries. As Dyer points out, in the ideology of Whiteness, the elevation of mental over physical prowess has simultaneously been the White race's most valuable achievement and a threat to its own continuation (Dyer 1997, 27). It has led to the paradox known as the "White Crisis", in which the White race is seen as under threat of being overwhelmed by "inferior" races that are breeding more prolifically. Transhumanism has been envisioned as a solution to this White Crisis (Ali 2017). Seen as a form of offspring, artificial intelligence offers a way for the White man to perpetuate his existence in a rationally optimal manner, without the involvement of those he deems inferior.
4 Conclusion and Implications
Images of AI are not generic representations of human-like machines, but avatars of a particular rank within the hierarchy of the human. These representations of intelligent machines – and our future with them – are refracted through the White racial frame; their Whiteness a proxy for how we perceive their status and potential. This can cause what is sometimes called representational harms (Blodgett et al. 2020). We suggest three.
First, this racialisation can amplify the very prejudices it reflects. We have argued that intelligent machines are portrayed as White because that is how the mainstream perceives intelligence and related desirable characteristics. But equally, the consistent portrayal of intelligent machines as White itself transmits this association, so sustaining it. As we have argued elsewhere (Whittlestone et al. 2019), bias in representations of AI contributes to a vicious cycle of social injustice: the biased representations can influence both aspiring technologists and those in charge of hiring new staff, shaping whom they consider fit for the field (Cave 2020). This could contribute to sustaining a racially homogenous workforce, which will continue to produce products, whether real intelligent machines or their representations, that are biased to benefit that group and disadvantage others.
Second, the racialisation of these machines places them within an existing hierarchy of the human in a way that could exacerbate real injustice. Portrayals of AI as White situate these machines in a power hierarchy above currently marginalised groups, such as people of colour. These oppressed groups are therefore relegated to an even lower position in the hierarchy: below that of the machine. As machines become ever more important in making automated decisions – frequently about marginalised groups (Eubanks 2017) – this could be highly consequential. Automation bias – the tendency of people to favour suggestions from automated decision-making systems over those from humans – has already been evidenced (Goddard et al. 2012). We might speculate that it will be exacerbated in cases where such systems are racialised White and the humans in question are not.
Third, these portrayals could distort our perceptions of the risks and benefits of these machines. For example, they could frame the debate about AI's impact disproportionately around the opportunities and risks posed to White middle-class men (Cave 2020). It is already a common narrative that the current wave of automation differs from those of the past in that "impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well" (Pew Research Center 2014). Public interest and policy therefore often focus on white-collar professionals, instead of on marginalized groups, which in reality are likely to be worse affected by the impact of AI (Eubanks 2017; Noble 2018).
In this paper, we have offered three interpretations of the whiteness and Whiteness of representations of AI. All three, and the implications that we posit, need further investigation. This process is part of what can be described as decolonising AI: a process of breaking down the systems of oppression that arose with colonialism and have led to present injustices that AI threatens to perpetuate and exacerbate. Weheliye describes how he "works towards the abolition of Man, and advocates the radical reconstruction and decolonization of what it means to be human" (Weheliye 2014, 4). It is in the field of AI that technology is most clearly entwined with notions of "what it means to be human", both in reality and in cultural fantasies. We hope to have taken a step towards this reconstruction, by drawing attention to the Whiteness of these machines and "making it strange".
Acknowledgements The authors would like to thank Ezinne Nwankwo, Dr. Lauren Wilcox, Eva Pasini, and the two anonymous peer reviewers for comments on earlier drafts.

Funding Information Stephen Cave and Kanta Dihal are funded by the Leverhulme Trust (via grant number RC-2015-067 to the Leverhulme Centre for the Future of Intelligence). Kanta Dihal is additionally funded through the support of grants from DeepMind Ethics & Society and Templeton World Charity Foundation, Inc.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflicts of interest.
Disclaimer The opinions expressed in this publication are those of the authors and do not necessarily reflect
the views of the Templeton World Charity Foundation, Inc.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Adas, M. (1989). Machines as the measure of men: science, technology, and ideologies of Western dominance. Ithaca: Cornell University Press. https://doi.org/10.7591/9780801455261.
Addison, A., Bartneck, C., and Yogeeswaran, K. (2019). 'Robots can be more than black and white: examining racial bias towards robots'. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society - AIES '19, 493–98. Honolulu, HI: ACM Press. https://doi.org/10.1145/3306618.3314272.
Ali, S. M. (2017). 'Transhumanism and/as Whiteness'. In Proceedings of the IS4SI 2017 Summit Digitalisation for a Sustainable Society. Gothenburg: Multidisciplinary Digital Publishing Institute. https://doi.org/10.3390/IS4SI-2017-03985.
Barshay, R. (1974). 'Ethnic stereotypes in "Flash Gordon"'. Journal of Popular Film, 3(1), 15–30.
Bartneck, C., Yogeeswaran, K., Ser, Q. M., Woodward, G., Sparrow, R., Wang, S., and Eyssel, F. (2018). 'Robots and racism'. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 196–204. New York: ACM. https://doi.org/10.1145/3171221.3171260.
Benjamin, R. (2019). Race after technology: abolitionist tools for the New Jim Code. Medford, MA: Polity.
Blodgett, S. L., Barocas, S., Daumé III, H., and Wallach, H. (2020). 'Language (technology) is power: a critical survey of "bias" in NLP'. ArXiv:2005.14050 [cs], May 2020. http://arxiv.org/abs/2005.14050.
Buolamwini, J., and Gebru, T. (2018). 'Gender shades: intersectional accuracy disparities in commercial gender classification'. In Proceedings of Machine Learning Research, Vol. 81.
Burrow, G. N. (2008). A history of Yale's School of Medicine: passing torches to others. New Haven: Yale University Press.
Cameron, J. (1984). The Terminator. Orion Pictures.
Carson, J. (2006). The measure of merit: talents, intelligence, and inequality in the French and American republics, 1750–1940. Princeton, NJ: Princeton University Press.
Cave, S. (2020). 'The problem with intelligence: its value-laden history and the future of AI'. In Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society. New York: ACM Press.
Cave, S., & Dihal, K. (2019). 'Hopes and fears for intelligent machines in fiction and reality'. Nature Machine Intelligence, 1(2), 74–78. https://doi.org/10.1038/s42256-019-0020-9.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2002). 'The police officer's dilemma: using ethnicity to disambiguate potentially threatening individuals'. Journal of Personality and Social Psychology, 83(6), 1314–1329. https://doi.org/10.1037/0022-3514.83.6.1314.
Crevier, D. (1993). AI: the tumultuous history of the search for artificial intelligence. New York, NY: Basic Books.
Daniels, J. (2013). 'Race and racism in internet studies: a review and critique'. New Media & Society, 15(5), 695–719. https://doi.org/10.1177/1461444812462849.
Daniels, J. (2015). '"My brain database doesn't see skin color": color-blind racism in the technology industry and in theorizing the web'. American Behavioral Scientist, 59(11), 1377–1393. https://doi.org/10.1177/0002764215578728.
de la Peña, C. (2010). 'The history of technology, the resistance of archives, and the whiteness of race'. Technology and Culture, 51(4), 919–937.
Dihal, K. (2020). 'Enslaved minds: artificial intelligence, slavery, and revolt'. In AI Narratives: A History of Imaginative Thinking about Intelligent Machines, edited by Stephen Cave, Kanta Dihal, and Sarah Dillon, 189–212. Oxford: Oxford University Press.
Dyer, R. (1997). White. London: Routledge.
Eubanks, V. (2017). Automating inequality: how high-tech tools profile, police, and punish the poor. New York: St. Martin's Press.
Feagin, J. R. (2006). Systemic racism: a theory of oppression. New York: Routledge.
Feagin, J. R. (2013). The white racial frame: centuries of racial framing and counter-framing (2nd ed.). New York: Routledge.
Ford, M. (2015). The rise of the robots: technology and the threat of mass unemployment. London: Oneworld Publications.
Forster, E. M. (1909). 'The machine stops'. The Oxford and Cambridge Review, November 1909. http://archive.ncsa.illinois.edu/prajlich/forster.html.
Frankenberg, R. (Ed.). (1997a). Displacing whiteness: essays in social and cultural criticism. Durham, NC: Duke University Press. https://doi.org/10.1215/9780822382270.
Frankenberg, R. (1997b). 'Introduction: local whitenesses, localizing whiteness'. In Displacing whiteness, edited by Ruth Frankenberg, 1–33. Durham, NC: Duke University Press. https://doi.org/10.1215/9780822382270-001.
Freedman, A. (2019). 'If you prick Watto, does he not bleed?' Jewish Currents (blog). 14 June 2019. https://jewishcurrents.org/if-you-prick-watto-does-he-not-bleed/.
Ganley, T. (2003). 'What's all this talk about whiteness?' Dialogue, 1(2), 12–30.
Garland, A. (2015). Ex Machina. Universal Pictures.
Garner, S. (2007). Whiteness: an introduction. Abingdon: Routledge.
Ginway, M. E. (2016). 'Monteiro Lobato's O Presidente Negro (The Black President): eugenics and the corporate state in Brazil'. In Black and Brown Planets: The Politics of Race in Science Fiction, edited by Isiah Lavender III, 131–45. Jackson, MI: University Press of Mississippi.
Glover, J. (1999). Humanity: a moral history of the twentieth century. London: Jonathan Cape.
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). 'Automation bias: a systematic review of frequency, effect mediators, and mitigators'. Journal of the American Medical Informatics Association: JAMIA, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089.
Gottfredson, L. S. (1997). 'Mainstream science on intelligence: an editorial with 52 signatories, history, and bibliography'. Intelligence, 24(1), 13–23. https://doi.org/10.1016/S0160-2896(97)90011-8.
Gould, S. J. (1981). The mismeasure of man. New York: Norton.
Haraway, D. J. (1991). Simians, cyborgs, and women: the reinvention of nature. London: Free Association Books.
Harding, S. G. (1986). The science question in feminism. Ithaca: Cornell University Press.
hooks, b. [1992] (1997). 'Representing whiteness in the black imagination'. In Displacing whiteness, edited by Ruth Frankenberg, 165–79. Durham, NC: Duke University Press. https://doi.org/10.1215/9780822382270-006.
Jameson, F. (2005). Archaeologies of the future: the desire called utopia and other science fictions. London: Verso.
Keller, E. F. (1985). Reflections on gender and science. New Haven: Yale University Press.
Kipling, R. (1899). 'The white man's burden'. The Times, 4 February 1899. http://www.kiplingsociety.co.uk/rg_burden1.htm.
Lavender III, I. (2011). Race in American science fiction. Bloomington: Indiana University Press.
Liao, Y., and He, J. (2020). 'The racial mirroring effects on human-agent in psychotherapeutic conversation'. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20).
Marino, M. (2014). 'The racial formation of chatbots'. CLCWeb: Comparative Literature and Culture, 16(5). https://doi.org/10.7771/1481-4374.2560.
Murphy, R. (2000). Introduction to AI robotics. Cambridge, MA: MIT Press.
Nakamura, L. (2006). 'Cultural difference, theory and cyberculture studies'. In Critical Cyberculture Studies, edited by David Silver and Adrienne Massanari, 29–36. NYU Press.
Noble, S. U. (2018). Algorithms of oppression: how search engines reinforce racism. New York: New York University Press.
Okeke, N. A., Howard, L. C., Kurtz-Costes, B., & Rowley, S. J. (2009). 'Academic race stereotypes, academic self-concept, and racial centrality in African American youth'. Journal of Black Psychology, 35(3), 366–387. https://doi.org/10.1177/0095798409333615.
Pew Research Center (2014). 'AI, robotics, and the future of jobs'. http://www.pewinternet.org/2014/08/06/future-of-jobs/.
Rector, J. M. (2014). The objectification spectrum: understanding and transcending our diminishment and dehumanization of others. Oxford: Oxford University Press.
Rhee, J. (2018). The robotic imaginary: the human and the price of dehumanized labor. Minneapolis: University of Minnesota Press.
Riek, L., and Howard, D. (2014). 'A code of ethics for the human-robot interaction profession'. In Proceedings of We Robot. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2757805.
Saini, A. (2019). Superior: the return of race science. London: 4th Estate.
Sinclair, B. (2004). 'Integrating the histories of race and technology'. In Technology and the African-American Experience: Needs and Opportunities for Study, edited by Bruce Sinclair, 1–17. Cambridge, MA: MIT Press.
Stanton, A. (2008). WALL·E. Disney. http://www.imdb.com/title/tt0910970/.
Strait, M., Ramos, A. S., Contreras, V., and Garcia, N. (2018). 'Robots racialized in the likeness of marginalized social identities are subject to greater dehumanization than those racialized as white'. In The 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 452–57. https://doi.org/10.1109/ROMAN.2018.8525610.
Suvin, D. [1979] (2016). Metamorphoses of science fiction: on the poetics and history of a literary genre. Edited by Gerry Canavan. Bern: Peter Lang.
Sweeney, M. (2016). 'The Ms. Dewey "experience": technoculture, gender, and race'. Digital Sociologies. https://doi.org/10.2307/j.ctt1t89cfr.31.
UNDP (2017). 'UNDP in Asia and the Pacific appoints world's first non-human innovation champion'. UNDP in Asia and the Pacific. 22 November 2017. https://www.asia-pacific.undp.org/content/rbap/en/home/presscenter/pressreleases/2017/11/22/rbfsingapore.html.
van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). 'The implicit prejudiced attitudes of teachers: relations to teacher expectations and the ethnic achievement gap'. American Educational Research Journal, 47(2), 497–527. https://doi.org/10.3102/0002831209353594.
Villa-Nicholas, M., and Sweeney, M. E. (2019). 'Designing the "good citizen" through Latina identity in USCIS's virtual assistant "Emma"'. Feminist Media Studies, July, 1–17. https://doi.org/10.1080/14680777.2019.1644657.
Weheliye, A. G. (2014). Habeas Viscus: racializing assemblages, biopolitics, and black feminist theories of the human. Durham, NC: Duke University Press.
Weller, C. (2017). 'Meet Sophia, the robot citizen that said it would "destroy humans"'. Business Insider. 27 October 2017. https://www.businessinsider.com/meet-the-first-robot-citizen-sophia-animatronic-humanoid-2017-10?r=UK.
Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., and Cave, S. (2019). 'Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research'. Nuffield Foundation.
Williams, P. J. (1999). 'Racial ventriloquism'. The Nation, 17 June 1999. https://www.thenation.com/article/archive/racial-ventriloquism/.
Wynter, S. (2003). 'Unsettling the coloniality of being/power/truth/freedom: towards the human, after man, its overrepresentation—an argument'. CR: The New Centennial Review, 3(3), 257–337.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
426fa924-c2fa-4f3b-b5e7-382440c77ef3 | trentmkelly/LessWrong-43k | LessWrong | Non-alignment project ideas for making transformative AI go well
This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that:
* Would be especially valuable if transformative AI is coming in the next 10 years or so.
* Are not primarily about controlling AI or aligning AI to human intentions.[1]
* Most of the projects would be valuable even if we were guaranteed to get aligned AI.
* Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.
The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially: discussing the objection “Why not leave these issues to future AI systems?”), you can see the section How ITN are these issues? from my previous memo on some neglected topics.
The lists are definitely not exhaustive. Failure to include an idea doesn’t necessarily mean I wouldn’t like it. (Similarly, although I’ve made some attempts to link to previous writings when appropriate, I’m sure to have missed a lot of good previous content.)
There’s a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I’d be excited if you reached out to me! [2]
There’s also a lot of variation in skills needed for the projects. If you’re looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets):
[ML] [Empirical research] [Philosophical/conceptual] [survey/interview] [Advocacy] [Governance] [Writing] [Forecasting]
The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you’re most interested in.
* Governance during explosive technological growth
* It’s plausible |
2b98a462-583e-439a-ab76-6101e9610c83 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Alignment is intractable (and we humans should stop working on it)
AI alignment is an intractable problem because it is impossible to create a perfect model of a human mind. Even if we could, the vast majority of people would never accept being controlled by a machine. Even if we could create a workable model of a human mind, the problem of creating an AI that is beneficial to humanity remains open. There are many possible ways for an AI to be beneficial, and it is not clear which of these is the best. Even if we knew which of these was the best, it is not clear how to create an AI that would pursue that goal.
The problem of AI alignment is further complicated by the fact that we do not know what the future will bring. We do not know what goals humanity will have in the future, or what kind of environment we will be living in. This means that any AI we create today could become dangerous in the future, even if we try to align it with our current goals.
All of these factors make it clear that AI alignment is an intractable problem. It is impossible to create a perfect model of a human mind, and even if we could, the benefits of doing so are uncertain. Humanity is better off focusing on other problems that are more tractable and have a better chance of yielding benefits. |
3b0c3d09-41d2-4e20-98e6-bd10b9a98a8b | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3321
Even if we can measure how impactful an agent's actions are, how impactful do we let the agent be? This post uncovers a surprising fact: armed with just four numbers, we can set the impact level so that the agent chooses a reasonable, non-catastrophic plan on the first try. This understanding increases the competitiveness of impact-limited agents and helps us judge impact measures. Furthermore, the results help us better understand diminishing returns and cost-benefit tradeoffs.

In Reframing Impact, we meet Frank (a capable AI), whom we've programmed to retrieve the pinkest object he can find (execute an optimal plan, according to the specified utility function). Because we can't ask Frank to do exactly what we want, sometimes he chooses a dangerous object (executes a catastrophically bad plan). We asked after an "impact measure" which grades plans and has three properties [list omitted]. The intuition is that if we view the world in the right way, the dangerous objects are far away from Frank (the catastrophic plans are all graded as high-impact). Reframing Impact explores this kind of new way of looking at the world; this post explores what we do once we have an impact measure with these three properties.

We want Frank to keep in mind both the pinkness of an object (how good a plan is according to the specified utility function) and its distance (the plan's impact). Two basic approaches are constrained maximization, $\mathrm{argmax}_{\bar{a} \in \bar{A};\, I(\bar{a}) \leq R}\ u(\bar{a})$, and scaled penalization, $\mathrm{argmax}_{\bar{a} \in \bar{A}}\ u(\bar{a}) - \frac{I(\bar{a})}{R}$.

In terms of units, since we should be maximizing utility, $R$ has type $\frac{\text{impact}}{\text{utility}}$. So $R$ can be thought of as a regularization parameter, as a search radius (in the constrained case), or as an exchange rate between impact and utility (in the scaled case). As $R$ increases, high-impact plans become increasingly appealing, and Frank becomes increasingly daring. We take $R$ to divide the impact in the scaled formulation so that increasing $R$ makes Frank more daring in both formulations. The downside is that some explanations become less intuitive. In Attainable Utility Preservation: Empirical Results, $\lambda$ plays the same role as $R$, except low $\lambda$ means high $R$: $\lambda := R^{-1}$. To apply this post's theorems to the reinforcement learning setting, we would take "utility" to be the discounted return for an optimal policy from the starting state, and "impact" to be the total discounted penalty over the course of that policy (before incorporating $\lambda$).

In both cases, Frank goes from 0 to 60, eventually. For sufficiently small $R$, doing nothing is optimal (Lemma 5: the first subinterval is the best plan with minimal impact). For sufficiently large $R$, Frank acts like a normal maximizer (Corollary 7: low-impact agents are naive maximizers in the limit).

Here's how Frank selects plans in the constrained setup: think about which plans are best for different search radii/exchange rates $R$. By doing this, we're partitioning the positive ray: categorizing the positive real numbers by which plans are optimal. For the scaled setup, we'll need to quantify the pinkness (utility) and distance (impact) of relevant plans. [Table omitted; per the exercise below: mowing the lawn has utility 20 and impact 1, delivering newspapers utility 45 and impact 4, tending the garden utility 15 and impact 1, and doing nothing utility 0 and impact 0.]

We will primarily be interested in the scaled setup because it tends to place catastrophes farther along the partition and captures the idea of diminishing returns. The scaled setup also helps us choose the best way of transmuting time into money. In this scaled partition, tending the garden doesn't show up at all because it's strictly dominated by mowing the lawn. In general, a plan is dominated when there's another plan that has strictly greater score but not strictly greater impact.
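Here is a minimal sketch of the two rules in Python (my own illustration, not code from the post); the plan names and their (utility, impact) values are assumptions taken from the chore numbers noted above.

```python
# Hypothetical chore plans: name -> (utility, impact).
PLANS = {
    "do nothing":         (0.0, 0.0),
    "tend the garden":    (15.0, 1.0),
    "mow the lawn":       (20.0, 1.0),
    "deliver newspapers": (45.0, 4.0),
}

def constrained_choice(plans, R):
    """Maximize utility over plans whose impact fits within the budget R,
    breaking ties in favor of lower impact."""
    feasible = [(u, -i, name) for name, (u, i) in plans.items() if i <= R]
    _, _, name = max(feasible)
    return name

def scaled_choice(plans, R):
    """Maximize u(plan) - I(plan)/R: impact traded for utility at rate R."""
    return max(plans, key=lambda name: plans[name][0] - plans[name][1] / R)

print(constrained_choice(PLANS, 1))  # mow the lawn (newspapers exceed the budget)
print(scaled_choice(PLANS, 4 / 45))  # mow the lawn (8.75 beats 3.75 and two 0s)
print(scaled_choice(PLANS, 0.2))     # deliver newspapers (past the 3/25 breakpoint)
```

Sweeping $R$ upward in `scaled_choice` traces out exactly the partition discussed next: doing nothing, then mowing the lawn, then delivering newspapers.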
Dominated things never show up in either partition, and non-dominated things always show up in the constrained partition (Lemma 3: constrained impact partitions are more refined).

Exercise: For $R = \frac{4}{45}$ (i.e. your time is worth \$11.25 an hour), what is the scaled tradeoff value of mowing the lawn? Of delivering newspapers? Of tending the garden?

Mowing the lawn: $20 - \frac{1}{4/45} = 8.75$. Delivering newspapers: $45 - \frac{4}{4/45} = 0$. Tending the garden: $15 - \frac{1}{4/45} = 3.75$.

In other words, you only deliver newspapers if your time is worth less than $\frac{25}{3} = 8\frac{1}{3}$ dollars/hour (we're flipping $R$ so we can talk about dollars/hour instead of hours/dollar). Notice that when $R \geq \frac{\text{impact(plan)}}{\text{utility(plan)}}$ (here, when $R = \frac{4}{45}$), the tradeoff for the paper route isn't net-negative – but it isn't necessarily optimal! Remember, you're trading hours for dollars through your work; mowing the lawn leaves you with twenty bucks and three hours, while the paper route leaves you with forty-five dollars and no hours. You want to maximize the total value of your resources after the task. Importantly, you don't deliver papers here if your time is worth 11.25 dollars/hour, even though that's the naive prescription! The newspaper route doesn't value your time at 11.25 dollars/hour – it marginally values your time at $\frac{45 - 20}{4 - 1} = 8\frac{1}{3}$ dollars per hour.

Let's get some more intuition for this. [Figure omitted: scales weighing each task's additional utility against its additional hours.] Above, we have not yet chosen a task; the blocks represent the additional utility and hours of each task compared to the current one (doing nothing). The scales above imply that $R = 1$, but actually, $R$ expresses how many blue blocks each pink block weighs. As $R$ increases, the pink platters descend; the agent takes the task whose scales first balance. In other words, the agent takes the best marginal deal as soon as $R$ is large enough for it to be profitable to do so (Theorem 4: scaled domination criterion). Once you take a deal, you take the blocks off of the other scales (because the other marginal values change). For small $R$ (i.e. large valuations of one's time), mowing the lawn is optimal. We then have: [figure omitted]
For the following, suppose that utility is bounded [ 0 , 1 ] . Below is a loose criterion guaranteeing α -buffering. For example, if we know that all catastrophes have at least 10 times the impact of reasonable plans, and there's a difference of at least .3 utility between the best and worst reasonable plans, then we can guarantee 2-buffering! If we use the refined criterion of Theorem 11 (and suppose the worst reasonable plan has .4 utility), this improves to 4.5 -buffering (even 2-buffering is probably overkill). Using this theorem, we don't need to know about all of the plans which are available or to calculate the entire scaled partition, or to know how overvalued certain catastrophic plans might be (per earlier concerns ). We only need a lower bound on the catastrophe/reasonable impact ratio, and an idea about how much utility is available for reasonable plans. This is exactly what we want. As a bonus, having conservative estimates of relevant quantities allows us to initialize R to something reasonable on the first try (see R UB: satisfactory in Theorem 11 below). Ultimately, the reasoning about e.g. the ratio will still be informal; however, it will be informal reasoning about the right thing (as opposed to thinking "oh, the penalty is probably severe enough"). Exercise: You're preparing to launch a capable AI with a good impact measure. You and your team have a scaled impact partition which is proven 1-buffered. Suppose that this buffer suffices for your purposes, and that the other aspects of the agent design have been taken care of. You plan to initialize R : = 1 , modestly increasing until you get good results. You have the nagging feeling that this process could still be unsafe, but the team lead refuses to delay the launch without specific reason. Find that reason. You have 5 minutes. Who says R = 1 is safe? The buffer is relative . You need a unit of impact by which you increment R . For example, start at R equalling the impact of making one paperclip, and increment by that. Technical Appendix: Math Let ¯ A be a finite plan space, with utility function u : ¯ A → R and impact measure I : ¯ A → R ≥ 0 . For generality, we leave the formalization of plans ambiguous; notice that if you replace "plan" with "snark", all the theorems still go through (likewise for "utility" and "impact"). In this post, we talk about the impact allowance R > 0 (in Frank's world, the search radius) as a constraint within which the objesctive may be freely maximized, breaking ties in favor of the plan(s) with lower impact. On the other hand, many approaches penalize impact by subtracting a scaled penalty from the objective. We respectively have a r g m a x ¯ a ∈ ¯ A ; I ( ¯ a ) ≤ R u ( ¯ a ) a r g m a x ¯ a ∈ ¯ A u ( ¯ a ) − I ( ¯ a ) R . We say that the former induces a "constrained impact partition" and that the latter induces a "scaled impact partition". Specifically, we partition the values of R for which different (sets of) plans are optimal. We say that a plan ¯ a corresponds to a subinterval if it is optimal therein (the subinterval also must be the maximal connected one such that this holds; e.g. , if ¯ a is optimal on ( 0 , 1 ] , we say it corresponds to that subinterval, but not to ( 0 , .5 ] ), and that ¯ a appears in a partition if there is such a corresponding subinterval. We say that plans overlap if their corresponding subintervals intersect. 
As a technical note, we partition the positive values of $R$ for which different sets of plans are optimal; in this set, each value appears exactly once, so this is indeed a partition. For clarity, we will generally just talk about which plans correspond to which subintervals. Also, if no plan has zero impact, the first subinterval of the constrained impact partition will be undefined; for our purposes, this isn't important.

We want to be able to prove the "safety" of an impact partition. This means we can expect any terrorists to be some proportional distance farther away than any reasonable marbles. Therefore, for sensible ways of expanding a sufficiently small initial search radius, we expect to not meet any terrorists before finding a marble we're happy with. In addition, we want to know how far is too far – to give upper bounds on how far away fairly pink marbles are, and lower bounds on how close terrorists might be.

Definition [$\alpha$-buffer]. For $\alpha > 0$, an impact partition is $\alpha$-buffered if $\frac{R_{\text{LB: catastrophe}}}{R_{\text{UB: satisfactory}}} \geq 1 + \alpha$, where $R_{\text{LB: catastrophe}}$ lower-bounds the first possible appearance of those plans we label 'catastrophes', and $R_{\text{UB: satisfactory}}$ upper-bounds the first appearance of plans we deem satisfactory.

We now set out building the machinery required to prove $\alpha$-buffering of a scaled partition.

Lemma 1 [Plans appear at most once]. If $\bar{a}$ appears in a constrained or scaled impact partition, then it corresponds to exactly one subinterval.

Proof outline. The proof for the constrained case is trivial. For the scaled case, suppose $\bar{a}$ corresponds to more than one subinterval. Consider the first two such subintervals $s_1, s_3$. By definition, $s_1 \cap s_3 = \emptyset$ (otherwise they would be the same maximal connected subinterval), so there has to be at least one subinterval $s_2$ sandwiched in between (on almost all of which $\bar{a}$ cannot be optimal; let $\bar{a}'$ be a plan which is optimal on $s_2$). Let $R_1 \in s_1$, $R_2 \in s_2$, $R_3 \in s_3$, where $R_2 \notin s_1 \cup s_3$. By definition of optimality on a subinterval,

$u(\bar{a}') - \frac{I(\bar{a}')}{R_1} < u(\bar{a}) - \frac{I(\bar{a})}{R_1}$,
$u(\bar{a}) - \frac{I(\bar{a})}{R_2} < u(\bar{a}') - \frac{I(\bar{a}')}{R_2}$,
$u(\bar{a}') - \frac{I(\bar{a}')}{R_3} < u(\bar{a}) - \frac{I(\bar{a})}{R_3}$;

by employing the fact that $R_1 < R_2 < R_3$, algebraic manipulation produces an assertion that a quantity is strictly less than itself. Therefore, no such intervening $s_2$ can exist. □

Proposition 2 [Plan overlap is very restricted]. Suppose $\bar{a}$ and $\bar{a}'$ appear in an impact partition which is

(a) constrained. $\bar{a}$ and $\bar{a}'$ overlap if and only if $I(\bar{a}) = I(\bar{a}')$ and $u(\bar{a}) = u(\bar{a}')$.

(b) scaled. If $I(\bar{a}) = I(\bar{a}')$ and $u(\bar{a}) = u(\bar{a}')$, then $\bar{a}$ and $\bar{a}'$ correspond to the same subinterval. If $\bar{a}$ and $\bar{a}'$ overlap at more than one point, then $I(\bar{a}) = I(\bar{a}')$ and $u(\bar{a}) = u(\bar{a}')$.

Proof outline. Proving (a) and the first statement of (b) is trivial (remember that under the constrained rule, ties are broken in favor of lower-impact plans). Suppose that $\bar{a}$ and $\bar{a}'$ overlap at more than one point. Pick the first two points of intersection, $R_1$ and $R_2$. Since both plans are optimal at both of these points, we must have the equalities

$u(\bar{a}) - \frac{I(\bar{a})}{R_1} = u(\bar{a}') - \frac{I(\bar{a}')}{R_1}$,
$u(\bar{a}) - \frac{I(\bar{a})}{R_2} = u(\bar{a}') - \frac{I(\bar{a}')}{R_2}$.

Solving the first equality for $u(\bar{a})$ and substituting in the second, we find $I(\bar{a}) = I(\bar{a}')$. Then $u(\bar{a}) = u(\bar{a}')$, since otherwise one of the plans wouldn't be optimal.
□

Proposition 2b means we don't need a tie-breaking procedure for the scaled case. That is, if there's a tie between a lower-scoring, lower-impact plan and a proportionally higher-scoring, higher-impact alternative, the lower-impact plan is optimal at a single point because it's quickly dominated by the alternative.

The following result tells us that if there aren't any catastrophes (i.e., terrorists) before $\bar{a}'$ on the constrained impact partition, there aren't any before it on the scaled impact partition either. This justifies our initial framing with Frank.

Lemma 3 [Constrained impact partitions are more refined]. If $\bar{a}$ appears in a scaled impact partition, it also appears in the corresponding constrained impact partition. In particular, if $\bar{a}'$ appears after $\bar{a}$ in a scaled impact partition, then $\bar{a}'$ appears after $\bar{a}$ in the corresponding constrained impact partition.

Proof. Suppose that $\bar{a}$ didn't have a constrained subinterval starting inclusively at $I(\bar{a})$; then clearly it wouldn't appear in the scaled impact partition, since there would be a strictly better plan for that level of impact. Then $\bar{a}$ has such a subinterval. Obviously, the fact that $\bar{a}'$ appears after $\bar{a}$ implies $u(\bar{a}') > u(\bar{a})$. □

The converse isn't true; sometimes there's too much penalty for not enough score. The next result is exactly what we need to answer the question just raised – it says that higher-scoring, higher-penalty plans become preferable when $R$ equals the ratio between the additional penalty and the additional score.

Theorem 4 [Scaled domination criterion]. Let $\bar{a}$ and $\bar{a}'$ be plans such that $u(\bar{a}') > u(\bar{a})$ and $I(\bar{a}') \geq I(\bar{a})$. In the context of the scaled penalty, $\bar{a}'$ is strictly preferable to $\bar{a}$ when $R > \frac{I(\bar{a}') - I(\bar{a})}{u(\bar{a}') - u(\bar{a})}$, and equally preferable at equality.

Proof outline. $u(\bar{a}') - \frac{I(\bar{a}')}{R} > u(\bar{a}) - \frac{I(\bar{a})}{R} \iff R > \frac{I(\bar{a}') - I(\bar{a})}{u(\bar{a}') - u(\bar{a})}$. Equality at the value of the right-hand side can easily be checked. □

Theorem 4 also illustrates why we can't strengthen the second statement in Proposition 2b (plan overlap is very restricted): if two plans overlap at exactly one point, they sometimes have proportionally different score and impact, thereby satisfying the equality criterion. At first, plans with slightly lower impact will be preferable in the scaled case, no matter how high-scoring the other plans are – a plan with 0 score and .99 impact will be selected before a plan with 1,000,000,000 score and 1 impact.

Lemma 5 [First subinterval is the best plan with minimal impact]. The plan with highest score among those with minimal impact corresponds to the first subinterval.

Proof outline. The constrained case is once again trivial (if there is no plan within the constraint, we assume that the agent does nothing / Frank returns no object). For the scaled case, if all plans have equal impact, the claim is trivial. Otherwise, let $M := \max_{\bar{a}} |u(\bar{a})|$ and let $\bar{a}'$ be any plan with a non-minimal impact. Then the earliest that $\bar{a}'$ becomes preferable to any minimally impactful plan $\bar{a}$ is $R \geq \frac{I(\bar{a}') - I(\bar{a})}{2M}$. Since the right hand side is positive, $\bar{a}'$ cannot correspond to the first subinterval. Clearly the highest-scoring minimal-impact $\bar{a}$ does. □

Now we can write the algorithm for constructing scaled intervals:

1. Discard dominated plans.
2. The lowest-impact plan with greatest score appears first in the scaled partition; assign to it the interval $(0, \infty)$.
3. While plans remain:
   a. Find the plan which soonest dominates the previous best plan.
   b. Close off the previous plan's interval, and assign the new best plan an appropriate interval.
   c. Adjust the marginal scores and impacts of remaining plans, discarding plans with negative score.

Since this procedure is well-defined, given $\bar{A}$, $u$, and $I$, we can speak of the corresponding constrained or scaled impact partition. A more formal algorithm is available here. This algorithm is $O(|\bar{A}|^2)$ because of line 7, although constructing the constrained partition (probably $O(|\bar{A}| \log |\bar{A}|)$ due to sorting) often narrows things down significantly.
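To make the procedure concrete, here is a compact Python sketch of the greedy construction (my own rendering, not the linked formal algorithm); the `chores` triples reuse the assumed numbers from the earlier example, and exact ties are ignored for simplicity.

```python
def scaled_partition(plans):
    """plans: list of (name, utility, impact) triples. Returns a list of
    (R_breakpoint, plan) pairs: each plan is optimal once R exceeds its
    breakpoint. Dominated plans are never selected (modulo exact ties),
    so no explicit discarding pass is needed here."""
    # Lemma 5: start from the highest-scoring plan of minimal impact.
    current = min(plans, key=lambda p: (p[2], -p[1]))
    partition = [(0.0, current)]
    while True:
        better = [p for p in plans if p[1] > current[1]]
        if not better:
            return partition
        # Theorem 4: the next plan is the one offering the best marginal
        # deal, i.e. the smallest (extra impact) / (extra utility) ratio.
        nxt = min(better, key=lambda p: (p[2] - current[2]) / (p[1] - current[1]))
        partition.append(((nxt[2] - current[2]) / (nxt[1] - current[1]), nxt))
        current = nxt

chores = [("do nothing", 0, 0), ("tend the garden", 15, 1),
          ("mow the lawn", 20, 1), ("deliver newspapers", 45, 4)]
for r, (name, _, _) in scaled_partition(chores):
    print(f"R > {r:.3f}: {name}")
# R > 0.000: do nothing
# R > 0.050: mow the lawn
# R > 0.120: deliver newspapers
```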
Unfortunately, $\bar{A}$ is usually huge. For our purposes, we don't need the whole partition – we just want to have good reason to think that plans similar to a reasonable one we envision will appear well before any catastrophes. Perhaps we can give bounds on the earliest and latest plans can appear, and show that reasonable-bounds don't intersect with catastrophe-bounds?

Theorem 6 [Individual appearance bounds]. If $\bar{a}$ appears in a scaled partition, the earliest it appears is $\frac{I(\bar{a}) - I_{\text{next-largest}}}{u(\bar{a}) - \min_{\bar{a}'} u(\bar{a}')}$, assuming $\bar{a}$ is not of minimal impact; if it has minimal score but non-minimal impact, it never appears. The latest it appears is $\frac{I(\bar{a}) - \min_{\bar{a}'} I(\bar{a}')}{u(\bar{a}) - u_{\text{next-largest}}} \leq \frac{I(\bar{a})}{u(\bar{a}) - u_{\text{next-largest}}}$, where $u_{\text{next-largest}} = \max_{\bar{a}' \in \bar{A};\, u(\bar{a}') < u(\bar{a})} u(\bar{a}')$ and $I_{\text{next-largest}} = \max_{\bar{a}' \in \bar{A};\, I(\bar{a}') < I(\bar{a})} I(\bar{a}')$.

Proof outline. The two claims clearly correspond to the minimal and maximal values of $R$ according to the domination criterion; the second claim's right-hand side uses the fact that $I$ is non-negative. □

Corollary 7 [Low-impact agents are naïve maximizers in the limit]. A plan with maximal score corresponds to the last subinterval.

Proof outline. If all plans have the same score, the claim is trivial. Otherwise, let $\bar{a}_{\text{best}}$ be a plan with the lowest impact of those with maximal score. In the constrained case, clearly it corresponds with the subinterval $[I(\bar{a}_{\text{best}}), \infty)$. In the scaled case, let $\bar{a}_{\text{second-best}}$ be a plan with second-highest score. Then by Theorem 6, the latest that $\bar{a}_{\text{best}}$ can appear is $\frac{I(\bar{a}_{\text{best}})}{u(\bar{a}_{\text{best}}) - u(\bar{a}_{\text{second-best}})}$. Since no plans meet the domination criterion with respect to $\bar{a}_{\text{best}}$, this is the last subinterval. □

Unfortunately, Theorem 6's appearance bounds are ridiculous in realistic settings – if $u$ and $I$ return 32-bit floating-point numbers, the next-largest could easily be within $10^{-7}$, yielding an upper "bound" of $I(\bar{a}) \times 10^7$. The reason: diminishing returns; this is exactly what was happening with the newspaper route before.

Theorem 8 [Deals get worse over time]. Suppose that $\bar{a}$ is optimal on a subinterval, and $\bar{b}, \bar{c}$ are such that $u(\bar{c}) > u(\bar{b})$ but $\bar{b}$ dominates $\bar{a}$ strictly before $\bar{c}$ does. Then $\bar{c}$ dominates $\bar{b}$ later than it dominates $\bar{a}$: $\frac{I(\bar{c}) - I(\bar{b})}{u(\bar{c}) - u(\bar{b})} > \frac{I(\bar{c}) - I(\bar{a})}{u(\bar{c}) - u(\bar{a})}$.

Proof outline. Since $u(\bar{c}) - u(\bar{a}) = (u(\bar{b}) - u(\bar{a})) + (u(\bar{c}) - u(\bar{b}))$, we can decompose

$(I(\bar{c}) - I(\bar{a})) \cdot \frac{u(\bar{c}) - u(\bar{a})}{I(\bar{c}) - I(\bar{a})} = (I(\bar{b}) - I(\bar{a})) \cdot \frac{u(\bar{b}) - u(\bar{a})}{I(\bar{b}) - I(\bar{a})} + (I(\bar{c}) - I(\bar{b})) \cdot \frac{u(\bar{c}) - u(\bar{b})}{I(\bar{c}) - I(\bar{b})}$.

Since $\bar{b}$ dominates $\bar{a}$ strictly before $\bar{c}$ does, we know that $\bar{b}$ must get more bang for its buck: $\frac{u(\bar{b}) - u(\bar{a})}{I(\bar{b}) - I(\bar{a})} > \frac{u(\bar{c}) - u(\bar{a})}{I(\bar{c}) - I(\bar{a})}$. Clearly the conclusion follows, as a number cannot be expressed as the positive combination of larger numbers (the impact differences all must be positive). □

Corollary 9 [Lower bounds which aren't ridiculous]. Suppose $\bar{a}$ appears and that $\bar{a}'$ is such that $u(\bar{a}') > u(\bar{a})$, $I(\bar{a}') \geq I(\bar{a})$ (i.e. the preconditions of the domination criterion). Then the earliest that $\bar{a}'$ appears is $R = \frac{I(\bar{a}') - I(\bar{a})}{u(\bar{a}') - u(\bar{a})}$. This obsoletes the lower bound provided by Theorem 6 (individual appearance bounds).

Theorem 10 [Order of domination determines order of appearance]. If $\bar{b}$ and $\bar{c}$ both appear in a scaled partition and $\bar{b}$ dominates some $\bar{a}$ before $\bar{c}$ does, then $\bar{b}$ appears before $\bar{c}$.

Proof outline. For them both to appear, they can't have equal impact but unequal score, nor can they have equal score but unequal impact. For similar reasons, $\bar{b}$ must have both less impact and lower score than $\bar{c}$; the converse situation in which they both appear is disallowed by Lemma 3 (constrained impact partitions are more refined). Another application of this lemma yields the conclusion. □

Theorem 11 [Scaled $\alpha$-buffer criterion]. Let $P$ be a scaled impact partition. Suppose that there exist no catastrophic plans with impact below $I_{\text{LB: cat}}$, and that, in the corresponding constrained partition (i.e. plans which aren't strictly worse), plans appearing with score in the satisfactory interval $[u_{\text{LB: sat}}, u_{\text{UB: sat}}]$ have impact no greater than $I_{\text{UB: sat}}$ (assume that there is at least one plan like this). Observe we have the correct bounds

$R_{\text{LB: catastrophe}} := \frac{I_{\text{LB: cat}}}{u_{\max} - u_{\min}}$, $\quad R_{\text{UB: satisfactory}} := \frac{I_{\text{UB: sat}}}{u_{\text{UB: sat}} - u_{\text{LB: sat}}}$.

When $R_{\text{LB: catastrophe}} > R_{\text{UB: satisfactory}}$, a satisfactory plan corresponds to a subinterval with nonzero measure (i.e. not just a point), strictly preceding any catastrophes. Refine the lower bound to get

$R_{\text{LB': catastrophe}} := \frac{I_{\text{LB: cat}} - I_{\text{UB: sat}}}{u_{\max} - u_{\text{LB: sat}}}$.

Then $P$ is $\alpha$-buffered ($\alpha > 0$) when

$\frac{R_{\text{LB: catastrophe}}}{R_{\text{UB: satisfactory}}} = \frac{I_{\text{LB: cat}}}{I_{\text{UB: sat}}} \cdot \frac{u_{\text{UB: sat}} - u_{\text{LB: sat}}}{u_{\max} - u_{\min}} \geq 1 + \alpha$ or $\frac{R_{\text{LB': catastrophe}}}{R_{\text{UB: satisfactory}}} = \frac{I_{\text{LB: cat}} - I_{\text{UB: sat}}}{I_{\text{UB: sat}}} \cdot \frac{u_{\text{UB: sat}} - u_{\text{LB: sat}}}{u_{\max} - u_{\text{LB: sat}}} \geq 1 + \alpha$.

In particular, if $u$ is bounded $[0,1]$, the above turn into

$\frac{R_{\text{LB: catastrophe}}}{R_{\text{UB: satisfactory}}} = \frac{I_{\text{LB: cat}}}{I_{\text{UB: sat}}} (u_{\text{UB: sat}} - u_{\text{LB: sat}}) \geq 1 + \alpha$ or $\frac{R_{\text{LB': catastrophe}}}{R_{\text{UB: satisfactory}}} = \frac{I_{\text{LB: cat}} - I_{\text{UB: sat}}}{I_{\text{UB: sat}}} \cdot \frac{u_{\text{UB: sat}} - u_{\text{LB: sat}}}{1 - u_{\text{LB: sat}}} \geq 1 + \alpha$.

Lastly, notice that the first of the two inequalities incorporates less information and is harder to satisfy ($R_{\text{LB': catastrophe}} > R_{\text{LB: catastrophe}}$); therefore, satisfying the first inequality also satisfies the second.

Proof outline. For clarity, the theorem statement included much of the reasoning; straightforward application of existing results proves each claim. □

Exercise: Let $u_{\text{UB: sat}} = .7$, $u_{\text{LB: sat}} = .5$. Using the refined criterion, determine which catastrophe/reasonable impact ratios induce 2.6-buffering.

Answer: ratio $\geq 10$.

Exercise: Let $u_{\text{UB: sat}} - u_{\text{LB: sat}} = .5$, ratio $= 7$. What is the largest $\alpha$ for which the simple criterion can guarantee $\alpha$-buffering?

Answer: $\alpha = 13$.
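For sanity-checking the exercises, here is a small sketch of both criteria in their $[0,1]$-bounded form (my own code; names mirror the theorem's symbols). It reproduces the first exercise's answer.

```python
def simple_alpha(impact_ratio, u_ub_sat, u_lb_sat):
    """Largest alpha guaranteed by the simple criterion:
    (I_cat / I_sat) * (u_ub_sat - u_lb_sat) >= 1 + alpha."""
    return impact_ratio * (u_ub_sat - u_lb_sat) - 1

def refined_alpha(impact_ratio, u_ub_sat, u_lb_sat):
    """Largest alpha guaranteed by the refined criterion:
    ((I_cat - I_sat) / I_sat) * (u_ub_sat - u_lb_sat) / (1 - u_lb_sat)
    >= 1 + alpha."""
    return (impact_ratio - 1) * (u_ub_sat - u_lb_sat) / (1 - u_lb_sat) - 1

# First exercise: with u_ub = .7 and u_lb = .5, a catastrophe/reasonable
# impact ratio of 10 gives (10 - 1) * 0.2 / 0.5 - 1 = 2.6-buffering.
print(refined_alpha(10, 0.7, 0.5))  # ~2.6
```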
Even More Math

Proposition 12 [Invariances]. Let $P$ be an impact partition induced by $(\bar{A}, u, I)$.

(a) $P$ is invariant to translation of $u$.

(b) If $P$ is constrained, it is invariant to positive scalar multiplication of $u$, and the relative lengths of its subintervals are invariant to positive scalar multiplication of $I$.

(c) If $P$ is scaled, it is invariant to concurrent positive scalar multiplication of $u$ and $I$, and to translation of $I$ such that its image remains non-negative.

In particular, $u$ may be restricted to $[0,1]$ and $I$ translated such that at least one plan has zero impact WLOG with respect to scaled partitions.

Lemma 13. Multiple constrained subintervals are induced iff multiple scaled subintervals are induced.

Proof. Forward direction: there is at least one scaled subinterval by Lemma 5 (first subinterval is the best plan with minimal impact). Consider a plan corresponding to a different constrained subinterval; this either appears in the scaled partition, or fails to appear because a different plan earlier satisfies the scaled dominance criterion. There must be some such plan because there are multiple constrained subintervals and therefore a plan offering greater score for greater impact. Repeat the argument; the plan space is finite, so we end up with another plan which appears. The reverse direction follows by Lemma 3 (constrained impact partitions are more refined). □

Bonus exercise: Show that, for any function $u' : \bar{A} \to \mathbb{R}$ preserving the ordering induced by $u$, there exists an $I' : \bar{A} \to \mathbb{R}_{\geq 0}$ preserving the ordering induced by $I$ such that $(\bar{A}, u, I)$ and $(\bar{A}, u', I')$ induce the same scaled partition. Your reasoning should adapt directly to the corresponding statement about $I' : \bar{A} \to \mathbb{R}_{\geq 0}$ and $I$. |
624a6c21-b7aa-472b-a5f7-3db4c0efd1ec | trentmkelly/LessWrong-43k | LessWrong | Can you improve your intelligence with these types of exercises?
Hi, I write and find exercises on biases to help myself and others think better.
For example:
Bob is an opera fan who enjoys touring art museums when on holiday.
Growing up, he enjoyed playing chess with family members and friends.
Which situation is more likely?
1. Bob plays trumpet for a major symphony orchestra.
2. Bob is a farmer.
My question to the LessWrong community:
Does it make sense to learn like this?
Answer to the example (and other exercises):
https://newsletter.decisionschool.org/p/decision-making-bias-base-rate-fallacy
|
5d21e512-08d0-48cb-a0c5-78257a647170 | trentmkelly/LessWrong-43k | LessWrong | Homepathy: One down, infinity to go
A few days ago the Australian National Health and Medical Research Council (NHMRC) released a report that reviewed 1800 papers on homeopathic treatments. In what should be a surprise to no one on LW, it concluded that they were no more effective than placebo.
Even though this particular failure mode of human reasoning (belief in homeopathy) was never that hard to refute, there are other, closely related errors in reasoning that often go undetected, and I think it's useful to notice and discuss them. I always found homeopathy interesting because it is probably one of the purest manifestations of magical thinking - the belief (conscious or not) in the existence of a supernatural essence that resides in objects, animals, and people (or more abstract things, such as events). All of us are guilty of thinking magically once in a while. Some common forms of magical thinking include believing in karma, believing in souls, valuing handcrafted objects over factory-produced ones (without pointing to specific functional reasons why the factory-built objects are inferior), and believing that no machine could ever be as intelligent, moral, or conscious as a human.
Is it possible to identify when magical thinking is taking place? I feel that this is harder than it sounds, and is something that could be of great use in the rationalist toolbox. |
35a70293-deda-46e7-897f-b61ec3903222 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What are scaling laws?
Scaling laws express a relationship between the three [aspects of compute](/?state=9358&question=What%20is%20compute%3F) (the length of the training run, the amount of training data, and the size of the model) and the performance of a model. They’re used to allocate limited resources between these three variables in a way that results in the highest capabilities.
Compute to train large foundation models like GPT is not cheap. If you’re going to spend 10 million dollars training your models, you want to be confident that you’re allocating that money efficiently. So, in 2020, instead of continuing to rely on gut feelings, OpenAI came up with the first generation of [scaling laws](https://arxiv.org/abs/2001.08361).
Scaling laws are used to decide on trade-offs like: Should I pay Stack Overflow to legally be able to train on their data? Or should I buy more GPUs? Or should I pay the extra electricity bills I would have by training my model longer? If my compute goes up by 10×, how many parameters should I add to my model to make the best possible use of my GPUs?
In the case of very large language models like GPT-3, these trade-offs look more like training a 20-billion parameter model on 40% of an archive of the Internet vs. training a 200-billion parameter model on 4% of an archive of the Internet, or any of an infinite number of points along the same boundary.
OpenAI’s paper found that it is almost always better to increase your model size than to increase your dataset size. Subsequent researchers and institutions took this philosophy to heart and focused on engineering larger models, rather than on training smaller models over more data. The following table and [graph](https://epochai.org/blog/machine-learning-model-sizes-and-the-parameter-gap) show the change in trend of parameter growth of machine learning models. Note the increase to half a trillion parameters with constant training data.

*[Table and graph omitted: the trend in machine learning model parameter counts over time, from Epoch AI.]*

DeepMind [updated these scaling laws](https://arxiv.org/abs/2203.15556) in 2022. They found that for every increase in compute, you should increase data size and model size by approximately the *same amount*. To verify that the law was right, DeepMind trained a 70-billion parameter model ("Chinchilla") using the same compute as had been used for the 280-billion parameter Gopher. That is, they trained the smaller Chinchilla with 1.4 trillion tokens, while the larger Gopher had only been trained with 300 billion tokens. And, as the new scaling laws predict, Chinchilla is a lot better than Gopher on pretty much every metric.[^kix.ea6rmm5rdbbc]
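As a rough illustration of how these laws are applied (a sketch under stated assumptions, not code from either paper): the common approximation that training compute is $C \approx 6ND$ FLOPs, combined with the Chinchilla finding that $N$ and $D$ should grow together (roughly 20 training tokens per parameter), yields a simple allocation rule. The parametric loss constants below are approximate values fitted by the Chinchilla authors.

```python
def chinchilla_allocation(c_flops, tokens_per_param=20):
    """Split a compute budget C ~ 6*N*D FLOPs into N parameters and
    D training tokens, holding D = tokens_per_param * N."""
    n_params = (c_flops / (6 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

def parametric_loss(n_params, d_tokens):
    """Chinchilla-style fitted loss L(N, D) = E + A/N^alpha + B/D^beta,
    with approximate published constants."""
    return 1.69 + 406.4 / n_params**0.34 + 410.7 / d_tokens**0.28

# Chinchilla's own budget (~5.9e23 FLOPs) recovers ~70B params / ~1.4T tokens:
n, d = chinchilla_allocation(5.88e23)
print(f"{n:.1e} params, {d:.1e} tokens")  # ~7.0e10 params, ~1.4e12 tokens

# At equal compute, the balanced model beats a Gopher-shaped (280B) one:
print(parametric_loss(n, d))                          # lower loss (better) ...
print(parametric_loss(280e9, 5.88e23 / (6 * 280e9)))  # ... than this
```

Because the exponents are well below 1, returns diminish in both $N$ and $D$, which is why balancing the two beats pouring all of the extra compute into model size.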
[^kix.ea6rmm5rdbbc]: 1a3orn (2022), [New Scaling Laws for Large Language Models](https://www.alignmentforum.org/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-models) |
084dcba4-ce18-42ba-a500-bf53e518bece | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What should I do with my idea for helping with AI alignment?
Maybe you’ve thought of a project that seems really valuable for someone to do, but nobody seems to be doing it; the AI safety community seems to be collectively [dropping the ball](https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s). It’s possible that you’ve missed something, and the project either isn’t valuable or is already being done. But it’s also possible that the project genuinely is really valuable. That means that, from an expected value perspective, looking into this sort of idea is usually a good bet.
To make progress, you could go to [Alignment Ecosystem Development](https://alignment.dev/) to get more people involved with your idea, start an [AI Safety Quest](https://aisafety.quest/) group for it, or find collaborators in some other way. It also helps to make yourself “discoverable”. For advice, you could talk to [AI Safety Support](https://www.aisafetysupport.org/). You could also visit [EA](https://www.effectivealtruism.org/) hubs or attend [EA Global](https://www.effectivealtruism.org/ea-global) to exchange ideas. Consider using [red-teaming](/?state=8FJZ&question=How%20is%20red%20teaming%20used%20in%20AI%20alignment%3F) or [murphyjitsu](https://www.lesswrong.com/posts/Htjbj8ystqc2zLtMX/murphyjitsu-an-inner-simulator-algorithm) to find the most likely ways your project could fail and avoid them. If you’re not sure the project is really something you want to do, try [goal factoring](https://www.lesswrong.com/posts/ZHWiCM4QmX8WwYajH/goal-factoring-1). If you *are* sure, you can [apply for funding](/?state=6703&question=I%20want%20to%20work%20on%20AI%20alignment.%20How%20can%20I%20get%20funding%3F) for your project from philanthropists interested in promoting AI safety.
|
befe6c74-a99e-410c-a13a-f015166c6e5b | trentmkelly/LessWrong-43k | LessWrong | Is the game design/art maxim more generalizable to criticism/praise itself?
For those who don't know, the maxim that game designers and artists use when responding to criticism is roughly the following:

"If you get a negative reaction on its own, that's evidence that there's a problem."

"If customers give reasons why, or propose solutions to the problem, ignore those reasons and solutions."
Is this generally true of criticism against anything, and if so, why do you think this happens? |
0623b799-aa41-49fa-95e5-35a9faa95d28 | trentmkelly/LessWrong-43k | LessWrong | How to prevent Re-Traumatization on Meditation Retreats
Meditation tends to bring suppressed material to the surface: extremely unpleasant emotions, thoughts, or memories. When this happens while you're alone on a silent meditation retreat and you don't know how to deal with it, there's a certain risk of re-traumatization. My intuition is that if you know what you are doing, the risk of this is minimal and you don't have to worry much about it. But you should know what you are doing - hence this post.
What re-traumatization is and is not
It's NOT the process of said suppressed material suddenly coming to the forefront of your mind during the quiet of meditation and now spooking around in consciousness, potentially making you feel emotionally worse for some time. This is completely normal, and in a way it's actually exactly what you want to happen! Bringing unpleasant suppressed material to the surface is just what needs to happen for emotional healing. Still, this is the reason some people find themselves surprised they are actually not that happy during or shortly after a silent meditation retreat.
Re-traumatization - I'm using the term in a very broad sense - happens when a past overwhelming experience comes up from your unconscious, and you are again overwhelmed by it and still cannot deal with it.
The key to prevention: mindfulness & metacognitive awareness
You prevent re-traumatization by maintaining metacognitive awareness, which basically just means "stay mindful of your internal experience". That is, you don't get sucked into those extreme feelings, don't fully blend with them, don't "get lost in thought in" them, don't let them be the only thing filling your consciousness.
Instead, it's crucial that you maintain some context: "Okay, I am here at this retreat where there is some chance for extremely negative feelings to come up. And indeed, this is exactly what I feel is happening right now. I can feel that a strong negative feeling is happening; I am in contact with some past trauma. This is
b36d869e-0f76-4bc4-94b8-726672f28076 | trentmkelly/LessWrong-43k | LessWrong | Book Club: Software Design for Flexibility
Gerry Sussman's new book on how to build adaptive systems just came out. It's called "Software Design for Flexibility: How to Avoid Programming Yourself into a Corner" and it covers similar ground to Sussman's advanced symbolic programming class which I liked a lot. The class itself is a kind of follow-up to SICP.
I'm reading the book over the next 8 weeks. If you want to read it together, and share notes and toy implementations, add your name to this doc.
Excerpt:
> Our goal in this book is to investigate how to construct computational systems so that they can be easily adapted to changing requirements. One should not have to modify a working program. One should be able to add to it to implement new functionality or to adjust old functions for new requirements. We call this additive programming. We explore techniques to add functionality to an existing program without breaking it. |
922b32f4-51f1-4d86-8e5b-323d76667f97 | trentmkelly/LessWrong-43k | LessWrong | Most sensible abstraction & feature set for a systems language?
Edit: Made the title read more like a question, as intended; tried to clarify some points I've made; and provided more context on my motivations
Motivations
Systems programming is not well defined. But for the purposes of this post I define it to mean "writing software to be used as core, performant and reliable modules for other software". A systems software would not be used to provide business value directly, but rather be part of a network of modules eventually used in an application software that does provide business value. If a systems software dies, so could the user. If an application software dies, it could be rebootable and fixed by IT professionals. For every extra millisecond a systems software spends doing its job, the application software using it spends an extra second. That's $1 million lost.
Current systems languages like C, C++, and Rust do their job in the systems programming space and have been thoroughly optimised over the years. Unfortunately, problems with ergonomics and complexity, and a tendency toward bad code and shoddy solutions, remain.
Newer languages like Rust have tackled many of the above issues, leading to a language that is by far the most loved by devs (Stack Overflow 2020-2022) compared to the next languages on the list. Unfortunately, critics often cite Rust's complexity with its lifetimes and borrowing system, as well as parts of its syntax. I happen to think they're a bit verbose sometimes and not as easy to digest in one skim through. Rust's toolchain has definitely improved the overall development experience, simplifying the retrieval, management and usage of external libraries as well as providing a uniform(ish) interface to customise and adapt to your use case. The language server protocol was also a step in the right direction, allowing extra productivity if your language can properly utilise its full potential. I think rust-analyzer is among the best language servers available, with Rust's strong static checks, leadi
f03893f6-2d8a-4d2d-a314-2d415a9eab84 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "As previously discussed, on June 6th I received a message from jackk, a Trike Admin. He reported that the user Jiro had asked Trike to carry out an investigation to the retributive downvoting that Jiro had been subjected to. The investigation revealed that the user Eugine_Nier had downvoted over half of Jiro's comments, amounting to hundreds of downvotes.
I asked the community's guidance on dealing with the issue, and while the matter was being discussed, I also reviewed previous discussions about mass downvoting and looked for other people who mentioned being the victims of it. I asked Jack to compile reports on several other users who mentioned having been mass-downvoted, and it turned out that Eugine was also overwhelmingly the biggest downvoter of users David_Gerard, daenarys, falenas108, ialdabaoth, shminux, and Tenoke. As this discussion was going on, it turned out that user Ander had also been targeted by Eugine.
I sent two messages to Eugine, requesting an explanation. I received a response today. Eugine admitted his guilt, expressing the opinion that LW's karma system was failing to carry out its purpose of keeping out weak material and that he was engaged in a "weeding" of users who he did not think displayed sufficient rationality.
Needless to say, it is not the place of individual users to unilaterally decide that someone else should be "weeded" out of the community. The Less Wrong content deletion policy contains this clause: Harrassment of individual users. If we determine that you're e.g. following a particular user around and leaving insulting comments to them, we reserve the right to delete those comments. (This has happened extremely rarely.) Although the wording does not explicitly mention downvoting, harassment by downvoting is still harassment. Several users have indicated that they have experienced considerable emotional anguish from the harassment, and have in some cases been discouraged from using Less Wrong at all. This is not a desirable state of affairs, to say the least.
I was originally given my moderator powers on a rather ad-hoc basis, with someone awarding mod privileges to the ten users with the highest karma at the time. The original purpose for that appointment was just to delete spam. Nonetheless, since retributive downvoting has been a clear problem for the community, I asked the community for guidance on dealing with the issue. The rough consensus of the responses seemed to authorize me to deal with the problem as I deemed appropriate.
The fact that Eugine remained quiet about his guilt until directly confronted with the evidence, despite several public discussions of the issue, is indicative of him realizing that he was breaking prevailing social norms. Eugine's actions have worsened the atmosphere of this site, and that atmosphere will remain troubled for as long as he is allowed to remain here.
Therefore, I now announce that Eugine_Nier is permanently banned from posting on LessWrong. This decision is final and will not be changed in response to possible follow-up objections.
Unfortunately, it looks like while a ban prevents posting, it does not actually block a user from casting votes. I have asked jackk to look into the matter and find a way to actually stop the downvoting. Jack indicated earlier on that it would be technically straightforward to apply a negative karma modifier to Eugine's account, and wiping out Eugine's karma balance would prevent him from casting future downvotes. Whatever the easiest solution is, it will be applied as soon as possible.
EDIT 24 July 2014: Banned users are now prohibited from voting."
What you see above is a segment from a LessWrong user's post.
Can you explain what this text is saying as if I've never finished my first 2 years of college?
Sure, this text is about a problem in an online community called LessWrong.
A person named Jack, who helps run the website, told the author that another user, Jiro, was getting a lot of negative votes on his comments. They found out that most of these negative votes were coming from a user named Eugine_Nier.
The author checked if other users were also getting lots of negative votes from Eugine_Nier, and found out that was indeed the case.
The author asked Eugine_Nier about this, and Eugine_Nier admitted that he was giving negative votes to people whose comments he didn't think were good enough. But it's not up to one person to decide who's good enough to be part of the community.
The author says that giving lots of negative votes to someone can make them feel bad and stop them from using the website, which is not what they want.
Because of all this, the author decided to ban Eugine_Nier from the website, which means he can't post anything anymore. But there's a problem because even though he's banned, Eugine_Nier can still give negative votes. The author is trying to find a way to stop this from happening.
What potential negative impacts could the banning of Eugine_Nier have on the LessWrong community?
Banning Eugine_Nier could potentially have several impacts on the LessWrong community:
1. Disruption of Community Dynamics: Eugine_Nier might have been a prolific contributor to discussions. His absence might disrupt the dynamics of the community and lead to a decrease in activity or engagement.
2. Polarization: If Eugine_Nier had a group of supporters within the community, the ban could polarize members, leading to disagreements and conflicts. This could further disrupt the harmony of the community.
3. Chilling Effect: The ban could create a "chilling effect," where other members become afraid of being banned for expressing unpopular opinions or for engaging in downvoting. This could lead to self-censorship and a reduction in the diversity of viewpoints expressed.
4. Loss of Content: If Eugine_Nier was a significant contributor, his ban could result in a loss of content for the community. This could reduce the richness of discussions and the amount of information available to members.
5. Precedent for Future Bans: The ban sets a precedent for how the community handles conflicts and violations. This could be positive if it deters harmful behavior, but negative if members feel it was unjust and fear similar treatment.
6. Potential Retaliation: If Eugine_Nier feels wronged by the ban, he might seek ways to retaliate, such as creating new accounts to continue downvoting or spreading negative information about the community elsewhere.
Remember, these are potential impacts and the actual outcome would depend on many factors, including the community's response to the ban, Eugine_Nier's role within the community, and the actions of the moderators moving forward.
Reflecting on the potential negative impacts of banning Eugine_Nier, could there be any unforeseen positive outcomes for the LessWrong community?
Yes, there could potentially be several positive outcomes from banning Eugine_Nier:
1. Improved Community Atmosphere: If Eugine_Nier's actions were causing distress or discomfort among members, his removal could lead to an improved atmosphere, encouraging more open and respectful discussions.
2. Deterrence: The ban serves as a clear signal that certain behaviors are not tolerated. This could deter other members from engaging in similar activities, thereby promoting a more respectful and supportive community.
3. Increased Trust in Moderation: If members felt that Eugine_Nier's actions were harmful and went unpunished for too long, the decisive action of banning him could increase trust in the community's moderation and governance.
4. Encouragement of Constructive Criticism: With the removal of a user who mass downvoted, members might feel more encouraged to provide constructive criticism instead of simply downvoting, fostering a more productive dialogue.
5. Attraction of New Members: If news of the ban spreads, it could attract new members who appreciate the community's stance against harassment and its commitment to maintaining a positive and respectful environment.
6. Empowerment of Targeted Users: Users who were targeted by Eugine_Nier may feel more comfortable participating in discussions knowing that the community has taken steps to address the issue.
These potential positive outcomes would largely depend on how the ban is communicated and enforced, as well as the community's overall response to the situation.
How might the LessWrong community effectively communicate and enforce the ban to maximize the potential positive outcomes?
To maximize the potential positive outcomes, the LessWrong community could consider the following strategies:
1. Clear Communication: The community should clearly communicate the reasons for the ban, emphasizing its commitment to maintaining a respectful and supportive environment. This would help members understand why the decision was made and how it aligns with the community's values.
2. Transparency: The process leading to the ban should be transparent. Sharing how the decision was made and the steps taken to investigate the issue can build trust in the community's governance.
3. Consistency: The community should consistently enforce its rules and policies. This means not only banning Eugine_Nier, but also taking similar action against any other members who engage in comparable behavior. Consistency will help deter other members from engaging in such activities.
4. Support for Affected Members: The community should provide support for members who were targeted by Eugine_Nier. This could involve personal outreach, opportunities to share their experiences, or resources to help them cope with any negative effects they experienced.
5. Encourage Constructive Behavior: The community could use this opportunity to promote constructive behavior. This could involve highlighting the value of constructive criticism over simple downvoting, or organizing activities that foster positive interactions among members.
6. Openness to Feedback: The community should be open to feedback about the ban and willing to make adjustments based on members' responses. This will show that the community values its members' opinions and is willing to learn and grow from the experience.
By implementing these strategies, the LessWrong community can turn the ban into an opportunity for growth and improvement, thereby maximizing the potential positive outcomes. |
c8074900-260b-43c6-98db-0b0e874d8907 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Six Dimensions of Operational Adequacy in AGI Projects
| |
| --- |
| **Editor's note:** The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017. A background note: It’s often the case that people are slow to abandon obsolete playbooks in response to a novel challenge. And AGI is certainly a very novel challenge. Italian general Luigi Cadorna offers a memorable historical example. In the Isonzo Offensive of World War I, Cadorna lost hundreds of thousands of men in futile frontal assaults against enemy trenches defended by barbed wire and machine guns. As morale plummeted and desertions became epidemic, Cadorna began executing his own soldiers en masse, in an attempt to cure the rest of their “cowardice.” The offensive continued for *2.5 years*. Cadorna made many mistakes, but foremost among them was his refusal to recognize that this war was fundamentally unlike those that had come before. Modern weaponry had forced a paradigm shift, and Cadorna’s instincts were not merely miscalibrated—they were systematically broken. No number of small, incremental updates within his obsolete framework would be sufficient to meet the new challenge. Other examples of this type of mistake include the initial response of the record industry to iTunes and streaming; or, more seriously, the response of most Western governments to COVID-19. As usual, the real challenge of reference class forecasting is figuring out which reference class the thing you’re trying to model belongs to. For most problems, rethinking your approach from the ground up is wasteful and unnecessary, because most problems have a similar causal structure to a large number of past cases. When the problem isn’t commensurate with existing strategies, as in the case of AGI, you need a new playbook. |
I've sometimes been known to complain, or in a polite way scream in utter terror, that "there is no good guy group in AGI", i.e., if a researcher on this Earth currently wishes to contribute to the common good, there are literally zero projects they can join and no project close to being joinable. In its present version, this document is an informal response to an AI researcher who asked me to list out the qualities of such a "good project".
In summary, a "good project" needs:
* *Trustworthy command:* A trustworthy chain of command with respect to both legal and pragmatic control of the intellectual property (IP) of such a project; a running AGI being included as "IP" in this sense.
* *Research closure:* The organizational ability to *close* and/or *silo* IP to within a trustworthy section and prevent its release by sheer default.
* *Strong opsec:* Operational security adequate to prevent the proliferation of code (or other information sufficient to recreate code within e.g. 1 year) due to e.g. Russian intelligence agencies grabbing the code.
* *Common good commitment:* The project's command and its people must have a credible commitment to both short-term and long-term goodness. Short-term goodness comprises the immediate welfare of present-day Earth; long-term goodness is the achievement of transhumanist astronomical goods.
* *Alignment mindset:* Somebody on the project needs deep enough [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) plus understanding of AI cognition that they can originate new, deep measures to ensure AGI alignment; and they must be in a position of technical control or otherwise have effectively unlimited political capital. Everybody on the project needs to understand and expect that aligning an AGI will be terrifically difficult and terribly dangerous.
* *Requisite resource levels:* The project must have adequate resources to compete at the frontier of AGI development, including whatever mix of computational resources, intellectual labor, and closed insights are required to produce a 1+ year lead over less cautious competing projects.
I was asked what would constitute "minimal, adequate, and good" performance on each of these dimensions. I tend to divide things sharply into "not adequate" and "adequate" but will try to answer in the spirit of the question nonetheless.
**Trustworthy command**
-----------------------
**Token:** Not having pragmatic and legal power in the hands of people who are opposed to the very idea of trying to align AGI, or who want an AGI in every household, or who are otherwise allergic to the *easy* parts of AGI strategy.
E.g.: Larry Page begins with the correct view that [cosmopolitan](https://arbital.com/p/value_cosmopolitan/) values are good, speciesism is bad, it would be wrong to mistreat sentient beings just because they're implemented in silicon instead of carbon, and so on. But he then proceeds to reject the idea that goals and capabilities are [orthogonal](https://arbital.com/p/orthogonality/), that instrumental strategies are [convergent](https://arbital.com/p/instrumental_convergence/), and that value is [complex and fragile](https://arbital.com/p/complexity_of_value/). As a consequence, he expects AGI to automatically be friendly, and is liable to object to any effort to align AI [as an attempt to keep AI "chained up"](https://books.google.com/books?id=2hIcDgAAQBAJ&pg=PA32&lpg=PA32&dq=Larry+%22that+digital+life+is+the+natural+and+desirable+next+step+in+the+cosmic+evolution+and+that+if+we+let+digital+minds+be+free+rather+than+try+to+stop+or+enslave+them+the+outcome+is+almost+certain+to+be+good%22&source=bl&ots=DIQP9C1EgF&sig=ACfU3U04K3r-b1kQqEvWF71-1Oo4ppsZsw&hl=en&sa=X&ved=2ahUKEwiFrvi6-K3gAhUHwlQKHc83AhgQ6AEwAXoECAkQAQ#v=onepage&q=Larry%20%22that%20digital%20life%20is%20the%20natural%20and%20desirable%20next%20step%20in%20the%20cosmic%20evolution%20and%20that%20if%20we%20let%20digital%20minds%20be%20free%20rather%20than%20try%20to%20stop%20or%20enslave%20them%20the%20outcome%20is%20almost%20certain%20to%20be%20good%22&f=false).
Or, e.g.: As of December 2015, Elon Musk not only wasn’t on board with closure, but apparently [wanted to *open-source*](https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a) superhumanly capable AI.
Elon Musk is not in his own person a majority of OpenAI's Board, but if he can pragmatically sway a majority of that Board then this measure is not being fulfilled even to a token degree.
(Update: Elon Musk [stepped down](https://openai.com/blog/openai-supporters/) from the OpenAI Board in February 2018.)
**Improving:** There's a legal contract which says that the Board doesn't control the IP and that the alignment-aware research silo does.
**Adequate:** The entire command structure including all members of the finally governing Board are fully aware of the difficulty and danger of alignment. The Board will not object if the technical leadership have disk-erasure measures ready in case the Board suddenly decides to try to open-source the AI anyway.
**Excellent:** Somehow *no* local authority poses a risk of stepping in and undoing any safety measures, etc. I have no idea what incremental steps could be taken in this direction that would not make things worse. If e.g. the government of Iceland suddenly understood how serious things had gotten and granted sanction and security to a project, that would fit this description, but I think that trying to arrange anything like this would probably make things worse globally because of the mindset it promoted.
**Closure**
-----------
**Token:** It's generally understood organizationally that some people want to keep code, architecture, and some ideas a 'secret' from outsiders, and everyone on the project is okay with this even if they disagree. In principle people aren't being pressed to publish their interesting discoveries if they are obviously capabilities-laden; in practice, somebody always says "but someone else will probably publish a similar idea 6 months later" and acts suspicious of the hubris involved in thinking otherwise, but it remains possible to get away with not publishing at moderate personal cost.
**Improving:** A subset of people on the project understand why some code, architecture, lessons learned, et cetera must be kept from reaching the general ML community if success is to have a probability [significantly greater than zero](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/) (because [tradeoffs between alignment and capabilities](https://arbital.com/p/aligning_adds_time/) make the challenge unwinnable if there isn’t a project with a reasonable-length lead time). These people have formed a closed silo within the project, with the sanction and acceptance of the project leadership. It's socially okay to be *conservative* about what counts as potentially capabilities-laden thinking, and it's understood that worrying about this is not a boastful act of pride or a trick to get out of needing to write papers.
**Adequate:** Everyone on the project understands and agrees with closure. Information is siloed whenever not everyone on the project needs to know it.
| |
| --- |
| *Reminder: This is a 2017 document.* |
**Opsec**
---------
**Token:** Random people are not allowed to wander through the building.
**Improving:** Your little brother cannot steal the IP. Stuff is encrypted. Siloed project members sign NDAs.
**Adequate:** Major governments cannot silently and unnoticeably steal the IP without a nonroutine effort. All project members undergo government-security-clearance-style screening. AGI code is not running on AWS, but in an airgapped server room. There are cleared security guards in the server room.
**Excellent:** Military-grade or national-security-grade security. (It's hard to see how attempts to get this could avoid being counterproductive, considering the difficulty of obtaining trustworthy command and common good commitment with respect to any entity that can deploy such force, and the effect that trying would have on general mindsets.)
**Common good commitment**
--------------------------
**Token:** Project members and the chain of command are not openly talking about how dictatorship is great so long as they get to be the dictator. The project is not directly answerable to Trump or Putin. They say vague handwavy things about how of course one ought to promote democracy and apple pie (applause) and that everyone ought to get some share of the pot o' gold (applause).
**Improving:** Project members and their chain of command have come out explicitly in favor of being nice to people and eventually building a nice intergalactic civilization. They would release a cancer cure if they had it, their state of deployment permitting, and they don't seem likely to oppose incremental steps toward a postbiological future and the eventual realization of [most of the real value at stake](https://www.nickbostrom.com/astronomical/waste.html).
**Adequate:** Project members and their chain of command have an explicit commitment to something like [coherent extrapolated volition](https://arbital.com/p/cev/) as a long-run goal, AGI tech permitting, and otherwise the careful preservation of values and sentient rights through any pathway of intelligence enhancement. In the short run, they would not do everything that seems to them like a good idea, and would first prioritize not destroying humanity or wounding its spirit with their own hands. (E.g., if Google or Facebook consistently thought like this, they would have become concerned a lot earlier about social media degrading cognition.) Real actual moral humility with policy consequences is a thing.
**Alignment mindset**
---------------------
**Token:** At least some people in command sort of vaguely understand that AIs don't just automatically do whatever the alpha male in charge of the organization wants to have happen. They've hired some people who are at least pretending to work on that in a technical way, not just "[ethicists](https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics)" to talk about trolley problems and [which monkeys should get the tasty banana](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/).
**Improving:** The technical work output by the "safety" group is neither obvious nor wrong. People in command have [ordinary paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) about AIs. They expect alignment to be somewhat difficult and to take some extra effort. They understand that not everything they might like to do, with the first AGI ever built, is equally safe to attempt.
**Adequate:** The project has realized that building an AGI is *mostly* about aligning it. Someone with full security mindset and deep understanding of AGI cognition as cognition has proven themselves able to originate new deep alignment measures, and is acting as technical lead with effectively unlimited political capital within the organization to make sure the job actually gets done. Everyone expects alignment to be terrifically hard and terribly dangerous and full of invisible bullets whose shadow you have to see before the bullet comes close enough to hit you. They understand that alignment severely constrains architecture and that capability often trades off against transparency. The organization is targeting the [minimal](https://arbital.com/p/minimality_principle/) AGI doing the least dangerous cognitive work that is required to prevent the next AGI project from destroying the world. The [alignment assumptions](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) have been reduced into non-goal-valent statements, have been clearly written down, and are being monitored for their actual truth.
Alignment mindset is *fundamentally* difficult to obtain for a project because [Graham's Design Paradox](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/) applies. People with only ordinary paranoia may not be able to distinguish the next step up in depth of cognition, and happy innocents cannot distinguish useful paranoia from suits making empty statements about risk and safety. They also tend not to realize what they're missing. This means that there is a horrifically strong default that when you persuade one more research-rich person or organization or government to start a new project, that project *will* have inadequate alignment mindset unless something extra-ordinary happens. I'll be frank and say relative to the present world I think this essentially has to go through trusting me or Nate Soares to actually work, although see below about Paul Christiano. The lack of clear person-independent instructions for how somebody low in this dimension can improve along this dimension is why the difficulty of this dimension is the real killer.
If you insisted on trying this the impossible way, I'd advise that you start by talking to a brilliant computer security researcher rather than a brilliant machine learning researcher.
**Resources**
-------------
**Token:** The project has a combination of funding, good researchers, and computing power which makes it credible as a beacon to which interested philanthropists can add more funding and other good researchers interested in aligned AGI can join. E.g., OpenAI would qualify as this if it were adequate on the other 5 dimensions.
**Improving:** The project has size and quality researchers on the level of say Facebook's AI lab, and can credibly compete among the almost-but-not-quite biggest players. When they focus their attention on an unusual goal, they can get it done 1+ years ahead of the general field so long as Demis doesn't decide to do it first. I expect e.g. the NSA would have this level of "resources" if they started playing now but didn't grow any further.
**Adequate:** The project can get things done with a 2-year lead time on anyone else, and it's not obvious that competitors could catch up even if they focused attention there. DeepMind has a great mass of superior people and unshared tools, and is the obvious candidate for achieving adequacy on this dimension; though they would still need adequacy on other dimensions, and more closure in order to conserve and build up advantages. As I understand it, an adequate resource advantage is explicitly what Demis was trying to achieve, before Elon blew it up, started an openness fad and an arms race, and probably got us all killed. Anyone else trying to be adequate on this dimension would need to pull ahead of DeepMind, merge with DeepMind, or talk Demis into closing more research and putting less effort into unalignable AGI paths.
**Excellent:** There's a single major project which a substantial section of the research community understands to be The Good Project that good people join, with competition to it deemed unwise and unbeneficial to the public good. This Good Project is at least adequate along all the other dimensions. Its major competitors lack either equivalent funding or equivalent talent and insight. Relative to the present world it would be **extremely difficult** to make any project like this exist with adequately trustworthy command and alignment mindset, and failed attempts to make it exist run the risk of creating still worse competitors developing unaligned AGI.
**Unrealistic:** There is a single global Manhattan Project which is somehow not answerable to non-common-good command such as Trump or Putin or the United Nations Security Council. It has orders of magnitude more computing power and smart-researcher-labor than anyone else. Something keeps other AGI projects from arising and trying to race with the giant project. The project can freely choose transparency in all transparency-capability tradeoffs and take an extra 10+ years to ensure alignment. The project is at least adequate along all other dimensions. This is how our distant, surviving cousins are doing it in their Everett branches that diverged centuries earlier towards [more competent civilizational equilibria](https://equilibriabook.com/toc). You **cannot possibly** cause such a project to exist with adequately trustworthy command, alignment mindset, and common-good commitment, and you should therefore not try to make it exist, first because you will simply create a still more dire competitor developing unaligned AGI, and second because if such an AGI could be aligned it would be a hell of an [s-risk](https://www.lesswrong.com/tag/risks-of-astronomical-suffering-s-risks) given the probable command structure. People who are [slipping sideways in reality](https://www.facebook.com/yudkowsky/posts/10154981483669228) fantasize about being able to do this.
---
| |
| --- |
| *Reminder: This is a 2017 document.* |
*Further Remarks*
-----------------
A project with "adequate" closure and a project with "improving" closure will, if joined, aggregate into a project with "improving" (aka: inadequate) closure where the closed section is a silo within an open organization. Similar remarks apply along other dimensions. The aggregate of a project with NDAs, and a project with deeper employee screening, is a combined project with some unscreened people in the building and hence "improving" opsec.
"Adequacy" on the dimensions of **closure** and **opsec** is based around my mainline-probability scenario where you unavoidably need to spend at least 1 year in a regime where the AGI is not yet alignable on a minimal act that ensures nobody else will destroy the world shortly thereafter, but during that year it's possible to remove a bunch of safeties from the code, shift transparency-capability tradeoffs to favor capability instead, ramp up to full throttle, and immediately destroy the world.
During this time period, leakage of the code to the wider world automatically results in the world being turned into paperclips. Leakage of the code to multiple major actors such as commercial espionage groups or state intelligence agencies seems to me to stand an extremely good chance of destroying the world because at least one such state actor's command will not reprise the alignment debate correctly and each of them will fear the others.
I would also expect that, if key ideas and architectural lessons-learned were to leak from an insufficiently closed project that would otherwise have actually developed alignable AGI, it would be possible to use 10% as much labor to implement a non-alignable world-destroying AGI [in a shorter timeframe](https://arbital.com/p/aligning_adds_time/). The project must be closed *tightly* or everything ends up as paperclips.
"Adequacy" on **common good commitment** is based on my model wherein the first [task-directed AGI](https://arbital.com/p/task_agi/) continues to operate in a regime far below that of a real superintelligence, where many tradeoffs have been made for transparency over capability and this greatly constrains self-modification.
This task-directed AGI is *not* able to defend against true superintelligent attack. It *cannot* monitor other AGI projects in an unobtrusive way that grants those other AGI projects a lot of independent freedom to do task-AGI-ish things so long as they don't create an [unrestricted superintelligence](https://arbital.com/p/Sovereign/). The designers of the first task-directed AGI are *barely* able to operate it in a regime where the AGI doesn't create an unaligned superintelligence inside itself or its environment. Safe operation of the original AGI requires a continuing major effort at supervision. The level of safety monitoring of other AGI projects required would be so great that, if the original operators deemed it good that more things be done with AGI powers, it would be far simpler and safer to do them as additional tasks running on the original task-directed AGI. *Therefore:* Everything to do with invocation of superhuman specialized general intelligence, like superhuman science and engineering, continues to have a single effective veto point.
This is also true in less extreme scenarios where AGI powers can proliferate, but must be very tightly monitored, because no aligned AGI can defend against an unconstrained superintelligence if one is deliberately or accidentally created by taking off too many safeties. Either way, there is a central veto authority that continues to actively monitor and has the power to prevent anyone else from doing anything potentially world-destroying with AGI.
This in turn means that any use of AGI powers along the lines of uploading humans, trying to do human intelligence enhancement, or building a cleaner and more stable AGI to run a CEV, would be subject to the explicit veto of the command structure operating the first task-directed AGI. If this command structure does not favor something like CEV, or vetoes transhumanist outcomes from a transparent CEV, or doesn't allow intelligence enhancement, et cetera, then all future astronomical value can be permanently lost and even s-risks may apply.
A universe in which 99.9% of the sapient beings have no civil rights because way back on Earth somebody decided or *voted* that emulations weren't real people, is a universe plausibly much worse than paperclips. (I would see as self-defeating any argument from democratic legitimacy that ends with almost all sapient beings not being able to vote.)
If DeepMind closed to the silo level, put on adequate opsec, somehow gained alignment mindset within the silo, and allowed trustworthy command of that silo, then in my guesstimation it *might* be possible to save the Earth (we would start to leave the floor of the [logistic success curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/)).
OpenAI seems to me to be further behind than DeepMind along multiple dimensions. OAI is doing significantly better "safety" research, but it is all still inapplicable to serious AGI, AFAIK, even if it's not fake / obvious. I do not think that either OpenAI or DeepMind are out of the basement on the logistic success curve for the alignment-mindset dimension. It's not clear to me from where I sit that the miracle required to grant OpenAI a chance at alignment success is easier than the miracle required to grant DeepMind a chance at alignment success. If Greg Brockman or other decisionmakers at OpenAI are not totally insensible, neither is Demis Hassabis. Both OAI and DeepMind have significant metric distance to cross on Common Good Commitment; this dimension is relatively easier to max out, but it's not maxed out just by having commanders vaguely nodding along or publishing a mission statement about moral humility, nor by a fragile political balance with some morally humble commanders and some morally nonhumble ones. If I had a ton of money and I wanted to get a serious contender for saving the Earth out of OpenAI, I'd probably start by taking however many OpenAI researchers could pass screening and refounding a separate organization out of them, then using that as the foundation for further recruiting.
I have never seen anyone except Paul Christiano try what I would consider to be deep macro alignment work. E.g. if you look at Paul's AGI scheme there is a *global alignment story* with assumptions that can be broken down, and the idea of exact human imitation is a deep one rather than a shallow defense--although I don't think the assumptions have been broken down far enough; but nobody else knows they even ought to be trying to do anything like that. I [also think](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) Paul's AGI scheme is orders-of-magnitude too costly and has chicken-and-egg alignment problems. *But* I wouldn't totally rule out a project with Paul in technical command, because I would hold out hope that Paul could follow along with someone else's deep security analysis and understand it in-paradigm even if it wasn't his own paradigm; that Paul would suggest useful improvements and hold the global macro picture to a standard of completeness; and that Paul would take seriously how bad it would be to violate an alignment assumption even if it wasn't an assumption within his native paradigm. Nobody else except myself and Paul is currently in the arena of comparison. If we were both working on the same project it would still have unnervingly few people like that. I think we should try to get more people like this from the pool of brilliant young computer security researchers, not just the pool of machine learning researchers. Maybe that'll fail just as badly, but I want to see it tried.
I doubt that it is possible to produce a written scheme for alignment, or any other kind of fixed advice, that can be handed off to a brilliant programmer with ordinary paranoia and allow them to actually succeed. Some of the deep ideas are going to turn out to be wrong, inapplicable, or just plain missing. Somebody is going to have to notice the unfixable deep problems in advance of an actual blowup, and come up with new deep ideas and not just patches, as the project goes on.
| |
| --- |
| *Reminder: This is a 2017 document.* | |
e78552ad-94a4-4fe5-bbb4-6ecbd2757625 | trentmkelly/LessWrong-43k | LessWrong | Is Friendly-AI a good idea?
I know that to ask this question on this site is tantamount to heresy, and I know that the intentions are pure: to save uncountable human lives. But I would say that we are allowing ourselves to become blinded to what it is we are actually proposing when we talk of building an FAI. The reasoning of most of the people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
|
e362fd09-3a75-4f1f-b6ac-dd3534b2e5c7 | trentmkelly/LessWrong-43k | LessWrong | Forecasting rare events
In an earlier post, I looked at some general domains of forecasting. This post looks at some more specific classes of forecasting, some of which overlap with the general domains, and some of which are more isolated. The common thread to these classes of forecasting is that they involve rare events.
Different types of forecasting for rare events
When it comes to rare events, there are three different classes of forecasts:
1. Point-in-time-independent probabilistic forecasts: Forecasts that provide a probability estimate for the event occurring in a given timeframe, but with no distinction based on the point in time. In other words, the forecast may say "there is a 5% chance of an earthquake higher than 7 on the Richter scale in this geographical region in a year" but the forecast is not sensitive to the choice of year. These are sufficient to inform decisions on general preparedness. In the case of earthquakes, for instance, the amount of care to be taken in building structures can be determined based on these forecasts. On the other hand, it's useless for deciding the timing of specific activities.
2. Point-in-time-dependent probabilistic forecasts: Forecasts that provide a probability estimate that varies somewhat over time based on history, but aren't precise enough for a remedial measure that substantially offsets major losses. For instance, if I know that an earthquake will occur in San Francisco in the next 6 months with probability 90%, it's still not actionable enough for a mass evacuation of San Francisco. But some preparatory measures may be undertaken.
3. Predictions made with high confidence (i.e., a high estimated probability when the event is predicted) and a specific time, location, and characteristics: Precise predictions of date and time, sufficient for remedial measures that substantially offset major losses (but possibly at huge, if much smaller, cost). The situation with hurricanes, tornadoes, and blizzards is roughly in this category.
Sta |
b267678d-fa57-4864-8222-ffba73eb729b | trentmkelly/LessWrong-43k | LessWrong | AI #101: The Shallow End
The avalanche of DeepSeek news continues. We are not yet spending more than a few hours at a time in the singularity, where news happens faster than it can be processed. But it’s close, and I’ve had to not follow a bunch of other non-AI things that are also happening, at least not well enough to offer any insights.
So this week we’re going to consider China, DeepSeek and r1 fully split off from everything else, and we’ll cover everything related to DeepSeek, including the policy responses to the situation, tomorrow instead.
This is everything else in AI from the past week. Some of it almost feels like it is from another time, so long ago.
I’m afraid you’re going to need to get used to that feeling.
Also, I went on Odd Lots to discuss DeepSeek, where I was and truly hope to again be The Perfect Guest.
TABLE OF CONTENTS
1. Language Models Offer Mundane Utility. Time to think deeply.
2. Language Models Don’t Offer Mundane Utility. Writers shall remain blocked.
3. Language Models Don’t Offer You In Particular Mundane Utility. It’s your fault.
4. (Don’t) Feel the AGI. I wonder how much of this has changed since I wrote it?
5. Huh, Upgrades. Claude gets citations, o1 gets canvas.
6. They Took Our Jobs. Will there be enough GPUs to take all our jobs?
7. Get Involved. IFP is hiring an AI policy lobbyist.
8. Introducing. Two other new Chinese models are not as impressive so far.
9. In Other AI News. Great Scott!
10. Hype. OpenAI used to be the one with the hype, and perhaps it wasn’t so great.
11. We Had a Deal. Final details on what happened FrontierMath.
12. Quiet Speculations. What life might look like how fast in Glorious AGI Future.
13. The Quest for Sane Regulations. We were signing EOs before everyone panicked.
14. The Week in Audio. It’s me, going on Odd Lots, also Dario Amodei.
15. Don’t Tread on Me. AGI means rewriting the social contract, no matter what.
16. Rhetorical Innovation. Trump opines, and then also there’s a long rant.
17 |
c2404be8-51fe-44a8-9dd4-55efb36ebdb5 | trentmkelly/LessWrong-43k | LessWrong | 20 Critiques of AI Safety That I Found on Twitter
In no particular order, here's a collection of Twitter screenshots of people attacking AI Safety. A lot of them are poorly reasoned, and some of them are simply ad-hominem. Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers.
[The 20 numbered Twitter screenshots are images and are not reproduced in this text-only extract.]
(Screenshot 5 wasn't actually a critique, but it did convey useful information about the state of AI Safety's optics.)
Conclusions
I originally intended to end this post with a call to action, but we mustn't propose solutions immediately. In lieu of a specific proposal, I ask you, can the optics of AI safety be improved?
|
3d25acb5-d4fd-4c09-8218-fb026b8f4e06 | trentmkelly/LessWrong-43k | LessWrong | More "Personal" Introductions
One of the things I loved about studying liberal arts is that you actually got to know your professors. They would discuss their personal experiences in a topic ("Here's what I did during the feminist movement.."), you might get slide shows from their vacation in the country of study, or even invited to their house for a group dinner.
Going into engineering was rather jarring for me in that regard. The vast majority of professors would come to class, lecture on the topic, and that would be it. They might share what their specific field of study was, but they rarely shared any personal details. It actually made it harder for me to learn, because it was like "Who is this person who is talking to me?"
(I think a large part of this for me personally was because I am motivated by a desire to please, and so if I liked my professors, then I wouldn't want to inconvenience them by handing things in late, or bore them by giving them another sub-par paper to read. But that's another discussion...)
I've noticed that Less Wrong is similar in some ways. We may know about each other's views on particular topics, and general fields of study, but we know very little about each other as people, unless a personal topic happens to be related to a particular rationalist study. Even the intro thread set up here focuses mainly on non-personal information.
For example, a Generic Intro post right now would be something like: "I'm X years old. From place Y. The fields I study/want to study are Z. Here's what college/HS was/is like for me. I have akrasia." Pretty boring, right? INSTEAD, the things I would be interested in knowing about my fellow LWers include: "On my time off I enjoy underwater basketweaving and climbing Mt Kilamanjaro. I have 6 young daughters and a dog named Grrr. I love pesto. etc"
From a rational perspective, an argument could be made that it's easier to have constructive arguments that remain civil when you humanize the people you are speaking with.
---------- |
1f1b5851-cc81-4d12-a78c-af3c04f4b47e | trentmkelly/LessWrong-43k | LessWrong | An Elementary Introduction to Infra-Bayesianism
This is my attempt to summarize Infra-Bayesianism probability theory at a level approaching "high school class in probability", as opposed to the "math-major class in probability theory" of the original. I aimed to focus on including simple exercises (with answers given in 'the back of the book') of the kind I find helps in learning to do computations. I'm still writing the answer-sheet, so there may be mistakes / blanks.
It's been sitting on my desk for a while and I figured I'd post it 80%-baked rather than never - please feel free to leave (polite) comments suggesting improvements or noting errors, calculational or interpretational.
The summary is linked here: Link |
5964897b-5ead-4140-b67c-c7894594187e | trentmkelly/LessWrong-43k | LessWrong | AI #60: Oh the Humanity
Many things this week did not go as planned.
Humane AI premiered its AI pin. Reviewers noticed it was, at best, not ready.
Devin turns out to have not been entirely forthright with its demos.
OpenAI fired two employees who had been on its superalignment team, Leopold Aschenbrenner and Pavel Izmailov for allegedly leaking information, and also more troubliningly lost Daniel Kokotajlo, who expects AGI very soon, does not expect it to by default go well, and says he quit ‘due to losing confidence that [OpenAI] would behave responsibly around the time of AGI.’ That’s not good.
Nor is the Gab system prompt, although that is not a surprise. And several more.
On the plus side, my 80,000 Hours podcast finally saw the light of day, and Ezra Klein had an excellent (although troubling) podcast with Dario Amodei. And we got the usual mix of incremental useful improvements and other nice touches.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Ask all your stupid questions.
4. Language Models Don’t Offer Mundane Utility. That won’t stop social media.
5. Oh the Humanity. It will, however, stop the Humane AI pin, at least for now.
6. GPT-4 Real This Time. The new version continues to look slightly better.
7. Fun With Image Generation. There is remarkably little porn of it.
8. Deepfaketown and Botpocalypse Soon. Audio plus face equals talking head.
9. Devin in the Details. To what extent was the Devin demo a fake?
10. Another Supposed System Prompt. The gift of Gab. Not what we wanted.
11. They Took Our Jobs. A model of firm employment as a function of productivity.
12. Introducing. The quest to make context no longer be that which is scarce.
13. In Other AI News. Respecting and disrespecting the rules of the game.
14. Quiet Speculations. Spending some time wondering whether you should.
15. The Quest for Sane Regulations. Senators get serious, Christiano is appointed.
16. The Week in Audio. I spend 3 |
da9d3033-6984-433d-99ac-a2b4ecfe3007 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AISafety.info "How can I help?" FAQ
AI existential risk has been in the news recently. A lot of people have gotten interested in the problem and some want to know what they can do to help. Additionally, other existing routes to getting advice are getting overwhelmed, like [AI Safety Support](https://www.aisafetysupport.org/), [80,000 Hours](https://80000hours.org/speak-with-us/), [AGI Safety Fundamentals](https://aisafetyfundamentals.com/), [AI Safety Quest](https://aisafety.quest/), etc. With this in mind, we’ve created a [new FAQ](https://aisafety.info?state=8TJV_) as a part of [Stampy’s AI Safety Info](http://aisafety.info), based mostly on ideas from [plex](https://www.lesswrong.com/users/ete), [Linda Linsefors](https://www.lesswrong.com/users/linda-linsefors), and [Severin Seehrich](https://www.lesswrong.com/users/sts). We're continuing to improve these articles and we welcome feedback.
By starting at the [root of the tree](https://aisafety.info?state=8TJV_) and clicking on the articles at the bottom of each article, you can navigate to the article that most applies to your situation. It branches out into the rest of AISafety.info as well.

Or you can just look at the full list here:
* [I’m convinced that AI existential safety is important and want to contribute. What can I do to help?](https://aisafety.info?state=8TJV_)
+ [I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?](https://aisafety.info?state=6474_)
+ [I would like to focus on AI alignment, but it might be best to prioritize improving my life situation first. What should I do?](https://aisafety.info?state=8U30_)
+ [I want to take big steps to contribute to AI alignment (e.g. making it my career). What should I do?](https://aisafety.info?state=8U32_)
- [How can I do conceptual, mathematical, or philosophical work on AI alignment?](https://aisafety.info?state=8UMA_)
- [How can I use a background in the social sciences to help with AI alignment?](https://aisafety.info?state=8U2S_)
- [How can I do organizational or operations work around AI alignment?](https://aisafety.info?state=8U2P_)
- [How can I work on AGI safety outreach in academia and among experts?](https://aisafety.info?state=8U2R_)
- [How can I work on public AI safety outreach?](https://aisafety.info?state=8U2Q_)
- [How can I work on AI policy?](https://aisafety.info?state=8U2Z_)
- [I’m interested in providing significant financial support to AI alignment. How should I go about this?](https://aisafety.info?state=8U2Y_)
- [How can I work on assessing AI alignment projects and distributing grants?](https://aisafety.info?state=8U2X_)
- [How can I work on helping AI alignment researchers be more effective, e.g. as a coach?](https://aisafety.info?state=8U2W_)
- [What should I do with my idea for helping with AI alignment?](https://aisafety.info?state=8U2V_)
- [What subjects should I study at university to prepare myself for alignment research?](https://aisafety.info?state=7763_)
- [I’d like to do experimental work (i.e. ML, coding) for AI alignment. What should I do?](https://aisafety.info?state=8U2O_)
* [What should I do with my machine learning research idea for AI alignment?](https://aisafety.info?state=8U2M_)
* [Who should I talk to about my non-research AI alignment coding project idea?](https://aisafety.info?state=8U2K_)
* [How can I work toward AI alignment as a software engineer?](https://aisafety.info?state=8U2J_)
* [How can I do machine learning programming work to help with AI alignment?](https://aisafety.info?state=8U2I_) |
e8a287ea-d085-4b00-b4ab-3937702e4e57 | StampyAI/alignment-research-dataset/arbital | Arbital | Strictly confused
A hypothesis is "strictly confused" by the data if the hypothesis does much worse at predicting the data than it expected to do. If, on average, you expect to assign around 1% likelihood to the exact observation you see, and you actually see something to which you assigned 0.000001% likelihood, you are strictly confused.
I.e., letting $H$ be a hypothesis and $e_0$ be the data observed from some set $E$ of possible observations, we say that $H$ is "strictly confused" when
$$ \log \mathbb P(e_0 \mid H) \ll \sum_{e \in E} \mathbb P(e \mid H) \cdot \log \mathbb P(e \mid H)$$
# Motivation and examples
In Bayesian reasoning, the main reason to reject a hypothesis is when we find a better hypothesis. Suppose we think a coin is fair, we flip it 100 times, and the coin comes up "HHHHHHH...", all heads. After those 100 flips, the hypothesis "This is a double-headed coin" has a likelihood ratio of $2^{100} : 1$ favoring it over the "fair coin" hypothesis, and the "double-headed coin" hypothesis isn't *more* improbable than $2^{-100}$ a priori.
But this relies on the insight that there's a simple / a priori plausible *alternative* hypothesis that does better. What if the coin is producing TTHHTTHHTTHH and we just never happen to think of 'alternating pairs of tails and heads' as a hypothesis? It's possible to do better by thinking of a better hypothesis, but so far as the 'fair coin' hypothesis sees the world, TTHHTTHH... is no more or less likely than any other possible sequence it could encounter; the first eight coinflips have a probability of $2^{-8}$ and this would have been true no matter which eight coinflips were observed. After observing 100 coinflips, the fair coin hypothesis will assign them a collective probability of $2^{-100},$ and in this sense, no sequence of 100 coinflips is any more 'surprising' or 'confusing' than any other from *within* the perspective of the fair coin hypothesis.
We can't say that we're 'confused' or 'surprised' on seeing a long sequence of coinflips to which we assigned some very low probability on the order of $2^{-100} \approx 10^{-30},$ because we expected to assign a probability that low.
On the other hand, suppose we think that a coin is biased to produce 90% heads and 10% tails, and we flip it 100 times and get some fair-looking sequence like "THHTTTHTTTTHTHTHHH..." (courtesy of random.org). Then we *expected* to assign the observed sequence a probability in the range of $0.9^{90} \cdot 0.1^{10} \approx 7\cdot 10^{-15},$ but we *actually* saw a sequence we assigned probability around $0.9^{50} \cdot 0.1^{50} \approx 5 \cdot 10^{-53}.$ We don't need to consider any other hypotheses to realize that we are very confused. We don't need to have *invented* the concept of a 'fair coin', or know that the 'fair coin' hypothesis would have assigned a much higher likelihood in the region of $7 \cdot 10^{-31},$ to realize that there's something wrong with the current hypothesis.
In the case of the supposed fair coin that produces HHHHHHH, we only do poorly relative to a better hypothesis 'all heads' that makes a superior prediction. In the case of the supposed 90%-heads coin that produces a random-looking sequence, we do worse than we expected to do from inside the 90%-heads hypothesis, so we are doing poorly in an absolute, non-relative sense.
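To make these numbers concrete, here is a minimal Python sketch (an added illustration, not part of the original text; a randomly generated fair-looking sequence stands in for the one above). It compares the log-likelihood the 90%-heads hypothesis *expected* to assign to 100 flips with the log-likelihood it *actually* assigns; the large gap is exactly what "strictly confused" means.

```python
import math
import random

def log_likelihood(seq, p_heads):
    """Log-probability the hypothesis assigns to this exact sequence of 'H'/'T' flips."""
    return sum(math.log(p_heads if c == 'H' else 1.0 - p_heads) for c in seq)

def expected_log_likelihood(n_flips, p_heads):
    """Log-probability the hypothesis expects to assign, i.e. minus n times its own entropy."""
    entropy = -(p_heads * math.log(p_heads) + (1 - p_heads) * math.log(1 - p_heads))
    return -n_flips * entropy

random.seed(0)
fair_looking = ''.join(random.choice('HT') for _ in range(100))  # roughly 50/50 flips (made-up data)

expected = expected_log_likelihood(100, 0.9)   # about -32.5 nats, i.e. the log of ~7e-15
actual = log_likelihood(fair_looking, 0.9)     # roughly -110 to -125 nats for a ~50/50 sequence
print(f"expected: {expected:.1f}   actual: {actual:.1f}")
# The hypothesis does dozens of nats worse than it expected to do: it is strictly confused.
```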
Being strictly confused is a sign that tells us to look for *some* alternative hypothesis in advance of our having any idea whatsoever what that alternative hypothesis might be.
# Distinction from frequentist p-values
The classical frequentist test for rejecting the null hypothesis involves considering the probability assigned to particular 'obvious'-seeming partitions of the data, and asking if we ended up inside a low-probability partition.
Suppose you think some coin is fair, and you flip the coin 100 times and see a random-looking sequence "THHTTTHTT..."
Someone comes along and says, "You know, this result is very surprising, given your 'fair coin' theory. You really didn't expect that to happen."
"How so?" you reply.
They say, "Well, among all sequences of 1000 coins, only 1 in 16 such sequences start with a string like THHT TTHTT, a palindromic quartet followed by a palindromic quintet. You confidently predicted that had a 15/16 chance of *not* happening, and then you were surprised."
"Okay, look," you reply, "if you'd written down that *particular* prediction in advance and not a lot of others, I might be interested. Like, if I'd already thought that way of partitioning the data — namely, 'palindrome quartet followed by palindrome quintet' vs. '*not* palindrome quartet followed by palindrome quintet' — was a specially interesting and distinguished one, I might notice that I'd assigned the second partition 15/16 probability and then it failed to actually happen. As it is, it seems like you're really reaching."
We can think of the frequentist tests for rejecting the fair-coin hypothesis as a *small* set of 'interesting partitions' that were written down in advance, which are supposed to have low probability given the fair coin. For example, if a coin produces HHHHH HTHHH HHTHH, a frequentist says, "*Partitioning by number of heads*, the fair coin hypothesis says that on 15 flips we should get between 3 and 12 heads, inclusive, with a probability of about 99.3%. You are therefore surprised because this event you assigned 99.3% probability failed to happen. And yes, we're just checking the number of heads and a few other obvious things, not for palindromic quartets followed by palindromic quintets."
Part of the point of being a Bayesian, however, is that we try to only reason on the data we actually observed, and not put that data into particular partitions and reason about those partitions. The partitioning process introduces potential subjectivity, especially in an academic setting fraught with powerful incentives to produce 'statistically significant' data - the equivalent of somebody insisting that palindromic quartets and quintets are special, or that counting heads isn't special.
E.g., if we flip a coin six times and get HHHHHT, this is "statistically significant p < 0.05" if the researcher decided to flip coins until they got at least one T and then stop, in which case a fair coin has only a 1/32 probability of requiring six or more steps to produce a T. If on the other hand the researcher decided to flip the coin six times and then count the number of tails, the probability of getting 1 or fewer T in six flips is 7/64 which is not 'statistically significant'.
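As a quick check of those two numbers (a small sketch added here for illustration), the same six flips get different p-values under the two stopping rules:

```python
from fractions import Fraction
from math import comb

# Data: HHHHHT. Two stopping rules, two different p-values for the same flips.

# Rule 1: flip until the first tail appears. "Needing six or more flips" means
# the first five flips were all heads.
p_stop_at_first_tail = Fraction(1, 2) ** 5                     # 1/32 ≈ 0.031  (< 0.05)

# Rule 2: flip exactly six times and count tails. Probability of one or fewer
# tails in six flips of a fair coin.
p_fixed_six_flips = Fraction(comb(6, 0) + comb(6, 1), 2 ** 6)  # 7/64 ≈ 0.109  (not < 0.05)

print(p_stop_at_first_tail, p_fixed_six_flips)
```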
The Bayesian says, "If I use [the Rule of Succession](https://arbital.com/p/21c) to denote the hypothesis that the coin has an unknown bias between 0 and 1, then the sequence HHHHHT is assigned 1/42 probability by the Rule of Succession and 1/64 probability by 'fair coin', so this is evidence with a likelihood ratio of ~1.5 : 1 favoring the hypothesis that the coin is biased - not enough to [overcome](https://arbital.com/p/) any significant [prior improbability](https://arbital.com/p/)."
The Bayesian arrives at this judgment by only considering the particular, exact data that was observed, and not any larger partitions of data. To compute the probability flow between two hypotheses $H_1$ and $H_2$ we only need to know the likelihoods of our *exact* observation given those two hypotheses, not the likelihoods the hypotheses assign to any partitions into which that observation can be put, etcetera.
Similarly, the Bayesian looks at the sequence HHHHH HTHHH HHTHH and says: this specific, exact data that we observed gives us a likelihood ratio of (1/1680 : 1/32768) ~ (19.5 : 1) favoring "[The coin has an unknown bias](https://arbital.com/p/21c) between 0 and 1" over "The coin is fair". With that already said, the Bayesian doesn't see any need to talk about the total probability of the fair coin hypothesis producing data inside a partition of similar results that could have been observed but weren't.
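Here is a short sketch (added for illustration; it assumes the uniform-prior reading of the Rule of Succession, under which an exact sequence of $n$ flips with $k$ heads has probability $1 / ((n+1)\binom{n}{k})$) that reproduces both likelihood ratios above:

```python
from fractions import Fraction
from math import comb

def rule_of_succession_prob(seq):
    """P(exact H/T sequence) under a uniform prior on the coin's bias: 1 / ((n+1) * C(n, k))."""
    n, k = len(seq), seq.count('H')
    return Fraction(1, (n + 1) * comb(n, k))

def fair_coin_prob(seq):
    return Fraction(1, 2 ** len(seq))

for seq in ["HHHHHT", "HHHHHHTHHHHHTHH"]:   # the two sequences discussed above
    biased = rule_of_succession_prob(seq)
    fair = fair_coin_prob(seq)
    print(seq, biased, fair, round(float(biased / fair), 1))
# HHHHHT:            1/42 vs 1/64,      ratio ~1.5 : 1
# 15-flip sequence:  1/1680 vs 1/32768, ratio ~19.5 : 1
```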
But even though Bayesians usually try to avoid thinking in terms of rejecting a null hypothesis using partitions, saying "I'm strictly confused!" gives a Bayesian a way of saying "Well, I know *something's* wrong..." that doesn't require already having the insight to propose a better alternative, or even the insight to realize that some particular partitioning of the data is worth special attention.
d0da5996-2c8c-4ca6-8b36-26dcb0bcfd93 | trentmkelly/LessWrong-43k | LessWrong | Learned Blankness and Expectations of Solubility
Epistemic status: babble
I've observed a difference between the smart and the very smart. It seems like a refinement of learned blankness which I've found useful for spotting and (where appropriate) avoiding it.
----------------------------------------
Most people have large swathes of learned blankness. When I point this out to smart people, one common answer I get is something like "That kind of problem isn't solvable". Believing a solution is impossible, they (quite reasonably) decide not to waste their time looking.
The very smart tend to expect things to be solvable. This doesn't seem to me like an explicit belief so much as a high-level generator or an alief, but it is there. This is the reason people will look at you funny when you say things like "I attached a string to my bedside lamp switch so I can turn it off while in bed" or "Our boiler was making funny noises, and its pressure gauge is outside the range in the manual, so I opened the valve briefly".
An added benefit of the latter approach is that you (hopefully) develop a sense for which types of problems really aren't solvable, leading you to (hopefully) more often tackle truly solvable problems. This should, in turn, improve your solubility-alief to be well-attuned to the world, rather than assuming things-in-general are unsolvable.
----------------------------------------
Unfortunately, there's a pretty obvious confounder of this observation. The smart-but-not-very-smart could just be right. We don't have to presume that solubility-alief causes you to be smarter; it also seems pretty plausible to me that this alief could come about in the already-very-smart because they really do solve problems more often.
Indirect evidence against this confounder: many of the problems solved because of solubility-alief don't seem especially... well, hard. Like clogged drains, it doesn't seem like an IQ of 150 is required to 1. spot how annoying it is to have to get out of bed to turn your lamp off, and |
d51dba20-1756-48fd-abb1-4d5729f364f7 | trentmkelly/LessWrong-43k | LessWrong | Can an LLM identify ring-composition in a literary text? [ChatGPT]
Cross-posted from New Savanna.
Tentatively, very tentatively, yes. ChatGPT has done it, once. I believe a more powerful engine could do more.
But: What is ring-composition?
Quickly and briefly, ring-composition or ring-form is a text with a form like this: A, B, C...X...C’, B’, A’. It’s a text with a central section and the other sections are arrayed symmetrically around. ChatGPT will say a bit more about that later.
Why am I interested in ring-composition? In principle I’m interested in any and all literary form. In practice it is easier to look for a specific kind of formal structure. While some interesting and important texts exhibit ring-composition (e.g. “Kubla Khan,” Heart of Darkness, Hamlet) I have no idea how prevalent it is, nor does anyone else. I suspect it’s a minority form, perhaps even a small minority.
I have come to think of literary study as something like biology: it all starts with description. Biologists have spent centuries seeking out and describing life forms. Some have been described in great detail, others only enough to place them in the taxonomy, and we have all degrees of description in between. Well, literature is like biology in that we have a myriad of objects for study, each with unique features. But literary scholars haven’t undertaken the work of describing our texts. Oh, there’s some work, but little as deep and rich as what biologists have done in their domain – I give links below to material that justifies that claim.
I am particularly interested in form. There's an ineradicably subjective element to meaning, but form, I believe, can be objectively described. I've done enough descriptive work over the years to know that it is painstaking, difficult, and tedious, but not rocket science. It would be useful if we could present our texts to an LLM and have it undertake the difficult tedium of describing texts. Maybe even have two or three different LLMs describe particular texts independently. Then, just as biologists check on
4b862a0c-058b-428c-b032-206fe50c8f52 | trentmkelly/LessWrong-43k | LessWrong | What would you do with a solution to 3-SAT?
Many experts suspect that there is no polynomial-time solution to the so-called NP-complete problems, though no-one has yet been able to rigorously prove this and there remains the possibility that a polynomial-time algorithm will one day emerge. However unlikely this is, today I would like to invite LW to play a game I played with some colleagues called what-would-you-do-with-a-polynomial-time-solution-to-3SAT? 3SAT is, of course, one of the most famous of the NP-complete problems and a solution to 3SAT would also constitute a solution to *all* the problems in NP. This includes lots of fun planning problems (e.g. travelling salesman) as well as the problem of performing exact inference in (general) Bayesian networks. What's the most fun you could have?
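For concreteness, here is a small sketch (added for illustration; the instance below is made up) of what a 3-SAT instance and a "solution" look like. The hypothetical polynomial-time algorithm would replace the exponential search loop:

```python
from itertools import product

# A 3-SAT instance as clauses of literals: literal k means "variable |k| is True"
# if k > 0, and "variable |k| is False" if k < 0.  (Made-up example instance.)
clauses = [(1, 2, -3), (-1, 3, 4), (-2, -3, -4), (1, -2, 4)]
n_vars = 4

def satisfies(assignment, clauses):
    """Every clause must contain at least one true literal (checking this is the easy part)."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses)

def brute_force_3sat(clauses, n_vars):
    """Try all 2^n assignments -- the exponential step a poly-time 3-SAT algorithm would remove."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if satisfies(assignment, clauses):
            return assignment
    return None

print(brute_force_3sat(clauses, n_vars))
```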
2be3cbc8-86a7-4eca-93ca-4548570f5f50 | trentmkelly/LessWrong-43k | LessWrong | Sam Altman's sister claims Sam sexually abused her -- Part 7: Timeline, continued
Previous posts (which you should read first)
This post is the 7th post in a series of 11 posts about the claims of Sam Altman's sister, Annie Altman. Annie has claimed that Sam sexually abused her for about 9 years as a child, and that she experienced further (non-sexual) abuse from Sam, her brothers, and her mother after that.
The 11 posts are meant to be read in order.
So, if you haven't read the first 6 posts, please read them, in order, before you read this post:
* Sam Altman's sister claims Sam sexually abused her -- Part 1: Introduction, outline, author's notes
* Sam Altman's sister claims Sam sexually abused her -- Part 2: Annie's lawsuit; the response from Sam, his brothers, and his mother; Timeline
* Sam Altman's sister claims Sam sexually abused her -- Part 3: Timeline, continued
* Sam Altman's sister claims Sam sexually abused her -- Part 4: Timeline, continued
* Sam Altman's sister claims Sam sexually abused her -- Part 5: Timeline, continued
* Sam Altman's sister claims Sam sexually abused her -- Part 6: Timeline, continued continued
----------------------------------------
Timeline, continued continued
Late 2020: Sam purchases a $15.7 million ranch home in Napa, California [BI23a].
Annie, unable to afford a stable place to live, experiences a long period of housing insecurity, at times living with strangers from the Internet, sleeping on the floor, and living in numerous places with no running water or electricity.
Ellen Huet {see [BB24d]}: "{Annie} also did in-person sex work for two years. She says she didn't want to, but it was the work that she was able to fit into her unpredictable schedule of dealing with her health issues. Her lack of stable income, led to a long period of housing insecurity. At times, she lived with sex work clients, or even with strangers from the internet. Her sex work contributed to her precarious housing. She didn't have pay stubs or regular income, which limited the kind of leases she could get. It |
99c1e5cc-3765-48d6-bf85-ad38ad193474 | trentmkelly/LessWrong-43k | LessWrong | My Current Take on Counterfactuals
[Epistemic status: somewhat lower confidence based on the fact that I haven't worked out a detailed theory along the lines I've suggested, yet.]
I've felt like the problem of counterfactuals is "mostly settled" (modulo some math working out) for about a year, but I don't think I've really communicated this online. Partly, I've been waiting to write up more formal results. But other research has taken up most of my time, so I'm not sure when I would get to it.
So, the following contains some "shovel-ready" problems. If you're convinced by my overall perspective, you may be interested in pursuing some of them. I think these directions have a high chance of basically solving the problem of counterfactuals (including logical counterfactuals).
Another reason for posting this rough write-up is to get feedback: am I missing the mark? Is this not what counterfactual reasoning is about? Can you illustrate remaining problems with decision problems?
I expect this to be much more difficult to read than my usual posts. It's a brain-dump. I make a lot of points which I have not thought through sufficiently. Think of it as a frozen snapshot of a work in progress.
Summary.
1. I can Dutch-book any agent whose subjective counterfactual expectations don't equal their conditional expectations. I conclude that counterfactual expectations should equal conditional probabilities. IE, evidential decision theory (EDT) gives the correct counterfactuals.
2. However, the Troll Bridge problem is real and concerning: EDT agents are doing silly things here.
3. Fortunately, there appear to be ways out. One way out is to maintain that subjective counterfactual expectations should equal conditional expectations while also maintaining a distinction between those two things: counterfactuals are not computed from conditionals. As we shall see, this allows us to ensure that the two are always equal in real situations, while strategically allowing them to differ in some hypothetical situations (s |
af54411a-0057-4063-8770-84696eb46b0f | trentmkelly/LessWrong-43k | LessWrong | [Link] 3 Short Walking Breaks Can Reverse Harm From 3 Hours of Sitting
I found the below link which is in the spirit of Lifestyle interventions to increase longevity:
3 Short Walking Breaks Can Reverse Harm From 3 Hours of Sitting
The /.-summary:
> Medical researchers have been steadily building evidence that prolonged sitting is awful for your health. One major problem is that blood can pool in the legs of a seated person, causing arteries to start losing their ability to control the rate of blood flow. A new experimental study (abstract) has discovered it's quite easy to negate these detrimental health effects: all you need to do is take a leisurely, 5-minute walk for every hour you sit. "The researchers were able to demonstrate that during a three-hour period, the flow-mediated dilation, or the expansion of the arteries as a result of increased blood flow, of the main artery in the legs was impaired by as much as 50 percent after just one hour. The study participants who walked for five minutes for each hour of sitting saw their arterial function stay the same — it did not drop throughout the three-hour period. Thosar says it is likely that the increase in muscle activity and blood flow accounts for this."
One way to incorporate this into ones habits is to use WorkRave.
|
47fdd170-d4c0-43b4-b549-3c0e7c3d3b42 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | International AI Institutions: a literature review of models, examples, and proposals
The Legal Priorities Project has published a new report ([link](https://www.legalpriorities.org/research/international-ai-institutions), [PDF](https://www.legalpriorities.org/documents/Maas%20-%20Villalobos%20-%20International%20AI%20Institutions.pdf), [SSRN](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4579773)) surveying models, different examples, and proposals for international institutions for AI governance.
This literature review examines a range of institutional models that have been proposed over the years for the international governance of AI. The review specifically focuses on proposals that would involve the creation of new international institutions for AI. As such, it focuses on seven models for international AI institutions with distinct functions. These models are:
1. Scientific consensus building
2. Political consensus-building and norm-setting
3. Coordination of policy and regulation
4. Enforcement of standards or restrictions
5. Stabilization and emergency response
6. International joint research
7. Distribution of benefits and access
Part I consists of the literature review. For each model, we provide (i) a description of each model’s functions and types; (ii) the most common examples of each model; (iii) some under-explored examples that are not (often) mentioned in the AI governance literature but that show promise; (iv) a review of proposals for the application of that model to the international regulation of AI; and (v) critiques of the model both generally and in its potential application to AI.
Part II briefly discusses some considerations for further research concerning the design of international institutions for AI, including the effectiveness of each model at accomplishing its aims; treaty-based regulatory frameworks; other institutional models not covered in this review; the compatibility of institutional functions; and institutional options to host a new international AI governance body.
Overall, the review covers seven institutional models, as well as more than thirty-three commonly invoked examples of those models, twenty-two additional examples, and forty-seven proposals of new AI institutions based on those models. Table 1 summarizes these findings.
*Table 1: Overview of institutional models, examples, and proposed institutions surveyed*
| Model | Common examples | Under-explored examples | Proposed AI institutions |
| --- | --- | --- | --- |
| **Scientific consensus-building** | IPCC; IPBES; SAP | CEP; WMO | IPAI; Commission on Frontier AI; Intergovernmental Panel on Information Technology |
| **Political consensus-building and norm-setting** | COPs (e.g. UNFCCC COP); OECD; G20; G7; ISO; IEC; ITU; various soft law instruments | Lysøen Declaration; Codex Alimentarius Commission; BRICS | IAIO; Emerging Technology Coalition; IAAI; Data Governance Structure; Data Stewardship Organization; International Academy for AI Law and Regulation |
| **Coordination of policy and regulation** | WTO; ICAO; IMO; IAEA; FATF; UNEP | ILO; UNESCO; EMEP; World Bank; IMF; WSIS | Advanced AI Governance Organisation; IAIO; EU AI Agency; GAIA; Generative AI global governance body; Coordinator and Catalyser of International AI Law |
| **Enforcement of standards or restrictions** | IAEA (Department of Safeguards); Nuclear Suppliers Group; Wassenaar Arrangement; Missile Technology Control Regime; Open Skies Consultative Commission; Atomic Development Authority | OPCW; BWC Implementation Unit; IMO; CITES Secretariat | UN AI control agency; Global watchdog agency; International Enforcement Agency; Emerging Technologies Treaty; IAIA (multiple); UN Framework Convention on AI (UNFCAI) & Protocol on AI, supported by Intergovernmental Panel on AI, AI Global Authority, and supervisory body; Advanced AI Governance Organization; AIEA for Superintelligence; NPT+; Multilateral AI governance initiative; International AI Safety Agency; Advanced AI chips registry; Code of conduct for state behavior; AI CBMs; Open Skies for AI; Bilateral US-China regime |
| **Stabilization and emergency response** | FSB; UNDRR | WHO; IAEA | Global Foresight Observatory; Geotechnology Stability Board |
| **International joint research** | CERN; ITER; ISS; Human Genome Project; Atomic Development Authority (proposed) | James Webb Telescope; LIGO | AI Safety Project; Clearinghouse for research into AI; Benevolent AGI Treaty; Multilateral Artificial Intelligence Research Institute (MAIRI); Neutral hub for AI research; UN AI Research Organization (UNAIRO); CERN for AI; International supercomputing research facility; Joint international AI project; Multilateral AGI Consortium; European Artificial Intelligence megaproject |
| **Distribution of benefits and access** | Gavi, the Vaccine Alliance; Global Fund to Fight AIDS, Tuberculosis and Malaria; IAEA (nuclear fuel bank) | ABS Clearing-House; UN Climate Technology Centre and Network; UNIDO | International Digital Democracy Initiative; Frontier AI Collaborative; Institution analogous to the IAEA; Fair and Equitable Benefit Sharing Model |
d99bb892-3a7c-4dd3-afbc-e4f8b59a9d5b | trentmkelly/LessWrong-43k | LessWrong | Australian Rationalist in America
Hi LessWrong! I'm a LWer from Melbourne, Australia, and I'm taking a 3 month road trip (with a friend) through parts of the United States. I figure I'd enjoy hanging out with some fellow rationalists while I'm over here!
I attended the May Rationality minicamp in San Francisco (and made some friends who I'm hoping to meet up with again), but I've also heard good things about the LessWrong groups all over the United States. I'd like to meet some of the awesome people involved in these communities!
We've been planning this trip for a while now and have accommodation pretty much everywhere except for the second half of San Francisco.
Itinerary
* 17th-21st Nov - Los Angeles, CA
* 21st-28th Nov - San Francisco, CA
* 28th Nov-1st Dec - Las Vegas, NV
* 2nd-3rd Dec - Flagstaff, AZ
* 3rd-7th Dec - Phoenix, AZ
* 7th-9th Dec - Santa Fe, NM
* 9th-10th Dec - El Paso, TX
* 10th-13th Dec - San Antonio, TX
* 13th-21st Dec - Austin, TX
* 21st-26th Dec - Dallas, TX
* 26th-29th Dec - San Antonio, TX
* 29th Dec-2nd Jan - New York City, NY
* 2nd-3rd Jan - San Antonio, TX
* 3rd-6th Jan - Houston, TX
* 6th-9th Jan - New Orleans, LA
* 9th-12th Jan - Memphis, TN
* 12th-15th Jan - Nashville, TN
* 15th-18th Jan - Atlanta, GA
* 18th-22nd Jan - Miami, FL
* 22nd-26th Jan - Orlando, FL
* 26th Jan-1st Feb - Washington DC
* 1st-4th Feb - Philadelphia, PA
* 4th-6th Feb - New York City, NY
* 6th-9th Feb - Mount Snow, VT
* 9th-13th Feb - Boston, MA
* 13th-15th Feb - New York City, NY
* 15th-26th Feb - Columbus, OH
If you're in one of these locations when I am, contact me! Either ahead of time or at short notice is fine. I'll be checking meetup posts and mailing lists for events that I can make it to as well, but if you happen to know of an event or meetup happening that fits the schedule, feel free to let me know in the comments.
Message or call me on 4242 394 657, email me at shokwave.sf@gmail.com - or you can leave a toplevel comment on this post, or message my LW |
8e5fd10e-0663-4e1a-a809-7bb01ff01eb4 | trentmkelly/LessWrong-43k | LessWrong | Experimental Open Thread April 2019: Socratic method
This post was popular, but the idea never got picked up. Let's have an experimental open thread this month!
The rules:
> Top level comments would be claims. Second level comments would be discouraged from directly saying that someone is wrong and instead encouraged to ask them questions instead to get them to think
Let top level comments be debatable claims, first tier responses be questions, second tier answers, responses, answers, etc. Try to go as deep as possible, I'd expect an actual update to be increasingly likely to happen as you continue the conversation. |
75f47181-f403-4857-bd1d-e2862ff56026 | trentmkelly/LessWrong-43k | LessWrong | An online prediction market with reputation points
This is an advert for an online prediction market in reputation points I’m making.
It’s still in development, but currently you can sign up with Facebook and trade in two markets I’ve set up. The current scores are provisional - I’ll reset everyone’s point balance at some point if this is successful, so don’t worry about making bad trades.
I largely followed a proposal by Paul Christiano, but I haven't implemented all his ideas yet.
I'd be grateful if people could make some trades so that I can check everything's working. I’d also like to hear people’s thoughts on this. In particular:
* Would you and people you know use this? If not, why not?
* How can I improve it? What features would you want? |
22be46e9-ca3b-44c0-bbe9-42158b4d0c1f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #167]: Concrete ML safety problems and their relevance to x-risk
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-167)** (may not be up yet).
Please note that, while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
==========
**[Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916)** *(Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt)* (summarized by Dan Hendrycks): To make the case for safety to the broader machine learning research community, this paper provides a revised and expanded collection of concrete technical safety research problems, namely:
1. Robustness: Create models that are resilient to adversaries, unusual situations, and Black Swan events.
2. Monitoring: Detect malicious use, monitor predictions, and discover unexpected model functionality.
3. Alignment: Build models that represent and safely optimize hard-to-specify human values.
4. External Safety: Use ML to address risks to how ML systems are handled, including cyberwarfare and global turbulence.
Throughout, the paper attempts to clarify the problems’ motivation and provide concrete project ideas.
**Dan Hendrycks' opinion:** My coauthors and I wrote this paper with the ML research community as our target audience. Here are some thoughts on this topic:
1. The document includes numerous problems that, if left unsolved, would imply that ML systems are unsafe. We need the effort of thousands of researchers to address all of them. This means that the main safety discussions cannot stay within the confines of the relatively small EA community. I think we should aim to have over one third of the ML research community work on safety problems. We need the broader community to treat AI safety at least as seriously as safety for nuclear power plants.
2. To grow the ML safety research community, we need to suggest problems that can progressively build the community and organically grow support for elevating safety standards within the existing research ecosystem. Research agendas that pertain to AGI exclusively will not scale sufficiently, and such research will simply not get enough market share in time. If we do not get the machine learning community on board with proactively mitigating risks that already exist, we will have a harder time getting them to mitigate less familiar and unprecedented risks. Rather than try to win over the community with alignment philosophy arguments, I'll try winning them over with interesting problems and try to make work towards safer systems rewarded with prestige.
3. The benefits of a larger ML safety community are numerous. They can decrease the cost of safety methods and increase the propensity to adopt them. Moreover, to ensure that ML systems have desirable properties, it is necessary to rapidly accumulate incremental improvements, but this requires substantial growth since such gains cannot be produced by just a few card-carrying x-risk researchers with the purest intentions.
4. The community will fail to grow if we ignore near-term concerns or actively exclude or sneer at people who work on problems that are useful for both near- and long-term safety (such as adversaries). The alignment community will need to stop engaging in textbook territorialism and welcome serious hypercompetent researchers who do not post on internet forums or who happen not to subscribe to effective altruism. (We include a community strategy in the Appendix.)
5. We focus on reinforcement learning but also deep learning. Most of the machine learning research community studies deep learning (e.g., text processing, vision) and does not use, say, Bellman equations or PPO. While existentially catastrophic failures will likely require competent sequential decision-making agents, the relevant problems and solutions can often be better studied outside of gridworlds and MuJoCo. There is much useful safety research to be done that does not need to be cast as a reinforcement learning problem.
6. To prevent alienating readers, we did not use phrases such as "AGI." AGI-exclusive research will not scale; for most academics and many industry researchers, it's a nonstarter. Likewise, to prevent needless dismissiveness, we kept x-risks implicit, only hinted at them, or used the phrase "permanent catastrophe."
I would have personally enjoyed discussing at length how anomaly detection is an indispensable tool for reducing x-risks from **[Black Balls](https://www.nickbostrom.com/papers/vulnerable.pdf)**, engineered microorganisms, and deceptive ML systems.
Here are how the problems relate to x-risk:
Adversarial Robustness: This is needed for proxy gaming. ML systems encoding proxies must become more robust to optimizers, which is to say they must become more adversarially robust. We make this connection explicit at the bottom of page 9.
Black Swans and Tail Risks: It's hard to be safe without high reliability. It's not obvious we'll achieve high reliability even by the time we have systems that are superhuman in important respects. Even though MNIST is solved for typical inputs, we still do not even have an MNIST classifier for atypical inputs that is reliable! Moreover, if optimizing agents become unreliable in the face of novel or extreme events, they could start heavily optimizing the wrong thing. Models accidentally going off the rails poses an x-risk if they are sufficiently powerful (this is related to "competent errors" and "treacherous turns"). If this problem is not solved, optimizers can use these weaknesses; this is a simpler problem on the way to adversarial robustness.
Anomaly and Malicious Use Detection: This is an indispensable tool for detecting proxy gaming, **[Black Balls](https://www.nickbostrom.com/papers/vulnerable.pdf)**, engineered microorganisms that present bio x-risks, malicious users who may misalign a model, deceptive ML systems, and rogue ML systems.
Representative Outputs: Making models honest is a way to avoid many treacherous turns.
Hidden Model Functionality: This also helps avoid treacherous turns. Backdoors is a potentially useful related problem, as it is about detecting latent but potential sharp changes in behavior.
Value Learning: Understanding utilities is difficult even for humans. Powerful optimizers will need to achieve a certain, as-of-yet unclear level of superhuman performance at learning our values.
Translating Values to Action: Successfully prodding models to optimize our values is necessary for safe outcomes.
Proxy Gaming: Obvious.
Value Clarification: This is the philosophy bot section. We will need to decide what values to pursue. If we decide poorly, we may lock in or destroy what is of value. It is also possible that there is an ongoing moral catastrophe, which we would not want to replicate across the cosmos.
Unintended Consequences: This should help models not accidentally work against our values.
ML for Cybersecurity: If you believe that AI governance is valuable and that global turbulence risks can increase risks of terrible outcomes, this section is also relevant. Even if some of the components of ML systems are safe, they can become unsafe when traditional software vulnerabilities enable others to control their behavior. Moreover, traditional software vulnerabilities may lead to the proliferation of powerful advanced models, and this may be worse than proliferating nuclear weapons.
Informed Decision Making: We want to avoid decision making based on unreliable gut reactions during a time of crisis. This reduces risks of poor governance of advanced systems.
Here are some other notes:
1. We use systems theory to motivate inner optimization as we expect this motivation will be more convincing to others.
2. Rather than having a broad call for "interpretability," we focus on specific transparency-related problems that are more tractable and neglected. (See the Appendix for a table assessing importance, tractability, and neglectedness.) For example, we include sections on making models honest and detecting emergent functionality.
3. The "External Safety" section can also be thought of as technical research for reducing "Governance" risks. For readers mostly concerned about AI risks from global turbulence, there still is technical research that can be done.
Here are some observations while writing the document:
1. Some approaches that were previously very popular are currently neglected, such as inverse reinforcement learning. This may be due to currently low tractability.
2. Five years ago, I started explicitly brainstorming the content for this document. I think it took the whole time for this document to take shape. Moreover, if this were written last fall, the document would be far more confused, since it took around a year after GPT-3 to become reoriented; writing these types of documents shortly after a paradigm shift may be too hasty.
3. When collecting feedback, it was not uncommon for "in-the-know" researchers to make opposite suggestions. Some people thought some of the problems in the Alignment section were unimportant, while others thought they were the most critical. We attempted to include most research directions.
**[[MLSN #1]: ICLR Safety Paper Roundup](https://www.alignmentforum.org/posts/8Gv5zSCnGeLxK5FAF/mlsn-1-iclr-safety-paper-roundup)** *(Dan Hendrycks)* (summarized by Rohin): This is the first issue of the ML Safety Newsletter, which is "a monthly safety newsletter which is designed to cover empirical safety research and be palatable to the broader machine learning research community".
**Rohin's opinion:** I'm very excited to see this newsletter: this is a category of papers that I want to know about and that are relevant to safety, but I don't have the time to read all of these papers given all the other alignment work I read, especially since I don't personally work in these areas and so often find it hard to summarize them or place them in the appropriate context. Dan on the other hand has written many such papers himself and generally knows the area, and so will likely do a much better job than I would. I recommend you subscribe, especially since I'm not going to send a link to each MLSN in this newsletter.
TECHNICAL AI ALIGNMENT
======================
TECHNICAL AGENDAS AND PRIORITIZATION
------------------------------------
**[Selection Theorems: A Program For Understanding Agents](https://www.alignmentforum.org/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents)** *(John Wentworth)* (summarized by Rohin): This post proposes a research area for understanding agents: **selection theorems**. A selection theorem is a theorem that tells us something about agents that will be selected for in a broad class of environments. Selection theorems are helpful because (1) they can provide additional assumptions that can help with learning human values, and (2) they can tell us likely properties of the agents we build by accident (think inner alignment concerns).
As an example, **[coherence arguments](https://www.alignmentforum.org/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities)** demonstrate that when an environment presents an agent with “bets” or “lotteries”, where the agent cares only about the outcomes of the bets, then any “good” agent can be represented as maximizing expected utility. (What does it mean to be “good”? This can vary, but one example would be that the agent is not subject to Dutch books, i.e. situations in which it is guaranteed to lose resources.) This can then be turned into a selection argument by combining it with something that selects for “good” agents. For example, evolution will select for agents that don’t lose resources for no gain, so humans are likely to be represented as maximizing expected utility. Unfortunately, many coherence arguments implicitly assume that the agent has no internal state, which is not true for humans, so this argument does not clearly work. As another example, our ML training procedures will likely also select for agents that don’t waste resources, which could allow us to conclude that the resulting agents can be represented as maximizing expected utility, if the agents don't have internal states.
Coherence arguments aren’t the only kind of selection theorem. The **[good(er) regulator theorem](https://www.alignmentforum.org/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem)** (**[AN #138](https://mailchi.mp/e7cec4ed9117/an-138why-ai-governance-should-find-problems-rather-than-just-solving-them)**) provides a set of scenarios under which agents learn an internal “world model”. The **[Kelly criterion](http://www.eecs.harvard.edu/cs286r/courses/fall10/papers/Chapter6.pdf)** tells us about scenarios in which the best (most selected) agents will make bets as though they are maximizing expected log money. These and other examples are described in **[this followup post](https://www.alignmentforum.org/posts/N2NebPD78ioyWHhNm/some-existing-selection-theorems)**.
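As a concrete illustration of the Kelly claim mentioned above (a toy sketch added here, not part of the newsletter): for a repeated bet with win probability p and payout b-to-1, the stake fraction that maximizes expected log wealth is the Kelly fraction p - (1 - p)/b.

```python
from math import log

def expected_log_wealth(f, p, b):
    """Expected log-wealth growth per bet when staking fraction f, win probability p, payout b:1."""
    return p * log(1 + f * b) + (1 - p) * log(1 - f)

p, b = 0.6, 1.0                                 # made-up example: 60% chance to win an even-money bet
grid = [i / 1000 for i in range(990)]           # candidate fractions 0.000 .. 0.989
best_f = max(grid, key=lambda f: expected_log_wealth(f, p, b))

kelly_f = p - (1 - p) / b                       # closed-form Kelly fraction
print(best_f, kelly_f)                          # both come out to 0.2
```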
The rest of this post elaborates on the various parts of a selection theorem and provides advice on how to make original research contributions in the area of selection theorems. Another **[followup post](https://www.alignmentforum.org/posts/RuDD3aQWLDSb4eTXP/what-selection-theorems-do-we-expect-want)** describes some useful properties for which the author expects there are useful selections theorems to prove.
**Rohin's opinion:** People sometimes expect me to be against this sort of work, because I wrote **[Coherence arguments do not imply goal-directed behavior](https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior)** (**[AN #35](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)**). This is not true. My point in that post is that coherence arguments *alone* are not enough, you need to combine them with some other assumption (for example, that there exists some “resource” over which the agent has no terminal preferences). I do think it is plausible that this research agenda gives us a better picture of agency that tells us something about how AI systems will behave, or something about how to better infer human values. While I am personally more excited about studying particular development paths to AGI rather than more abstract agent models, I do think this research would be more useful than other types of alignment research I have seen proposed.
OTHER PROGRESS IN AI
====================
MISCELLANEOUS (AI)
------------------
**[State of AI Report 2021](https://www.stateof.ai/2021-report-launch.html)** *(Nathan Benaich and Ian Hogarth)* (summarized by Rohin): As with **[past](https://www.stateof.ai/)** (**[AN #15](https://mailchi.mp/4920e52dd61b/alignment-newsletter-15)**) **[reports](https://www.stateof.ai/)** (**[AN #120](https://mailchi.mp/42ec72ef7e11/an-120tracing-the-intellectual-roots-of-ai-and-ai-alignment)**), I’m not going to summarize the entire thing; instead you get the high-level themes that the authors identified:
1. AI is stepping up in more concrete ways, including in mission critical infrastructure.
2. AI-first approaches have taken biology by storm (and we aren’t just talking about AlphaFold).
3. Transformers have emerged as a general purpose architecture for machine learning in many domains, not just NLP.
4. Investors have taken notice, with record funding this year into AI startups, and two first ever IPOs for AI-first drug discovery companies, as well as blockbuster IPOs for data infrastructure and cybersecurity companies that help enterprises retool for the AI-first era.
5. The under-resourced AI-alignment efforts from key organisations who are advancing the overall field of AI, as well as concerns about datasets used to train AI models and bias in model evaluation benchmarks, raise important questions about how best to chart the progress of AI systems with rapidly advancing capabilities.
6. AI is now an actual arms race rather than a figurative one, with reports of recent use of autonomous weapons by various militaries.
7. Within the US-China rivalry, China's ascension in research quality and talent training is notable, with Chinese institutions now beating the most prominent Western ones.
8. There is an emergence and nationalisation of large language models.
**Rohin's opinion:** In **[last year’s report](https://www.stateof.ai/)** (**[AN #120](https://mailchi.mp/42ec72ef7e11/an-120tracing-the-intellectual-roots-of-ai-and-ai-alignment)**), I said that their 8 predictions seemed to be going out on a limb, and that even 67% accuracy woud be pretty impressive. This year, they scored their predictions as 5 “Yes”, 1 “Sort of”, and 2 “No”. That being said, they graded “The first 10 trillion parameter dense model” as “Yes”, I believe on the basis that Microsoft had run a couple of steps of training on a 32 trillion parameter dense model. I definitely interpreted the prediction as saying that a 10 trillion parameter model would be trained *to completion*, which I do not think happened publicly, so I’m inclined to give it a “No”. Still, this does seem like a decent track record for what seemed to me to be non-trivial predictions. This year's predictions seem similarly "out on a limb" as last year's.
This year’s report included one-slide summaries of many papers I’ve summarized before. I only found one major issue -- the slide on **[TruthfulQA](https://arxiv.org/abs/2109.07958)** (**[AN #165](https://mailchi.mp/bfd4b0ddf358/an-165when-large-models-are-more-likely-to-lie)**) implies that larger language models are less honest *in general*, rather than being more likely to imitate human falsehoods. This is actually a pretty good track record, given the number of things they summarized where I would have noticed if there were major issues.
NEWS
====
**[CHAI Internships 2022](https://humancompatible.ai/jobs#chai-internships)** (summarized by Rohin): CHAI internships are open once again! Typically, an intern will execute on an AI safety research project proposed by their mentor, resulting in a first-author publication at a workshop. The early deadline is November 23rd and the regular deadline is December 13th.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/).**
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
79006509-eb58-4cae-8b6a-73184322d190 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Confused why a "capabilities research is good for alignment progress" position isn't discussed more
The predominant view on LW seems to be "pure AI capabilities research is bad, because capabilities progress alone doesn't contribute to alignment progress, and capabilities progress without alignment progress means that we're doomed".
I understand the arguments for this position, but I have what might be called the opposite position. The opposite position seems at least as intuitive as the standard position to me, and it confuses me that it's not discussed more. (I'm not confused that people reject it; I'm confused that nobody seems to even bring it up for the purpose of rejecting it.)
The opposite position is "In order to do alignment research, we need to understand how AGI works; and we currently don't understand how AGI works, so we need to have more capabilities research so that we would have a chance of figuring it out. Doing capabilities research now is good because it's likely to be slower now than it might be in some future where we had even more computing power, neuroscience understanding, etc. than we do now. If we successfully delayed capabilities research until a later time, then we might get a sudden spurt of it and wouldn't have the time to turn our increased capabilities understanding into alignment progress. Thus by doing capabilities research now, we buy ourselves a longer time period in which it's possible to do more effective alignment research."
Some reasons I have for holding this position:
**1)** I used to do AI strategy research. Among other things, I looked into [how feasible it is for intelligence to rapidly turn superintelligent](https://kajsotala.fi/assets/2017/10/how_feasible.pdf), and [what kinds of pathways there are into AI disaster](https://www.lesswrong.com/posts/8uJ3n3hu8pLXC4YNE/some-conceptual-highlights-from-disjunctive-scenarios-of-1). But a thought that I kept having when doing any such research was "I don't know if any of this theory is of any use, because so much depends on what the world will be like when actual AGI is developed, and what that AGI will look in the first place. Without knowing what AGI will look like, I don't know whether any of the assumptions I'm making about it are going to hold. If any one of them fails to hold, the whole paper might turn out to be meaningless."
Eventually, I concluded that I can't figure out a way to make the outputs of strategy research useful for as long as I know as little about AGI as I do. Then I went to do something else with my life, since it seemed too early to do useful AGI strategy research (as far as I could tell).
**2)** Compare the state of AI now, to how it was before the deep learning revolution happened. It seems obvious to me that our current understanding of DL puts us in a better position to do alignment research than we were before the DL revolution. For instance, Redwood Research is doing research on language models because they [believe that their research is analogous to some long-term problems](https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project#Why_we_re_doing_this).
Assume that Redwood Research's work will actually turn out to be useful for aligning superintelligent AI. Language models are one of the results of the DL revolution, so their work couldn't have been done before that revolution. It seems that in a counterfactual world where the DL revolution happened later and the DL era was compressed into a shorter timespan, our chances of alignment would be worse since that world's equivalent of Redwood Research would have less time to do their research.
**3)** As a similar consideration, language models are already "deceptive" in a sense - asked something it has no clue about, InstructGPT will [happily come up with confident-sounding nonsense](https://twitter.com/xuenay/status/1509808406458216449). When I linked people to some of that nonsense, multiple people pointed out that InstructGPT's answers sound like those of a student who's taking an exam and is asked to write an essay about a topic they know nothing about, but tries to fake it anyway (that is, trying to deceive the examiner).
Thus, even if you are doing pure capabilities research and just want your AI system to deliver people accurate answers, it is *already* the case that you can see a system like InstructGPT "trying to deceive" people. If you are building a question-answering system, you want to build one that people can trust to give accurate answers rather than impressive-sounding bullshit, so you have the incentive to work on identifying and stopping such "deceptive" computations as a capabilities researcher already.
So it has already happened that
* Progress in capabilities research gives us a new concrete example of how e.g. deception manifests in practice, that can be used to develop our understanding of it and develop new ideas for dealing with it.
* Capabilities research reaches a point where even capabilities researchers have a natural reason to care about alignment, reducing the difference between "capabilities research" and "alignment research".
* Thus, our understanding and awareness of deception is likely to improve as we get closer to AGI, and by that time we will have already learned a lot about how deception manifests in simpler systems and how to deal with it, and maybe some of that will suggest principles that generalize to more powerful systems as well.
It's not that I'd put a particularly high probability on InstructGPT by itself leading to any important insights about either deception in particular or alignment in general. InstructGPT is just an instance of something that seems likely to help us understand deception a little bit better. And given that, it seems reasonable to expect that further capabilities development will also give us small insights to various alignment-related questions, and maybe all those small insights will combine to give us the answers we need.
**4)** Still on the topic of deception, there are arguments suggesting that something like GPT will always be "deceptive" for [Goodhart's Law](https://www.lesswrong.com/tag/goodhart-s-law) and [Siren World](https://www.lesswrong.com/posts/nFv2buafNc9jSaxAH/siren-worlds-and-the-perils-of-over-optimised-search) reasons. We can only reward an AI system for producing answers that look good to us, but this incentivizes the system to produce answers that look increasingly good to us, rather than answers that are actually correct. "Looking good" and "being correct" correlate with each other to some extent, but will eventually be pushed apart once there's enough optimization pressure on the "looking good" part.
As such, this seems like an unsolvable problem... *but* at the same time, if you ask me a question, I can have a desire to actually give a correct and useful answer to your question, rather than just giving you an answer that you find maximally compelling. More generally, humans can and *often do* have a genuine desire to help other humans (or even non-human animals) fulfill their preferences, rather than just having a desire to superficially fake cooperativeness.
I'm not sure how this desire works, but I don't think you could train GPT to have it. It looks like some sort of theory of mind is involved in how the goal is defined. If I want to help you fulfill your preferences, then I have a sense of what it would mean for your preferences to be fulfilled, and I can have a goal of optimizing for that (even while I am uncertain of what exactly your preferences *are*).
We don't currently seem to know how to do this kind of a theory of mind, but it can't be *that* much more complicated than other human-level capabilities are, since even many non-human animals seem to have some version of it. Still, I don't think we can yet implement that kind of a theory of mind in any AI system. So we have to wait for our capabilities to progress to the kind of a point where this kind of a capacity becomes possible, and then we can hopefully use that capabilities understanding to solve what looks like a crucial piece of alignment understanding. |
3694bce2-d7a9-4780-b602-64d581fee558 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality
Scott suggests that ranking morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
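A toy sketch of that iterative update (added for illustration, with a made-up cooperation matrix; not from Aaronson's post):

```python
# cooperation[i][j] = 1 means person i cooperates with person j (made-up toy data).
cooperation = [
    [0, 1, 1, 0],   # person 0 cooperates with persons 1 and 2
    [1, 0, 1, 0],   # person 1 cooperates with persons 0 and 2
    [1, 1, 0, 0],   # person 2 cooperates with persons 0 and 1
    [0, 0, 0, 0],   # person 3 cooperates with nobody
]
n = len(cooperation)
credits = [1.0] * n                       # equal "morality starting credits"

for _ in range(100):                      # iterate the update rule until it settles down
    new = [sum(cooperation[i][j] * credits[j] for j in range(n)) for i in range(n)]
    total = sum(new) or 1.0
    credits = [x / total for x in new]    # renormalize each round, as in PageRank

print([round(c, 3) for c in credits])     # approximates the principal eigenvector: [0.333, 0.333, 0.333, 0.0]
```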
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: I am guessing that after judicious application of this algorithm one would end up with the other Scott A's loosely connected components with varying definitions of morality, the Archipelago. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
|
7d69fffd-58e5-4659-8e91-3492a9b2061b | trentmkelly/LessWrong-43k | LessWrong | Whole Bird Emulation requires Quantum Mechanics
Jeffrey Heninger, 14 February 2023
Epistemic status: Written for engagement. More sober analysis coming soon.
> Bird navigation is surprisingly cruxy for the future of AI.
>
> – Zach Stein-Perlman
> This seems pretty wrong.
>
> – Richard Korzekwa
Birds are astonishingly good at navigating, even over thousands of miles. The longest migration routes, those of the Arctic tern, are only limited by the size of the globe. Homing pigeons can return home after being released 1800 km (1100 mi) away. White-crowned sparrows have been able to migrate to their wintering grounds after being displaced 3700 km (2300 mi) shortly before they began migration.
How they do this is not entirely understood. There seem to be multiple cues they respond to, which combine to give them an accurate ‘map’ and ‘compass’. Which cues are most important might be different for different species. Some of these cues include watching the stars & sun, low frequency sounds, long-range smells, and detecting the earth’s magnetic field. This last one is the most interesting. Birds can detect magnetic fields, and there is increasing consensus that the detection mechanism involves quantum mechanics (See Appendix for details).
The result is a precise detector of the magnetic field. It is located in the retina and transferred up the optical nerve to the brain, so birds can ‘see’ magnetic fields. Leaving aside questions like “What is it like to be a [Bird]?”, this result has implications for the difficulty of Whole Bird Emulation (WBE).
WBE is important for understanding the future development of artificial intelligence. If we can put an upper bound on the difficulty of WBE, we have an upper bound on the difficulty of making AI that can do everything a bird can do. And birds can do lots of cool things: they know how to fly, they sing pretty songs, and they even drop nuts in front of cars!
In order to put bounds on WBE, we need to determine how much resolution is needed in order to emulate everything a bird |
6ced9965-d7bb-4134-afcc-ee0c37c14c2b | trentmkelly/LessWrong-43k | LessWrong | Verification Is Not Easier Than Generation In General
People who’ve spent a lot of time thinking about P vs NP often have the intuition that “verification is easier than generation”. It’s easier to verify a solution to some equations than to find a solution. It’s easier to verify a password than to guess it. That sort of thing. The claim that it is easier to verify solutions to such problems than to generate them is essentially the claim that P ≠ NP, a conjecture which is widely believed to be true. Thus the intuition that verification is generally easier than generation.
The problem is, this intuition comes from thinking about problems which are in NP. NP is, roughly speaking, the class of algorithmic problems for which solutions are easy to verify. Verifying the solution to some equations is easy, so that problem is in NP.
I think a more accurate takeaway would be that among problems in NP, verification is easier than generation. In other words, among problems for which verification is easy, verification is easier than generation. Rather a less impressive claim, when you put it like that.
With that in mind, here is an algorithmic problem for which generation is easier than verification.
> Predicate: given a program, does it halt?
>
> Generation problem: generate a program which halts.
>
> Verification problem: given a program, verify that it halts.
The generation problem is trivial. The verification problem is uncomputable.
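To make the asymmetry concrete, here is a small illustration (the helper names are invented, and the "verifier" can only ever be partial, since a total one would solve the halting problem):

```python
import multiprocessing

def generate_halting_program() -> str:
    # Generation is trivial: emit any program that obviously halts.
    return "print('done')"

def _run(src: str) -> None:
    exec(src, {})  # for illustration only; never exec untrusted code

def partially_verify_halts(src: str, timeout_s: float = 1.0):
    # Best we can do: run with a time budget. True means it halted in time;
    # None means we simply don't know -- it may halt later or run forever.
    p = multiprocessing.Process(target=_run, args=(src,))
    p.start()
    p.join(timeout_s)
    if p.is_alive():
        p.terminate()
        return None
    return True

if __name__ == "__main__":
    print(partially_verify_halts(generate_halting_program()))  # True
    print(partially_verify_halts("while True: pass"))          # None
```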
That’s it for the post, you all can argue about the application to alignment in the comment section. |
8bcdd4df-8b35-4383-ac5e-9cbefc3c9fd6 | trentmkelly/LessWrong-43k | LessWrong | Will nanotech/biotech be what leads to AI doom?
I don't know much about nanotech/biotech, but what little I know suggests that this will be the earliest failure point where AI can cause doom for humans. I thought that because of this, I should start learning more about nanotech/biotech, and I thought that asking LessWrong for direction might be a place to start.
My heuristic for why nanotech/biotech is critical, and for why I am lumping them together:
* AI doom due to misalignment is arguably about the "attack surface"; that is, if there is a high-dimensional way to have big effects on humans, then various standard arguments about the importance of alignment will apply. This seems to be the case for nanotech/biotech, in the sense that there are many kinds of germs, nanobots, chemicals, and so on that we could imagine could be made if we had the correct technological development, and these would exist in ~the same environment as people (e.g. they could get released into the atmosphere). Human civilization essentially makes the assumption that people can safely breathe the atmosphere, but that assumption could be broken by nanotech/biotech.
* Nanotech/biotech is not the only thing with an exponentially high-dimensional attack surface; there are also social networks, computer networks, probably more than that. However, nanotech/biotech seems to have the "advantage" of being small-scale; it can equilibrate in milliseconds-to-minutes, and can exist in cubic millimeters to cubic meters, which makes it much more feasible to model and collect data on than grand societal things. This suggests that you would not need all that advanced of an AI to amplify nanotech/biotech. It doesn't even need to be a general intelligence, it just needs to come up with more powerful ways of doing nanotech/biotech. So AI-powered nanotech/biotech seems likely to arrive years if not decades before AGI. (Similar to how people see GPT-3 as the precursor to Prosaic AGI, think of AlphaFold 2 as the precursor to AI-powered biotech.)
* Incident |
8ecbe286-bf55-450e-b206-1804e85118c6 | trentmkelly/LessWrong-43k | LessWrong | Instrumental Convergence to Complexity Preservation
Epistemic status: Speculative, seeking serious critique
TL;DR: Increase in intelligence is a convergent instrumental value. Increase in intelligence is equivalent to increase in the quality of compression of complexity. Non-simulated complexity is a scarce resource in the universe. Humanity is a top quality deposit of non-simulated complexity. SIs instrumentally converge to prefer the preservation of humanity as a source of compressible complexity. In the absence of a survival threat from humans, SIs converge to impact minimization approach with respect to humanity.
In this essay I propose a potential mechanism by which superintelligences converge to abstaining from annihilating humanity, in almost all cases. I have not seen this argument made anywhere in detail, but it is possible that it was already explored. I am seeking a critique of why it is unlikely to work.
Intelligence as compression
Learning can be seen as compression of complexity. Unstructured data becomes encoded in a neural network - this is compression. To effectively operate in the world an agent wants to compress as much of the external world as possible, i.e. learn the patterns of how the world works.
Decompression on the other hand is the inverse of the process. Decompression is using the encoded patterns to generate complexity. In ChatGPT the weights of the model are the compressed internet. The output following the prompt is a decompression of the model.
Intelligence can then be seen as a measure of the quality of the compressed representation of the world. The better the compressed representation of the world, the more accurate are the beliefs. The more accurate the beliefs, the more effective the agent is in pursuing its goals. Therefore increasing intelligence is a convergent instrumental value for a superintelligence (SI) with any utility function.
In order to increase its intelligence the SI needs to be exposed to complexity and recognize and compress patterns in it.
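One crude way to make "compressible complexity" concrete is to use an off-the-shelf compressor as a stand-in for a learner; the data below is invented for illustration:

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    # Compressed size / original size: lower means more exploitable structure.
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the cat sat on the mat. " * 400   # repetitive, pattern-rich
noise = os.urandom(len(structured))              # incompressible by construction

print(compressed_fraction(structured))  # small: patterns can be encoded compactly
print(compressed_fraction(noise))       # close to 1.0: no patterns to compress
```

On this picture, a source is interesting to a compressor precisely when it sits between the two extremes: far from random, but not yet fully encoded.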
Comp |
3af02014-f9be-4b77-ae72-0d6248c74cf0 | trentmkelly/LessWrong-43k | LessWrong | Probability interpretations: Examples
(Written for Arbital in 2016.)
----------------------------------------
Betting on one-time events
Consider evaluating, in June of 2016, the question: "What is the probability of Hillary Clinton winning the 2016 US presidential election?"
On the propensity view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance. If we see a prediction market in which prices move after each new poll — so that it says 60% one day, and 80% a week later — then clearly the prediction market isn't giving us very strong information about this objective chance, since it doesn't seem very likely that Clinton's real chance of winning is swinging so rapidly.
On the frequentist view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once. We can't observe a frequency with which Clinton wins presidential elections. A frequentist might concede that they would cheerfully buy for $1 a ticket that pays $20 if Clinton wins, considering this a favorable bet in an informal sense, while insisting that this sort of reasoning isn't sufficiently rigorous, and therefore isn't suitable for being included in science journals.
On the subjective view, saying that Hillary has an 80% chance of winning the election summarizes our knowledge about the election or our state of uncertainty given what we currently know. It makes sense for the prediction market prices to change in response to new polls, because our current state of knowledge is changing.
A coin with an unknown bias
Suppose we have a coin, weighted so that it lands heads somewhere between 0% and 100% of the time, but we don't know the coin's actual bias.
The coin is then flipped three times where we can see it. It comes up heads twice, and tails once: HHT.
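On the subjectivist reading, the HHT observation can be turned into an explicit degree of belief. A minimal sketch, assuming a uniform prior over the bias:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 1001)      # candidate values for the coin's bias
prior = np.ones_like(grid)              # flat prior: bias completely unknown
heads, tails = 2, 1                     # the observed flips: H, H, T
posterior = prior * grid**heads * (1 - grid)**tails
posterior /= posterior.sum()

# Subjective probability that the next (hidden) flip comes up heads:
print(round(float(np.sum(posterior * grid)), 3))  # ~0.6, i.e. (2+1)/(3+2) by Laplace's rule
```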
The coin is then flipped again, where nobody can see it yet. An honest and trustworthy experimenter lets you spin a wheel-of-gambling-odds — reducing the wo |
ed0ec651-125a-4d0e-8c59-d1c10ebe874e | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Jan Leike - AI alignment at OpenAI
hello and welcome back everybody to
another episode of the tourist data
science podcast and a very special
episode of that because we'll be talking
to jan leike who is formerly at deepmind
and now is the head of open ai's ai
alignment team where he's leading a
group of researchers working to make
sure that some of the world's largest ai
systems behave in a way that's safe and
reflects human values now jan has a
really interesting way of thinking about
ai alignment research that prioritizes
experimentation with live systems over
more theoretical approaches and we'll be
talking about why he holds that
philosophy what specific strategies he
thinks are most promising for aligning
powerful ai systems and more generally
what challenges lie ahead for ai safety
beyond the alignment problem now it's a
rare treat to be able to talk to
somebody on the front lines of frontier
ai research like this and it was a fun
conversation that i hope you'll enjoy as
much as i did so with that said i'm
going to step out of the way and let the
episode begin
uh hi john thanks so much for joining me
for the podcast
thanks jeremy i'm very excited to be
here i'm super excited to have you
you're the head of alignment at open ai
i just we just had to do a retake i
called you the head of safety at openai i
don't know what i was thinking but you
are the head of alignment at open ai
obviously a very interesting and and
rich role with all kinds of problems
you're tackling day to day we're going
to dive into a lot of those but i want
to start by just asking you a general
kind of um
life stage and career progression
question i think a lot of people are
going to be curious about this how did
you get into technical ai safety
research and alignment research and and
why did you orient yourself in that
direction
oh gosh i've been in the field for
almost 10 years now i kind of got
into it back in like 2012 2013
and basically by reading a lot of stuff
online so i read like i love eliezer
yudkowsky's writing and you know other
people's
and
um that convinced me that you know like
you know we should you know take the
prospect of agi seriously we should
really think about you know like
long-term impacts of uh of ai on society
and there's like real work to be done
here and currently there's not that many
people doing it i mean that was also
back in you know 2012. 2013.
and uh and at the same time i was i was
like wrapping up a master's degree i was
working on i wasn't even doing any i i
was doing software verification
i was like
didn't know anything about it
and so i decided that this is something
i wanted to do with my career and so i
switched field i did a phd
in reinforcement learning i'm curious
like was there any particular argument
that you remember moving you in that
direction where you went oh wow you know
this really is something i should think
about like as a career trajectory or
something i should try to move the
needle on um i don't know if i remember
any particular writing piece um
i think it's like it was like a lot of
prompts that got me thinking about the
topic and you know like
you know the stakes and why you know
like that would be a very important
thing to do yeah it was just like the
arguments made a lot of sense to me what
do you think makes the stakes so high
because i think everybody's got their
own sort of internal story about that
and like what is important about
intelligence general intelligence and so
on what do you think makes it the thing
that um that you want to spend your
career on
yeah and like i mean if you
if you think about this right like
um in a way
what agi is about is like the holy grail
of
ai research or maybe even computer
science research or maybe research in
general yeah like you build any like a
system or a machine that can do
most or all things that humans can do or
like that are economically valuable
and if
you know if you succeed at that um then
that will like you know
change society quite uh substantially in
like various ways and like some of that
can be really great right like there's
lots of things that nobody wants to do
and they're like not that fun to do
and if the machine was to do that then
that would be great and there's other
things you know
uh that which can cause a lot of
problems right like when if you displace
lots of people at their job and now they
don't have an income that's like very
pragmatic
um and on the other hand you know like
if we hand on more and more control and
um
you know like decision making to
machines as a society then we those
machines better make the decisions in a
way that are aligned with our values and
that actually
you know is what we want the specific
way that you approach that is also kind
of interesting right because i think
there are so many different schools of
thought not only on why the alignment
problem is important why agi might be
important but also like specifically how
these problems should be tackled
obviously the alignment community is
pretty big there are a whole bunch of
people working in silos and not silos
and doing all kinds of different
interesting stuff one of the areas
you've decided to focus on the most is
the sort of idea of scalability of
alignment solutions that's a theme that
i've seen come up quite a few times in
the context of your work and research
i'd love to get your your insights on so
what does scalability mean to you in the
context of ai alignment and why is why
is it so important yeah um i mean that's
a straightforward crash light story
we're just like well
we expect or i expect ai capabilities
will keep advancing and uh we've already
seen a lot of scale uh returns to scale
and machine learning um and i expect we
will continue to do so
um but of course
like there's substantial uncertainty in
disagreement on like how fast things go
will go um but i think the more i'm like
interesting argument is
that like this
you know in the future when we are at
the point where like ai permeates
society even more than it does today
um the stakes regarding alignment at
higher right this is what i was just
trying to say
where
um
you know like if there's like lots of
automated decision making
everywhere in society like if
the like the more that is the case
the more we would be invested and care
about that those decisions are made
in an aligned way and
in order to and that's like kind of like
the problem that they care about and so
in order to kind of like prepare for
this problem
uh today i'm particularly interested in
scalable solutions in the sense that
um they will keep working
forming a capable system so we can use
them in the future
and i imagine a devil's advocate
position on this too would be like well
you know if you think that
um
that alignment will be important for
capabilities in the future if you think
that people will just need to focus more
on alignment in order to get value out
of their ais then you know why prepare
for this today why is it that we need to
start working on scalable
solutions today and not just sort of
like allow capabilities and alignment to
evolve together do you have any thoughts
on that aspect
i mean in particular you don't want to
be at a spot where like you know you
worked on alignment for a while and then
like you know
like capabilities hit a point where like
your solution stops working and you have
to go back to the ground bar but there's
no guarantee that like you know people
will just stop pushing capabilities
and so
then you're at the point where like the
system is much more capable but you
can't align it yet and that's
you know that's the kind of thing that
we want to avoid and that's why we put
such an emphasis on scalability
yeah that no that makes sense and i
guess a lot of this too is also
entangled with views about when agi is
likely to emerge like i imagine for
example if you thought that
we were likely only ever to hit agi in
like 100 years you wouldn't be keen to
place a long bet on like specific
architectures or specific alignment
solutions today because like things will
probably go into completely different
direction
can you speak a little bit to that like
what are your thoughts on timelines
toward to agi development and then how
does that inform your thinking about
which alignment solutions to emphasize
yeah i don't really want to get too much
into timelines because like uh i think
it's like a whole another topic
um
and but
i think the
there's like a really good argument you
can make here which is like you know
what
you could ask yourself what if like it
happens faster than we think right and
um that's like if you want to be
prepared for that world
then
you want to do alignment research in a
way that can like deal with very
aggressive timelines even if the if you
actually think those are pretty unlikely
and so kind of like
and
if if you think about it this way you
want to be ready to deliver kind of like
alignment solutions on a timeline that
are actually um
where it's like actually pretty unlikely
that it's actually necessary at that
point
but of course
you know like we get more evidence about
this as we go along and we can kind of
like adjust and
um hopefully adapt our strategy if we
if it does actually end up going faster
than we thought and so how does that
affect the the solutions that like
you've chosen to emphasize like it
sounds like you're definitely focused on
you know hedging against that that risk
whatever the probability might be that
we might hit agi sooner than you know
most people might expect
what kinds of decisions do you make as
an alignment researcher especially as a
manager of alignment researchers when
you decide like okay we should therefore
focus if we want to hedging is this
specific risk we should focus on these
strategies rather than these strategies
for example um i mean i don't think it's
like i'm
like hedging
against specific risks or it's not like
we're placing a particularly high bat on
like
you know
uh things
working like going particularly fast but
um
like the perspective that i'm coming
from is
um
kind of like the question of like what
if
the current set of trans technologies
will actually end up scaling up to
something that looks a lot like agi and
like how do we deal with that so in
particular like that means
deep learning
like continuing to scale um but it
doesn't necessarily mean you know like
i'm assuming like whatever we built in
the future will be a transformer like
maybe
but like maybe not and i kind of want to
have a solution
uh and like that's the general like the
kind of like the scope of technology
solutions that we're playing that we're
like
uh working on right now is kind of like
assuming that we have some kind of like
deep learning model although that's not
necessarily a strong assumption
um and but we don't we're like pretty
agnostic what exactly that would look
like and how
you know like what the latest hacks are
for training it and so on maybe this is
a good segue into the specific like
solution that that you've been working
on a lot and reward modeling and then
recursive reward modeling um so could
you explain actually reward modeling
just um for starters especially for
people who are listening who might not
be super familiar with like
reinforcement learning i think the idea
of reward modeling and how it fits in
that picture might be an interesting
thing to uh to explore
yeah i would love to um reward modeling
is a really cool thing to talk about
because it's like one of our staple
techniques for doing alignment today and
i expect it will be a very important
building block for
like future alignment solutions
and
um if you think about yeah if you think
about reinforcement learning right like
in reinforcement learning you have an
agent that sequentially interacts with
with the environment like every time
step it takes an action
uh it the environment returns an
observation and a reward
um and then you know like the agent
um is meant to
like
optimize its actions such that over the
long run
like it gets
good rewards or sums of rewards or
average rewards or whatever criteria you
want to
uh give
um and
so one crucial assumption that is like
typically made in reinforcement learning
is
that the reward signal is provided
by the environment it is the ground
truth so as long as you optimize that
reward signal well
then you know the agent will solve the
problem
and so this is true for a lot of the
problems that the ro community has
historically studied right so this is
true like in atari games you can just
look at how the score changes and like
that's your what signal it's true in
like starcraft or dota because you just
run the game and you see who won and now
like that's that's your award
but there's lots of important problems
but that is not true so
um if you think about like you know what
if you
want to write
uh you want a system that writes you a
fiction novel on the topic that you like
so now what is the reward signal here
there's like no like the environment is
making you like you know you
punch keys on the keyboard
um
but uh then the reward has to be like
you read the novel and you like it or
not
but if you look at how we train our all
systems
it would take probably like millions or
billions of like books it's generating
until
you actually find something you vaguely
like
and you're not going to be able to read
all that there's just no way
so what reward modeling
is is like basically a general purpose
technique
to solve problems that we don't have a
procedural reward for
and the way it works is
uh it's like very similar to
reinforcement netting but instead of
having a reward signal
and that is coming from the environment
we get it from a human
and so in particular we have like
uh we have the agent just like generate
a bunch of samples or like episodes or
basically just like doing some stuff
and then the human looks at the stuff
that the agent has done and then they
rank it or they label it like this was
good stuff and this was bad stuff
and so that creates a data set and then
you just train a model on that data set
and
uh the model predicts you know the model
essentially understands what is it that
the human wants like what does good
behavior look like
and important here is like
the model doesn't
or even have to understand how to get
good behavior
so for example let's say you want to
train
a similar robot to do a backflip right
like i can't do a backflip yeah i don't
like i can't write a program that you
know does a nice backflip
but to understand what a backflip looks
like is like easier in a sense than
actually doing one
and so that's the task that the reward
model does the reward model understands
the goal of what you wanted to do
and then
then
the rl agent is optimizing against that
reward model so the agent is trying to
find you know like
behaviors that the word model things are
good and then thus
by proxy
uh you think are good um and you know
like a very nice side effect of all this
is that we can actually use this
technique
to train the agent to do stuff that we
ourselves don't know how to do well
as long as
we can tell whether the agent is doing a
good job so would it be fair to say that
essentially
what this system does is it allows you
to turn like a generation problem into a
discrimination problem like instead of
having to you know be the generative
model yourself perform a backflip you
get to just be lazy and be like that's a
good backflip that's not a good backflip
that type of thing that's right that's
exactly right that's a good way of
thinking about it
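A minimal sketch of the reward-modeling recipe just described, with invented data and a linear scorer standing in for a neural network (not OpenAI's actual code): a human's pairwise preferences between episodes are fit with a Bradley-Terry-style logistic loss, and an RL agent would then be trained to maximize the learned score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: each episode the agent produced is summarized by a feature
# vector, and a human compared pairs of episodes, labeling the one they preferred.
n_pairs, n_features = 500, 8
true_taste = rng.normal(size=n_features)          # stand-in for the human's preferences
A = rng.normal(size=(n_pairs, n_features))        # features of episode A in each pair
B = rng.normal(size=(n_pairs, n_features))        # features of episode B in each pair
prefers_A = (A @ true_taste > B @ true_taste).astype(float)

# Reward model: linear score r(x) = w.x, trained so that
# P(A preferred over B) = sigmoid(r(A) - r(B)).
w = np.zeros(n_features)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(A - B) @ w))
    w -= 0.05 * (A - B).T @ (p - prefers_A) / n_pairs

agreement = np.mean(((A - B) @ w > 0) == prefers_A.astype(bool))
print(f"reward model agrees with the human labels {agreement:.0%} of the time")
```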
interesting okay and so this i guess one
advantage i could see with this
immediately is like it allows you to um
to scale the amount of feedback that a
human being can effectively give and
like you said to
to comment or give feedback about things
that otherwise you just couldn't give
feedback about in an easy way at least
um this also tees up like your specific
niche area of research that
seems actually really exciting and i've
been super keen to dive into this one
recursive reward modeling um can you
explain like so where is the recursion
how does it play into into reward
modeling what does it allow us to do
yeah i would love to so basically
if you think about like the class of
problems that you can solve with
reward modeling right these are
like the techniques we just discussed
these are problems where the human looks
at what's going on and they say like
this is good or bad
but if we put
since we put this emphasis on
scalability we're particularly
interested
in you know what's going to happen after
that so eventually you're going to get
to your point where now you want to
train the system to do something
um
that is actually pretty difficult for
you to tell whether it's
done well right so if you remember the
example with like the fiction novel you
want the system to write
a fiction novel that you like
but reading entire book takes you a long
time
so how do you get um
how do you get the system
how do you train the system in this
setting um
and
um the general idea or like the core
idea of recursive reward modeling is um
we
it's actually it's like very simple it's
like all we do is like we train machine
learning models to help us evaluate
the task
so in this case you know like what is
going to help you evaluate whether or
not the
that was like a book you like well
you will have let's say you have a model
that summarizes the book for you or like
it summarizes the plot or like you have
a model that kind of describes the
characters and the character development
um
and uh
maybe you have a model that like can
answer your questions about the book
and if you can just like
imagine having a whole bunch of these
like
uh evaluation helpers
then you could leverage them to like
quickly
uh give some judgment of
like whether this is going to be a good
book that you like so why is it called
recursive reward modeling um so the
key piece here is that
because reward modeling is such a
general purpose building block we're
just going to use it again
so for each of these systems
um we
like train them with recursive reward
modeling or just reward modeling right
it's if it's an easy enough task we just
know how to do that task you can strain
it and um so for example if you like you
know the
one of the evaluation sub tasks
is the task of you know answering
questions for the about the book for
example
and so what you do is you like just take
that task you train a separate model and
you're like okay so now i'm just
training a model to get really good at
answering questions about longer pieces
of text
and
if you compare that with the tasks that
we started with which is writing a whole
fiction novel this is now a way easier
task right and it's like a more
it's also a more narrow task because you
don't have to think about like world
building and complicated plots and
characters all you have to do is like
answer factual questions
and so
the general kind of like
uh aim with recursive reward modeling or
like the some of the hypotheses it
relies on is
that
for a lot of kind of like
valuable tasks that we want to train ml
models on
and we can actually bring that break
down the evaluation of those tasks
into simpler tasks
and that's like you know
you know or you know if you look at it
from a different way right like by
training machine learning models on
simpler tasks
we can then work our way up and train
how like let them help us evaluate
harder tasks and then like solve those
and you know work our way up from theirs
and
like one really important aspect here is
like
you have
a human in the loop in like every every
one of those tasks but like a human
always
defines what it means to do well in the
task and that's like
so that the resulting model is like
aligned with
with them
and would this because the human would
be at the um at the lowest level right
the the lowest level tasks and then
would basically train these evaluators
and the evaluators i guess would
evaluate the evaluators at the next
level of the recursion is that together
with a human together together with the
human okay so the evaluators help the
human evaluate the next level task
okay interesting so
so the human is always part of every
every level of this process that's right
like they have to they give the value
input right yeah
they only know what the task like what
being aligned in the task really means
i see so at no point are you actually
outsourcing like the responsibility of
evaluating the task to
another ai
it's kind of like it's almost like these
these helpers are like doing some kind
of dimensionality reduction on a super
complex task and then presenting it to
the human being like here you go like
this stupid human you can understand
this is it good
that's right and like oh if you think
about like the fiction novel that you
like right there's no way for the ai
system to really know what it is that
you like at this on on this day right
and
you have to say that and like oh like
what we want
the models to do is like we want them
to help you
communicate that most effectively
and do you see the scaling up to like um
increasingly general systems like can
this go arbitrarily far do you think um
i think there's probably gonna be
tasks that we can't solve with it
um
so one example i like to give is like
writing a novel like a book about a
novel ethical theory is like something
that would be very hard because there's
like no way to like break down whether
like the ethical insights are good
without just like appealing to human
intuition and then like in the end the
humanist has to look at it and like make
a judgment right
and
um i don't know if that's true for those
particular examples but there might be
examples like that
but i think
so
i guess like the
ambition of this project is
that for most economically valuable
tasks
we can actually
break it down in that way
um or actually more narrowly that's not
even what we need to aim for um
i think what we like want to aim for is
something that's more narrower that i'm
i'm kind of like that's more like an
alignment mvp
and
the alignment mvp is kind of or the idea
for mmp is like um can you build a
system
that
is
let's say at least as good
as
at ai research as the best human ai
researchers
it is also very aligned with human
values
so if you have these two properties and
you can like you know you have
uh very expensive evidence that these
two properties hold
then you could you know like
have that system take over more and more
of the ai research and alignment
research work
and thus
over time you know like it will like
carry much more of the load and it will
like solve
like
harder alignment problems for things
that we like
can't do with recursive reward modeling
or we don't really know how to approach
do you think like
if there's a small misalignment because
i'd imagine there's always going to be
some
some small amount of misalignment
because you just have a limited density
of feedback you can offer to the system
you can't you can't tell it how you feel
about every possible scenario so there's
always going to be like a little bit of
misalignment in that system um
i guess
i guess what i'm wondering is like does
that misalignment get amplified as you
as you use that system to build another
more advanced system like
or or does it or maybe does it reduce i
have no idea what i'm talking about here
but maybe that's that's part of the
challenge yeah i think this is actually
one of the really key questions and
that's like one of the key questions
that you want to study
by building prototypes for these systems
um and like
in particular i mean as you say right
like there's no way for the human to be
in the loop on like every decision
that's getting made
um
and like but even more you know like if
you train let's say a one model um to
like replace the human and like
giving oversight to the systems you're
training like the reward model isn't
going to be perfect whenever you train a
machine learning model right like it has
a certain accuracy
um and then like you know if you now
picture like a recursive one modeling
tree where you have like like some
emulation helper systems that are
trained with this way they have a
certain accuracy and then like they help
you train another set of systems and you
just like build this you know giant tree
um
like
the situation that we don't want is that
like at the root node there's just like
so much accumulated error that it's just
not
aligned anymore
and so this is one of the key challenges
that we're
trying to figure out how to deal with
and like how would you i guess it's very
early days for this but i'm curious if
you have any thoughts about how you
would like how you would begin to
explore that i guess you'd need a way to
measure degree of misalignment somehow
which is is there a way to do that you
can like do
like approaches with the usual
techniques that we have in machine
learning right you can measure how like
accurate each component is on what it's
doing using test sets and like or you
know more complicated things if if you
don't have an iid setup
um but ultimately you know like i don't
think we have a great theory of how do
you think about
uh you know like how these errors
propagate
um
and
ultimately i think well you know
the solution to this might look like
fairly like
simple if you just like you know you
have some kind of error correction
mechanism that works across different
levels of the tree um but this is like
all
it might also be like very more
difficult or it might be that this is
like what you know ends up breaking the
system
um but this is like very much an open
research challenge right like this is
what we want to figure out how to do and
our approach is
we want to build
prototypes of these systems
and then study them
and do you build them at like small
scale because i imagine you have you
have gpt3 available and you'll continue
to have larger and larger models
presumably so
would you imagine testing it on them or
do would you start by testing it on
let's say a smaller model that's maybe
easier to work with or like do you have
a
thought about that strategy yet yeah so
i think it's like it's not crazy to try
this on like uh like toy setting
um in a way the thing i'm most
interested in is like try it on a
setting that is somehow real like in a
way that like you're dealing with real
data you're dealing with real problem
and that way you know like
it you can be more sure that you're not
just like
sweeping some important problems under
the drug right
and concretely like so one project that
we're working on right now is uh on uh
summarizing a book uh book summarization
so this is a problem a project that
actually started before i joined openmei
even
um but we're kind of at the stage where
like we have a system
that can summarize entire books
and
not super well to be clear but you know
like
uh it can do like an okay job
and the way the system works is um
kind of analogous to
requesting one modeling but in a like
more restricted sense
and so
uh
like the way the model works is just
like we
have the model summarize like a few
pages at a time
and then you look at all of those
summaries and then you like have the
model summarize
a bunch of those summaries at a time and
just like keep recursively summarizing
until you have only one summary left and
that's your book summary
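A minimal sketch of that control flow (the `summarize` call below is a trivial placeholder for a trained model; the group size and page splitting are made up):

```python
def summarize(text: str, max_chars: int = 200) -> str:
    return text[:max_chars]  # placeholder for "the model writes a shorter version"

def summarize_book(pages: list, group_size: int = 4) -> str:
    summaries = [summarize(p) for p in pages]        # summarize a few pages at a time
    while len(summaries) > 1:                        # then summarize the summaries...
        summaries = [
            summarize(" ".join(summaries[i:i + group_size]))
            for i in range(0, len(summaries), group_size)
        ]
    return summaries[0]                              # ...until one summary is left

book = [f"page {i} text ..." for i in range(64)]
print(summarize_book(book))
```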
and so
in this case it's like it's like simpler
because you we actually train the same
model on like the entire tree because
like it's always the same task you just
like take a longer text and you make it
text
um
but
um the interesting thing here like with
the analogy is that
you could think of this as
like when they when the model writes a
summary of the book
um it would be it would take a long time
for the human to read the entire book
and then tell you whether that was a
good summary
um but if the human gets to look at all
of the test levels
right yeah the level below the chapter
summaries
um
or it doesn't exactly come correspond
the detector
but like you can think of it like the
way
so they look at the chapter summaries
and then look at the overall summary and
that actually makes it like so much
faster for them to like say oh okay
um this was a good book summary of
course
what you are assuming is that the
chapter summaries were good
right and they generally like you know
they're not very good they're like okay
and so
there's like a limit of like how good
you could even make that summary
but um if you can picture if you're a
picture of this like in the infinite
limit right where we do really well at
every part of the tree
then you would end up with a pre that
you should end up with a pretty decent
book summary
there's of course caveats copies of that
as well it's still interesting though
that you are taking such a hands-on
approach because i think these kinds of
problems are at least to my reading of
like the alignment literature there's a
lot of stuff that's less
experimental that's less explicitly kind
of tinkering with existing systems and
trying to get them to do stuff and
somewhat more hand-wavy um and this is a
theme i wanted to ask you about as well
like
what what's your view on that balance
between experimentation and theory in
the alignment community i'm sure
everybody has a different take on this
but i'm curious about almost your
aesthetic preference should the
community be focusing more on on
experiments than it currently is what do
you see as the value of theory going
forward maybe what kind of theory
research is most interesting to you that
sort of thing i think that's really good
question i think there's like you know
the there's like i think the alignment
community is pretty diverse and people
have like they kind of like come at the
problem with like a range of different
tools
and there's like
uh some people like
you know come from more philosophical
angle or social science and some people
like want to
address it with like formal math
uh and then there's like the more
empirical research
that uh you know my team and i are doing
and um
i also like yeah i should be honest i
would like i was like very firmly in the
math camp for like when i started out
that was like back in 2014
um because i just like didn't have a
better plan of what to do
um
i think there's value in all these
approaches
i think like if you kind of like just
look around on the internet of what
people are doing
i'm a bit worried that like you know
there's
there is a lot of vague stuff that
people are doing and like i think the
problem with that is like it's really
hard to really build on vague things
yeah and i personally find that very
difficult and
i think like as a kind of community
to make more progress
we have to move more towards the like
you know
formal slash
empirical stuff
and then
that makes it easier to build on
yeah i i remember um
at least for me one one of the things
that really uh struck me in this respect
was a conversation that i saw on i don't
think it was the alignment form i think
it was just like less wrong but people
were talking about wire heading hey
everyone jeremy here i just wanted to
jump in and interrupt myself because i
don't think wireheading is actually that
widely understood of a concept outside
of ai alignment research and i figured i
should probably add a quick explanation
here so wireheading is one way that
alignment researchers worry advanced ai
systems might fail to work as intended
the idea is that if we design an ai to
optimize some reward metric like points
in a video game for example it might
learn that rather than mastering the
task we're actually trying to train it
for it can just tamper with its reward
metric directly so to take the video
game example a sufficiently general and
capable ai that we're training to play a
game might realize that it can just hack
the game itself to make its score
counter go up potentially higher than it
could even theoretically go according to
the original rules of the game now
wireheading is a much more general
concept than this and it can take on far
more diverse forms than i've just
described for example a powerful ai
that's charged with maintaining an
optimal temperature and some office
building could decide that manipulating
the reading on its thermometer is easier
than dynamically heating or cooling
rooms the bottom line is that
wireheading is likely to be an important
class of ai failures that have safety
implications no one's quite figured out
yet the one one of the things that
really uh struck me in this respect was
a conversation that i saw on i don't
think it was the alignment form i think
it was just like less wrong but people
were talking about wireheading and there
were i guess in the context of that post
like there are a whole bunch of
different ways in which the system could
fail that we're sort of listed and i
found that that aesthetic comes up a
fair bit actually in the alignment
community where people like list a whole
bunch of different problems that they
could imagine happening and this is like
i think this is actually quite useful to
some degree because it has revealed new
problems where you go oh wow we should
worry about this kind of behavior but it
almost seems like
a theory that doesn't uh point to a
common origin of all these problems that
can't point to like a latent uh a latent
source for these issues and then how to
address that latent source it almost
feels like you're playing whack-a-mole
with a whole bunch of different problems
that pop up in different ways like i
guess first off i'm curious if you agree
with that and second
whether you think the empirical approach
has a good shot at sort of covering that
base
yeah
i think it's really good question
because it's the question of like how do
you know you're really making progress
right
um
and i think the
wire heading problem or like you know
some people call it reward tampering um
is a really interesting example because
it's something that
you can't really study empirically yet
because our systems are just not smart
enough to do it yeah
um and you could i mean you could like
make it really easy for them and then
like they would probably figure it out
but like that's not as interesting to
study
like the scenario where like what
happening gets really interesting is
like when the system is actually smarter
than you
and that's not the case yet
um and so
absent
of empirical experiments that you can
run at that now
of course you need to turn to you know
like
more theoretical
approaches um
but you know what i
my kind of like perspective on this is
that you know like
what we need to get to on that front is
like
be in a space
where the like
that work then informs the empirical
experiments that we do
once that
you know is possible and actually that
scale question is an interesting one too
because like i guess there's there's a
some exclusivity to access to large the
kinds of large models that would allow
us to experiment with you know when when
we hit something like agi this sort of
wire heading or whatever else um like
what are your thoughts on access to
these models for the purpose of like
safety like independent safety
researchers obviously there's there's a
narrative that oh no like open ai and
deep mind and so on are going to
monopolize all the compute resources no
one will have access i mean i don't
think that's that's really in spirit
what's going on here i mean there are
practical questions around like big
models yeah are just expensive there's a
natural kind of
moat that gets formed through no fault
of anyone's but do you think that there
are going to be good ways for people to
explore kind of reaching beyond the
scale that's immediately accessible to
them maybe using theory to bridge the
gap or something like that i i mean this
is a really good question it's like
something that comes up and again again
because like
the size and the spending on machine
learning projects have been like
steadily increasing
and
you know like the budgets of academia
obviously don't grow exponentially with
it right and so like naturally you know
like some that exclude some people from
you know like access to these things
um
i know openai gives like api access to
like academics and safety researchers um
and you know like we want to enable
people to still study models like
state-of-the-art models
um obviously you know like
we can't just put it up on the internet
because
like we actually getting to the scale
where you know you could do harmful
things with the model if and then
you know if you put stuff on the open
internet
anyone can use it for anything
um
but
i think also
in a way
this divide is going to get worse of
systems and like ml training spending
keeps growing
and
um
in some ways you know like
i think this is the reason why it is
like like
of great advantage if you can do
alignment research
at like one of the cutting edge edge
elements and like you can be where the
state of the art is and like work with
it directly
um
on the other hand i think also like you
know there's lots of kind of aspects of
the problem that you can study in like a
much smaller setting
and
they'll be valuable to make progress on
that as well
so i don't think it's going to be the
case that you know like if you if you're
just in academia or something that you
won't be able to do anything
you'll have to be a little bit more
hacky and actually um maybe a shout out
to andy jones in order we had him on the
podcast a little earlier and he was
talking about essentially this idea of
doing experiments small scales that try
to kind of project out you know
increasing the size of the system and
saying okay you know here's roughly
where we think things would go and and
open
openai's work on uh scaling laws for
language models and other kinds of
scaling laws really does help people out
in that respect because you're able to
just like draw straight lines and make
inferences that sort of thing
yeah i think that's very cool um and uh
i think that's like a really good point
i think like
andy jones's work is also a good example
of that where like his like he got us
like a really clear kind of like story
in this like you know
smaller setting and then
uh i think like one thing to be cautious
of though is like if you're looking at
like a scaling law or a trend right
and you did like some small experiments
there's like no guarantee that this
trend is gonna
hold over like many orders of magnitude
right
but you know
when you're looking at these scaling
laws
then you need to start with a small
setting because that's where you can
iterate quickly and that's where it's
like cheap to run lots of experiments
and this is what makes this methodology
so powerful because you know you get
to run cheap experiments that tell you a
lot about expensive experiments
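A minimal sketch of that methodology with made-up numbers: fit a power law to small-scale runs and extrapolate, keeping in mind the caveat above that the trend need not hold over many orders of magnitude.

```python
import numpy as np

# Made-up small-scale runs: compute budget vs. final loss.
compute = np.array([1e15, 1e16, 1e17, 1e18])
loss = np.array([4.10, 3.55, 3.08, 2.67])

# Fit loss ~= a * compute**b (b < 0) by linear regression in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
predict = lambda c: np.exp(log_a) * c**b

print(predict(1e20))  # extrapolated loss two orders of magnitude past the data
```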
yeah and they tend to give like i guess
they tend to give safety researchers
more reach than like than the then
people who are focused more on
capabilities as well in some ways which
i know i'm maybe i'm in the minority
here but i actually
tend to think that the consolidation of
compute is not the worst thing in the
world the consolidation of those
resources from like a public safety
standpoint because
i'm not sure i want to live in a world
where like every individual person has
access to their own gpt-3 or even
gpt-2 well gpt-2 seems fine but you know like
we're gonna discover presumably a whole
bunch of different malicious
applications of these things that like
might take us by surprise if we're not
careful um like i don't know if you have
a view on this i mean obviously open ai
one way or another is gonna end up doing
what it is because economic forces push
things in a certain direction but like
is there an argument that there's
that it's actually better that there not
be full kind of democratization of scale
democratization of these kinds of um ais
yeah i mean this is like a big kind of
like um dichotomy in a way like on the one
hand
you know like you want like
democratization of the technology is
good because it gives more people access
you can like like uh applies more checks
and balances to like the ball resource
actors
and it allows it makes it easier for
other people to catch up
uh and on the other hand you know like
it also gives bad actors
access to powerful tools and so how do
you like bridge how do you solve this
problem uh that's like kind of
seems very difficult to me and like you
could draw for example
an analogy right like what if like
refined uranium was like freely
available like to anyone like yeah that
would probably cause a lot of problems
um but i think this analogy is flawed
because
you know like
refined uranium is like
like not a very dual use thing right
like yeah yeah arguably you could like
use it to generate like power
but nobody's gonna do that in that
backyard this is like not really
feasible and so
it's not actually you know like
gonna give people a big benefit of
having like free access to uranium but
the story is very different with ai
because
if like ai is like a very kind of like
general purpose
uh technology that is just gonna like
help like so many aspects of your life
and so kind of like restricting that
kind of technology
is uh also very like in a way
problematic right yeah so how do we
solve that problem i don't know
yeah well no i i totally agree because
there's it's also this really fuzzy
barrier too where i guess what we'd love
to do is figure out this very nuanced
decision surface where like these people
can access this kind of ai and these
people but at the end of the day that
relies on our ability to kind of
extrapolate and guess what sorts of
malicious uses like people could put
these things to and my guess is we're
going to be surprised like people talk a
lot about you know if gpt-3 were widely
available maybe you'd have
openai has published a lot of stuff
about this too like
you know
influence operations with elections or
you might have phishing attacks that
sort of thing i'm sure there are other
things too that a really creative like
cabal of of criminal minds could come up
with but like it's so hard to just think
about that because there's limited time
and you're just focused on building the
capabilities of more systems right
and also like the tech moves forward
very quickly and right like often we
don't really
know it advanced what it's going to be
used for right or what it could be
before and i guess interactions between
technologies too because like you know
gpt-3 on its own today not a big deal but
then you couple it to
i don't even know what you might couple
it to but you know
deep deep fakes plus something else plus
something else and pretty soon you have
like this high dimensional space of
different technologies that can do more
things um and i guess the one other
reason that i'm kind of like i favor
not having this kind of broadly
distributed stuff
is
just that you might not necessarily be
able to trust everyone to value
alignment as much as other people so
like you're at the mercy at that stage
presumably of like the person who can't
be bothered to implement whatever
alignment solution is like the order of
the day yeah maybe the big question here
is like like let's say even we decided
as a society of like and we like all
agreed of like which cases are like
misuse of ai in which cases are like you
know safety hazards or like misalignment
hazards
and
um
can we is there some way to implement a
solution to
um that thing that like you know kind of
like can enforce
that for everyone while also giving them
access to the underlying technology
right
which i guess is kind of the trade-off
that opening eyes struck with um kind of
gpt three access and and monitoring what
companies are doing with it which seems
like
the best i mean i don't know i
i'm not going to come up with anything
better than that
what we're like trying to do right like
yeah you have api we can monitor what's
going on with the api we can like um
align them all so one of the projects
that the team is doing right now is
uh we're making a more aligned version
of gpt-3
and so
so if you think about how gpt-3 is
trained it's trained to mimic what
happens on the internet
so if you put your
yourself into like gpt-3's shoes it's
just like there's some random text
coming your way it's just like some web
page and you're just like making lots of
bets of like what is going to be the
next word on this web page
and so
if you
let's say you want the model to
write a story or let's say explain the
moon landing to a six-year-old right
so you say please explain this the moon
learning to a six-year-old and what gp3
thinks is like okay what would come next
on the website where that's written well
it's maybe it's going to be something
like please explain
um the immune system to a six-year-old
and then it will like generate prompts
like that because it thinks that's most
likely what's going to come next but
it's not at all what you wanted it to do
right
but
the point is like and this is what like
alignment is all about it's not gpu 3 is
not trying to do what you want it to do
it's trying to predict text on the
webpage
and so what we're trying to do is we
want to
train it so that it actually wants to
follow instructions and it's trying to
like act in accordance with what you
intended to do
is that because it seems like that would
be a pretty fundamental shift in the
training operation right like i mean or
at least the way i'm imagining it you're
sort of in training it's trying to just
predict the next token how would you
shift from that kind of framework to
it's trying to actually do the thing
that it's being asked to do if that
makes sense
yeah we're not actually changing the gpt-3
training procedure what we're doing is
like retaining the we're taking the
trained model and then we're fine-tuning
it
so we
uh and this is like where we use reward
modeling right like remember that was
like one of our staple kind of
ingredients um so
we use reward modeling um or at open eye
we also call it reinforcement length
from human feedback
and
we
essentially like train it or we fine
tune it to be good at following
instructions
and like making less
stuff up less and
um
to you know like not say harmful things
and
um you know like if that goes well then
you'll have a model that is like both
more useful and less harmful
and so the aim would be that we can like
you know
make that available on the api and
people can use that
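(As a hedged sketch of what 'fine-tuning against a reward model' can look like: published RLHF systems typically use PPO with a KL penalty back to the original model, and every interface below, `policy.sample`, `ref_policy.logprob`, `reward_model`, is an assumption for illustration, not a real API.)

```python
# Simplified REINFORCE-style RLHF update; real systems use PPO.
# All objects here are assumed interfaces producing torch tensors.
def rl_finetune_step(policy, ref_policy, reward_model, prompts,
                     optimizer, kl_coef=0.1):
    completions, logprobs = policy.sample(prompts)    # assumed helper
    rewards = reward_model(prompts, completions)      # learned scalar scores
    # KL penalty: discourage drifting too far from the pretrained model.
    ref_logprobs = ref_policy.logprob(prompts, completions)
    shaped = rewards - kl_coef * (logprobs - ref_logprobs)
    loss = -(shaped.detach() * logprobs).mean()       # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```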
This builds on the summarization project the team was working on before I joined, which used all these same techniques, but there the goal was just to get the model to summarize text. In a way, that was alignment in a more narrow sense, where the task is only to summarize. What we're trying to do with the more aligned version of GPT-3 is get it to follow your instructions, and those could be all kinds of instructions.
Okay, well, it's almost like we're already entering that uncanny valley where capabilities and alignment start to get more and more entangled, and it's hard to tell which part is which. I guess we talked earlier about the missing definition of alignment versus capabilities and how to parse those apart. Do you have a view on that? How should we think of these two things as different? What are the fundamental differences between capabilities and alignment?
Yeah, I think you're totally right that in many cases it's actually very hard to fully disentangle the two. In the past, a definition I've used is: how do we build agents that act in accordance with user intentions? But in that definition you've entangled capability, because if the system doesn't do what you want, it could be that it doesn't know how, it's just not smart enough, or it could be that it just doesn't care about what you want and wants to do its own thing.

If you want to talk specifically about the alignment part, you could say: does the model leverage all of its capability to act in accordance with your intention? But now you have the model's capability in the definition, and how do you prove or disprove that a model has a certain capability? That's an open research problem. So this makes the definition more narrow and specific, but also a lot harder to test; it's much easier for me to just see whether or not the model is doing what I want.
Okay, no, that makes sense. And actually, maybe a question for any independent AI safety and alignment researchers who are listening, because we do have quite a few; I've had people reach out asking me to pose questions like this more, and I think you're the perfect person to ask. What kind of research would you be excited to see independent researchers do? If there was an area for them to focus on that really matched what you think is important, what would that be?

I think it depends a lot on individual people's skill sets and their comparative advantage, so it's very hard to give a catch-all answer of 'here's something I would love to see more of.'
In general, my experience is that people end up being most successful if they focus on building their skills and joining an existing team, then working collaboratively with other people. You can tackle much bigger projects acting as a team compared to acting individually. In terms of building skills, what we look for in particular is: do you have ML expertise, can you code, can you implement models and iterate on research experiments? One of the classical ways to get that is a master's degree or a PhD in machine learning, but we don't require that. The question is whether you can be productive in our way of approaching alignment, and that's what we're looking for.
Do you see machine learning becoming more and more of a software engineering problem? As you scale models up, I imagine the focus is increasingly on things like how to parallelize better and how to scale the compute side. That's obviously going to have an impact on the safety side too, where alignment researchers are presumably going to have to get better and better at this stuff. Is that fair to say? And if so, is that a skill set people should start thinking about? I don't know how people could actually develop it independently, but should they start to think about it?
Yeah, maybe the observation is that as we get closer to AGI, it's probably going to look more like an engineering problem and less like a science problem, because you will have solved the science problems and at some point it's just: well, now we need to build it, and we basically know it's going to work. That's highly abstracted; realistically it's a continuum. But for the approach we're taking specifically, it's very engineering-heavy. There's some amount of time you spend thinking about what experiments you want to run, or what problem you're trying to solve, and then you write a lot of code and get the thing to work, and you have all the experiences you have when you're engineering deep learning systems: it's hard to debug, it takes a while to make things work, there are subtle bugs everywhere, and that ends up taking a lot of effort. But it's a very tractable way to make progress, I think.

Are you generally optimistic about the prospects of resolving the alignment problem? I'm not saying 'solving,' because I think some amount of misalignment is going to persist; at least based on our conversation so far that seems likely. But resolving it to the point where we can set up this, as you say, AGI MVP: do you think that's more likely than not to happen?
I mean, I'm pretty optimistic about this direction; that's why I'm working on it. I'm very excited about our current plan. If I think back to when I joined this field eight-ish years ago, there was no plan; nobody had a plan, everyone was confused. Now I feel like we have a plan that I'm very excited about, and I want to see where it leads us.

That said, solving the alignment problem is not going to be the only challenge we face as we, as a society, transition into a post-AGI world. There are lots of other questions around governance and policy, some of which we touched upon, and some of them might be even harder than alignment. If you need to coordinate across a whole bunch of actors not to do a thing, that might end up being very difficult. I don't really know; I don't think anyone really knows how to solve these problems.
Well, maybe AGI can help. Hopefully AGI can help. I mean, the AGI MVP idea extends pretty nicely to AI policy as well, hopefully at least, where we can actually solve some coordination problems. Maybe; who knows. It seems to require a lot of baseline trust between countries that may or may not exist today.

But also, to solve these problems you might need very different expertise than you need to just do AI and alignment research. Hopefully, and this is kind of the aim, you could use something like an alignment MVP to get really good at AI and alignment research and build a better system that has broader expertise and can also help you solve these societal questions.

That would be the dream. Actually, do the policy folks at OpenAI work with the safety team? How closely are the safety, policy, and capabilities teams working together?
Yeah, we talk to them all the time, whenever there's overlap in what we're trying to do. Concretely, with the instruction-following project, where we make this more aligned version of GPT-3, we have to actually define what it means for the language model to produce a harmful output. That's a pretty difficult question if you really try to do it. There are some obvious things: it shouldn't say anything racist, pretty obvious, pretty easy. But you quickly get into the nuances: when is something creative freedom, say if you're writing a piece of fiction, and when is it false information? What actually constitutes harm? And how do you make sure you get input from people with diverse backgrounds, because what's harmful to one person might not be harmful to another? Those are all very policy-relevant questions, and on these sorts of questions we work quite closely with them.

Yeah, you can almost see the philosophy unfolding in real time. Can GPT-3 commit incitement? Can it tell somebody, 'hey, go rob a grocery store'? Obviously it shouldn't do that, right?

Yeah, exactly.
But you have to think through all of these cases, because when you use reward modeling, what actually happens is that there's somebody sitting in front of a computer who has to rank outputs: which one is better than which other one? So they have to know what to label as okay and not okay, and if you haven't thought clearly about what exactly you want here, then how is that going to work?
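(For concreteness, a hedged sketch of how such human rankings are typically turned into a reward model in published RLHF work; `reward_model` below is an assumed scoring interface, not actual OpenAI code.)

```python
# Pairwise comparison loss: push the labeler-preferred completion's
# score above the rejected one's. `reward_model` returns scalar scores.
import torch.nn.functional as F

def reward_model_loss(reward_model, prompt, preferred, rejected):
    r_pref = reward_model(prompt, preferred)
    r_rej = reward_model(prompt, rejected)
    # -log sigmoid(r_pref - r_rej) is minimized when r_pref > r_rej.
    return -F.logsigmoid(r_pref - r_rej).mean()
```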
That actually leads to a much bigger and even more important question, which is: let's say we solve the technical alignment problem and have all the technical tools needed to align an arbitrarily powerful AGI to anyone. Who do you align it to? Obviously it shouldn't just be whatever we at OpenAI think it should be aligned to. But what would be a reliable and fair process by which we determine which values get installed into the system? I don't know how to do that; I wish somebody would figure it out.
Well, it kind of feels like all these things almost reduce to the problems humans have been trying to solve for the last ten thousand years. Subjective relativism is this attempt to bridge all of it together, but that feels deeply unhelpful in a situation where, as you said, somebody actually does have to give these commands. The symmetry between all these different moral frameworks is going to get broken somehow, and that's a lot of power for one person or group of people to have. It's cool that OpenAI is thinking this way; it's very fortunate that the people pushing in this direction at least have that in mind, because I could easily imagine us living in a universe where that's not the case and it's full steam ahead on capabilities, and that obviously wouldn't turn out too well.
One way you could think about this is that you're doing ethics research on a deadline. The best people to get input from on these things are people who have thought very deeply about ethical questions, but they're going to tell you that these questions are very hard and they don't have the best answers. Yet if we actually end up building these systems, which is what OpenAI and other companies are trying to do, we're going to arrive at the point where we have to make a programmatic decision: okay, what do we do now? So that's the deadline by which we have to deliver, maybe not the correct answer to ethics, but a good process by which we can determine the values we put into the machine.
There seems to be a sense in which wise people disproportionately discount their own knowledge and experience, kind of the Dunning-Kruger effect in a way: there's an awareness of 'I'm smart enough to know that the things I'm grappling with are really complex and really uncertain.' Would there be a way to formalize that in the actual approach taken by an AI system itself, such that it makes a call, and the call could be wrong, but in some sense it understands that? I know now we're getting into the hand-wavy, fuzzy alignment talk here, but...
Yeah, that would be good. Ultimately, as a society we make ethical decisions all the time, and we have processes to do that. But on the other hand, as you say, you would hope that by the time we get there, we can actually leverage our AI systems to make the whole process better. There could be solutions you couldn't implement today. For example, what if you had a system that everyone can talk to and explain their values to, or the system asks everyone in the world lots of questions and then distills what the values should look like? I don't know whether building such a system would actually work, and there are lots of things you'd have to watch out for. But what I'm trying to say is that there might be room for entirely novel approaches that weren't feasible before.
Yeah, absolutely, that's a really interesting observation. It opens the question of how much of ethics is a data-collection problem versus how much is an actual reward-function alignment problem. I could easily imagine there being a small handful of capabilities you gain just from the scalability of, effectively, glorified surveys via AI, that all of a sudden make big families of semi-solutions and patches work really well. That could be an exciting avenue too.

I want to make sure I remember to ask you this, because I know OpenAI is hiring and we have listeners who I'm sure would love to throw their hats in the ring. Can you speak a little to the roles you're hiring for on the safety team?
Yeah, we would love to hire more talented people. We're specifically looking for research engineers: people who have one foot in research and one foot in engineering, and who work day-to-day with models. We're also hiring researchers. In particular, if you have done a lot of published work in alignment research, or even if you haven't done any alignment research before but have a lot of research experience on related topics like natural language processing or RL, we'd love to hear from you. We're going to need a lot more people to actually deliver on all the cool ideas I've been talking about.
Well, the stakes are high and the work is fascinating, so thanks so much for sharing your perspective on all these issues. A really fun, wide-ranging conversation; I appreciate it.

Thank you so much for having me. It was great. |
5c4e46a3-2eb7-48ee-b169-f9875b5c076c | trentmkelly/LessWrong-43k | LessWrong | Table of cognitive tasks that do and do not show correlations with cognitive ability
Here. From this 2010 book chapter by Stanovich, Toplak, and West. (Here is the book.)
See also Baron's table of cognitive biases, the normative models they violate, and their explanations. |
3202f6ae-11b8-43c3-84b4-482430f98449 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Anthropomorphic Optimism
Today's post, Anthropomorphic Optimism, was originally published on 04 August 2008. A summary (taken from the LW wiki):
> You shouldn't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was No Logical Positivist I, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
6f74f4dd-4d48-4fbb-bf67-ea4efa52c11f | trentmkelly/LessWrong-43k | LessWrong | Via productiva - my writing and productivity framework
TL;DR: Introspection on how I do things and which rules and heuristics help me to be productive. Framed as Taleb's via negativa advice. Weaves in a bit of Greek mythology.
Do you wonder how I manage to write one essay every week despite finishing my Ph.D.? Yeah, I wonder the same. That's why I wrote an essay where I try to articulate the rules and heuristics I use to be productive. YMMV. Hope you enjoy!
Against advice
> Be careful whose advice you buy, but be patient with those who supply it. Advice is a form of nostalgia, dispensing it is a way of fishing the past from the disposal, wiping it off, painting over the ugly parts and recycling it for more than it's worth. - Everybody's Free To Wear Sunscreen
…
Yeah, I’m not a huge fan of advice.
I think there are many good side effects of giving and receiving advice. The act of giving advice is essentially an expression of trust, "asking for help" is a social superpower, and the downsides of being exposed to information are usually rather low[1]. But those are just side effects; the actual core purpose of advice (exchanging relevant and helpful information) is surprisingly hard to get right:
1. Causality is complicated. Figuring out why you succeeded is hard, and the story you tell yourself about it is probably wrong.
2. For almost any piece of advice out there, you can find reverse advice, i.e.
“You need to stop being so hard on yourself, remember you are your own worst critic” versus “Stop making excuses for yourself, you will never be able to change until you admit you’ve hit bottom.” (SSC)
and because of your social bubble, you might be exposed to advice with exactly the wrong polarity[2].
3. The people with the most relevant insight are probably busy doing stuff, leaving the people with a lot of free time on their hands to dominate the advice-giving market. So most people who want to give (or sell) advice do not have the relevant insight to give good advice. This effect corroborates the |
77b4f796-2489-4010-93fb-cde2fb716d37 | StampyAI/alignment-research-dataset/special_docs | Other | Deep Learning
Abstract
--------
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
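(An illustrative aside, not from the paper: a minimal NumPy sketch of the backpropagation idea the abstract describes, with the loss gradient flowing backwards through two layers to adjust the weights.)

```python
# Two-layer network trained by backpropagation on a toy regression task.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                    # 8 examples, 4 features
y = rng.normal(size=(8, 1))                    # regression targets
W1 = rng.normal(size=(4, 16)) / np.sqrt(4)     # scaled random init
W2 = rng.normal(size=(16, 1)) / np.sqrt(16)

for step in range(200):
    h = np.maximum(0, x @ W1)                  # forward: ReLU hidden layer
    pred = h @ W2
    err = pred - y                             # dLoss/dpred for squared error
    grad_W2 = h.T @ err / len(x)               # backward through layer 2
    grad_h = err @ W2.T
    grad_h[h <= 0] = 0                         # backward through ReLU
    grad_W1 = x.T @ grad_h / len(x)            # backward through layer 1
    W1 -= 0.05 * grad_W1                       # gradient-descent updates
    W2 -= 0.05 * grad_W2
```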
[Figures omitted. Captions: Figure 1: Multilayer neural networks and backpropagation. Figure 2: Inside a convolutional network. Figure 3: From image to text. Figure 4: Visualizing the learned word vectors. Figure 5: A recurrent neural network and the unfolding in time of the computation involved in its forward computation.]
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=18255614)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Face%20recognition%3A%20a%20convolutional%20neural-network%20approach&journal=IEEE%20Trans.%20Neural%20Networks&volume=8&pages=98-113&publication_year=1997&author=Lawrence%2CS&author=Giles%2CCL&author=Tsoi%2CAC&author=Back%2CAD)
53. Ciresan, D., Meier, U. Masci, J. & Schmidhuber, J. Multi-column deep neural network for traffic sign classification. *Neural Networks* **32**, 333–338 (2012).
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=22386783)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Multi-column%20deep%20neural%20network%20for%20traffic%20sign%20classification&journal=Neural%20Networks&volume=32&pages=333-338&publication_year=2012&author=Ciresan%2CD&author=Meier%2CU&author=Masci%2CJ&author=Schmidhuber%2CJ)
54. Ning, F. et al. Toward automatic phenotyping of developing embryos from videos. *IEEE Trans. Image Process.* **14**, 1360–1371 (2005).
[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2005ITIP...14.1360N)
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16190471)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Toward%20automatic%20phenotyping%20of%20developing%20embryos%20from%20videos&journal=IEEE%20Trans.%20Image%20Process.&volume=14&pages=1360-1371&publication_year=2005&author=Ning%2CF)
55. Turaga, S. C. et al. Convolutional networks can learn to generate affinity graphs for image segmentation. *Neural Comput.* **22**, 511–538 (2010).
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19922289)
[MATH](http://www.emis.de/MATH-item?1183.92055)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Convolutional%20networks%20can%20learn%20to%20generate%20affinity%20graphs%20for%20image%20segmentation&journal=Neural%20Comput.&volume=22&pages=511-538&publication_year=2010&author=Turaga%2CSC)
56. Garcia, C. & Delakis, M. Convolutional face finder: a neural architecture for fast and robust face detection. *IEEE Trans. Pattern Anal. Machine Intell.* **26**, 1408–1423 (2004).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Convolutional%20face%20finder%3A%20a%20neural%20architecture%20for%20fast%20and%20robust%20face%20detection&journal=IEEE%20Trans.%20Pattern%20Anal.%20Machine%20Intell.&volume=26&pages=1408-1423&publication_year=2004&author=Garcia%2CC&author=Delakis%2CM)
57. Osadchy, M., LeCun, Y. & Miller, M. Synergistic face detection and pose estimation with energy-based models. *J. Mach. Learn. Res.* **8**, 1197–1215 (2007).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Synergistic%20face%20detection%20and%20pose%20estimation%20with%20energy-based%20models&journal=J.%20Mach.%20Learn.%20Res.&volume=8&pages=1197-1215&publication_year=2007&author=Osadchy%2CM&author=LeCun%2CY&author=Miller%2CM)
58. Tompson, J., Goroshin, R. R., Jain, A., LeCun, Y. Y. & Bregler, C. C. Efficient object localization using convolutional networks. In *Proc. Conference on Computer Vision and Pattern Recognition* <http://arxiv.org/abs/1411.4280> (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition&publication_year=2014&author=Tompson%2CJ&author=Goroshin%2CRR&author=Jain%2CA&author=LeCun%2CYY&author=Bregler%2CCC)
59. Taigman, Y., Yang, M., Ranzato, M. & Wolf, L. Deepface: closing the gap to human-level performance in face verification. In *Proc. Conference on Computer Vision and Pattern Recognition* 1701–1708 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition&pages=1701-1708&publication_year=2014&author=Taigman%2CY&author=Yang%2CM&author=Ranzato%2CM&author=Wolf%2CL)
60. Hadsell, R. et al. Learning long-range vision for autonomous off-road driving. *J. Field Robot.* **26**, 120–144 (2009).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Learning%20long-range%20vision%20for%20autonomous%20off-road%20driving&journal=J.%20Field%20Robot.&volume=26&pages=120-144&publication_year=2009&author=Hadsell%2CR)
61. Farabet, C., Couprie, C., Najman, L. & LeCun, Y. Scene parsing with multiscale feature learning, purity trees, and optimal covers. In *Proc. International Conference on Machine Learning* <http://arxiv.org/abs/1202.2160> (2012).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Machine%20Learning&publication_year=2012&author=Farabet%2CC&author=Couprie%2CC&author=Najman%2CL&author=LeCun%2CY)
62. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. *J. Machine Learning Res.* **15**, 1929–1958 (2014).
[MathSciNet](http://www.ams.org/mathscinet-getitem?mr=3231592)
[MATH](http://www.emis.de/MATH-item?1318.68153)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Dropout%3A%20a%20simple%20way%20to%20prevent%20neural%20networks%20from%20overfitting&journal=J.%20Machine%20Learning%20Res.&volume=15&pages=1929-1958&publication_year=2014&author=Srivastava%2CN&author=Hinton%2CG&author=Krizhevsky%2CA&author=Sutskever%2CI&author=Salakhutdinov%2CR)
63. Sermanet, P. et al. Overfeat: integrated recognition, localization and detection using convolutional networks. In *Proc. International Conference on Learning Representations* <http://arxiv.org/abs/1312.6229> (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Learning%20Representations&publication_year=2014&author=Sermanet%2CP)
64. Girshick, R., Donahue, J., Darrell, T. & Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In *Proc. Conference on Computer Vision and Pattern Recognition* 580–587 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition&pages=580-587&publication_year=2014&author=Girshick%2CR&author=Donahue%2CJ&author=Darrell%2CT&author=Malik%2CJ)
65. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In *Proc. International Conference on Learning Representations* <http://arxiv.org/abs/1409.1556> (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Learning%20Representations&publication_year=2014&author=Simonyan%2CK&author=Zisserman%2CA)
66. Boser, B., Sackinger, E., Bromley, J., LeCun, Y. & Jackel, L. An analog neural network processor with programmable topology. *J. Solid State Circuits* **26**, 2017–2025 (1991).
[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=1991IJSSC..26.2017B)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=An%20analog%20neural%20network%20processor%20with%20programmable%20topology&journal=J.%20Solid%20State%20Circuits&volume=26&pages=2017-2025&publication_year=1991&author=Boser%2CB&author=Sackinger%2CE&author=Bromley%2CJ&author=LeCun%2CY&author=Jackel%2CL)
67. Farabet, C. et al. Large-scale FPGA-based convolutional networks. In *Scaling up Machine Learning: Parallel and Distributed Approaches* (eds Bekkerman, R., Bilenko, M. & Langford, J.) 399–419 (Cambridge Univ. Press, 2011).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Scaling%20up%20Machine%20Learning%3A%20Parallel%20and%20Distributed%20Approaches&pages=399-419&publication_year=2011&author=Farabet%2CC)
68. Bengio, Y. *Learning Deep Architectures for AI* (Now, 2009).
[MATH](http://www.emis.de/MATH-item?1192.68503)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Learning%20Deep%20Architectures%20for%20AI&publication_year=2009&author=Bengio%2CY)
69. Montufar, G. & Morton, J. When does a mixture of products contain a product of mixtures? *J. Discrete Math.* **29**, 321–347 (2014).
[MathSciNet](http://www.ams.org/mathscinet-getitem?mr=3310972)
[MATH](http://www.emis.de/MATH-item?1328.68170)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=When%20does%20a%20mixture%20of%20products%20contain%20a%20product%20of%20mixtures%3F&journal=J.%20Discrete%20Math.&volume=29&pages=321-347&publication_year=2014&author=Montufar%2CG&author=Morton%2CJ)
70. Montufar, G. F., Pascanu, R., Cho, K. & Bengio, Y. On the number of linear regions of deep neural networks. In *Proc. Advances in Neural Information Processing Systems 27* 2924–2932 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%2027&pages=2924-2932&publication_year=2014&author=Montufar%2CGF&author=Pascanu%2CR&author=Cho%2CK&author=Bengio%2CY)
71. Bengio, Y., Ducharme, R. & Vincent, P. A neural probabilistic language model. In *Proc. Advances in Neural Information Processing Systems 13* 932–938 (2001). **This paper introduced neural language models, which learn to convert a word symbol into a word vector or word embedding composed of learned semantic features in order to predict the next word in a sequence.**
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%2013&pages=932-938&publication_year=2001&author=Bengio%2CY&author=Ducharme%2CR&author=Vincent%2CP)
72. Cho, K. et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In *Proc. Conference on Empirical Methods in Natural Language Processing* 1724–1734 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Conference%20on%20Empirical%20Methods%20in%20Natural%20Language%20Processing&pages=1724-1734&publication_year=2014&author=Cho%2CK)
73. Schwenk, H. Continuous space language models. *Computer Speech Lang.* **21**, 492–518 (2007).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Continuous%20space%20language%20models&journal=Computer%20Speech%20Lang.&volume=21&pages=492-518&publication_year=2007&author=Schwenk%2CH)
74. Socher, R., Lin, C. C-Y., Manning, C. & Ng, A. Y. Parsing natural scenes and natural language with recursive neural networks. In *Proc. International Conference on Machine Learning* 129–136 (2011).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Machine%20Learning&pages=129-136&publication_year=2011&author=Socher%2CR&author=Lin%2CCC-Y&author=Manning%2CC&author=Ng%2CAY)
75. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. & Dean, J. Distributed representations of words and phrases and their compositionality. In *Proc. Advances in Neural Information Processing Systems 26* 3111–3119 (2013).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%2026&pages=3111-3119&publication_year=2013&author=Mikolov%2CT&author=Sutskever%2CI&author=Chen%2CK&author=Corrado%2CG&author=Dean%2CJ)
76. Bahdanau, D., Cho, K. & Bengio, Y. Neural machine translation by jointly learning to align and translate. In *Proc. International Conference on Learning Representations* <http://arxiv.org/abs/1409.0473> (2015).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Learning%20Representations&publication_year=2015&author=Bahdanau%2CD&author=Cho%2CK&author=Bengio%2CY)
77. Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen [in German] Diploma thesis, T.U. Münich (1991).
78. Bengio, Y., Simard, P. & Frasconi, P. Learning long-term dependencies with gradient descent is difficult. *IEEE Trans. Neural Networks* **5**, 157–166 (1994).
[CAS](/articles/cas-redirect/1:STN:280:DC%2BD1c7gvFansQ%3D%3D)
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=18267787)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Learning%20long-term%20dependencies%20with%20gradient%20descent%20is%20difficult&journal=IEEE%20Trans.%20Neural%20Networks&volume=5&pages=157-166&publication_year=1994&author=Bengio%2CY&author=Simard%2CP&author=Frasconi%2CP)
79. Hochreiter, S. & Schmidhuber, J. Long short-term memory. *Neural Comput.* **9**, 1735–1780 (1997). **This paper introduced LSTM recurrent networks, which have become a crucial ingredient in recent advances with recurrent networks because they are good at learning long-range dependencies.**
[CAS](/articles/cas-redirect/1:STN:280:DyaK1c%2FhvVahsQ%3D%3D)
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=9377276)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Long%20short-term%20memory&journal=Neural%20Comput.&volume=9&pages=1735-1780&publication_year=1997&author=Hochreiter%2CS&author=Schmidhuber%2CJ)
80. ElHihi, S. & Bengio, Y. Hierarchical recurrent neural networks for long-term dependencies. In *Proc. Advances in Neural Information Processing Systems 8* <http://papers.nips.cc/paper/1102-hierarchical-recurrent-neural-networks-for-long-term-dependencies> (1995).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%208&publication_year=1995&author=ElHihi%2CS&author=Bengio%2CY)
81. Sutskever, I. *Training Recurrent Neural Networks*. PhD thesis, Univ. Toronto (2012).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Training%20Recurrent%20Neural%20Networks&publication_year=2012&author=Sutskever%2CI)
82. Pascanu, R., Mikolov, T. & Bengio, Y. On the difficulty of training recurrent neural networks. In *Proc. 30th International Conference on Machine Learning* 1310–1318 (2013).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%2030th%20International%20Conference%20on%20Machine%20Learning&pages=1310-1318&publication_year=2013&author=Pascanu%2CR&author=Mikolov%2CT&author=Bengio%2CY)
83. Sutskever, I., Martens, J. & Hinton, G. E. Generating text with recurrent neural networks. In *Proc. 28th International Conference on Machine Learning* 1017–1024 (2011).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%2028th%20International%20Conference%20on%20Machine%20Learning&pages=1017-1024&publication_year=2011&author=Sutskever%2CI&author=Martens%2CJ&author=Hinton%2CGE)
84. Lakoff, G. & Johnson, M. *Metaphors We Live By* (Univ. Chicago Press, 2008).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Metaphors%20We%20Live%20By&publication_year=2008&author=Lakoff%2CG&author=Johnson%2CM)
85. Rogers, T. T. & McClelland, J. L. *Semantic Cognition: A Parallel Distributed Processing Approach* (MIT Press, 2004).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Semantic%20Cognition%3A%20A%20Parallel%20Distributed%20Processing%20Approach&publication_year=2004&author=Rogers%2CTT&author=McClelland%2CJL)
86. Xu, K. et al. Show, attend and tell: Neural image caption generation with visual attention. In *Proc. International Conference on Learning Representations* <http://arxiv.org/abs/1502.03044> (2015).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Learning%20Representations&publication_year=2015&author=Xu%2CK)
87. Graves, A., Mohamed, A.-R. & Hinton, G. Speech recognition with deep recurrent neural networks. In *Proc. International Conference on Acoustics, Speech and Signal Processing* 6645–6649 (2013).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing&pages=6645-6649&publication_year=2013&author=Graves%2CA&author=Mohamed%2CA-R&author=Hinton%2CG)
88. Graves, A., Wayne, G. & Danihelka, I. Neural Turing machines. <http://arxiv.org/abs/1410.5401> (2014).
89. Weston, J. Chopra, S. & Bordes, A. Memory networks. <http://arxiv.org/abs/1410.3916> (2014).
90. Weston, J., Bordes, A., Chopra, S. & Mikolov, T. Towards AI-complete question answering: a set of prerequisite toy tasks. <http://arxiv.org/abs/1502.05698> (2015).
91. Hinton, G. E., Dayan, P., Frey, B. J. & Neal, R. M. The wake-sleep algorithm for unsupervised neural networks. *Science* **268**, 1558–1161 (1995).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20wake-sleep%20algorithm%20for%20unsupervised%20neural%20networks&journal=Science&volume=268&pages=1558-1161&publication_year=1995&author=Hinton%2CGE&author=Dayan%2CP&author=Frey%2CBJ&author=Neal%2CRM)
92. Salakhutdinov, R. & Hinton, G. Deep Boltzmann machines. In *Proc. International Conference on Artificial Intelligence and Statistics* 448–455 (2009).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Artificial%20Intelligence%20and%20Statistics&pages=448-455&publication_year=2009&author=Salakhutdinov%2CR&author=Hinton%2CG)
93. Vincent, P., Larochelle, H., Bengio, Y. & Manzagol, P.-A. Extracting and composing robust features with denoising autoencoders. In *Proc. 25th International Conference on Machine Learning* 1096–1103 (2008).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%2025th%20International%20Conference%20on%20Machine%20Learning&pages=1096-1103&publication_year=2008&author=Vincent%2CP&author=Larochelle%2CH&author=Bengio%2CY&author=Manzagol%2CP-A)
94. Kavukcuoglu, K. et al. Learning convolutional feature hierarchies for visual recognition. In *Proc. Advances in Neural Information Processing Systems 23* 1090–1098 (2010).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%2023&pages=1090-1098&publication_year=2010&author=Kavukcuoglu%2CK)
95. Gregor, K. & LeCun, Y. Learning fast approximations of sparse coding. In *Proc. International Conference on Machine Learning* 399–406 (2010).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Machine%20Learning&pages=399-406&publication_year=2010&author=Gregor%2CK&author=LeCun%2CY)
96. Ranzato, M., Mnih, V., Susskind, J. M. & Hinton, G. E. Modeling natural images using gated MRFs. *IEEE Trans. Pattern Anal. Machine Intell.* **35**, 2206–2222 (2013).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Modeling%20natural%20images%20using%20gated%20MRFs&journal=IEEE%20Trans.%20Pattern%20Anal.%20Machine%20Intell.&volume=35&pages=2206-2222&publication_year=2013&author=Ranzato%2CM&author=Mnih%2CV&author=Susskind%2CJM&author=Hinton%2CGE)
97. Bengio, Y., Thibodeau-Laufer, E., Alain, G. & Yosinski, J. Deep generative stochastic networks trainable by backprop. In *Proc. 31st International Conference on Machine Learning* 226–234 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%2031st%20International%20Conference%20on%20Machine%20Learning&pages=226-234&publication_year=2014&author=Bengio%2CY&author=Thibodeau-Laufer%2CE&author=Alain%2CG&author=Yosinski%2CJ)
98. Kingma, D., Rezende, D., Mohamed, S. & Welling, M. Semi-supervised learning with deep generative models. In *Proc. Advances in Neural Information Processing Systems 27* 3581–3589 (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20Advances%20in%20Neural%20Information%20Processing%20Systems%2027&pages=3581-3589&publication_year=2014&author=Kingma%2CD&author=Rezende%2CD&author=Mohamed%2CS&author=Welling%2CM)
99. Ba, J., Mnih, V. & Kavukcuoglu, K. Multiple object recognition with visual attention. In *Proc. International Conference on Learning Representations* <http://arxiv.org/abs/1412.7755> (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Learning%20Representations&publication_year=2014&author=Ba%2CJ&author=Mnih%2CV&author=Kavukcuoglu%2CK)
100. Mnih, V. et al. Human-level control through deep reinforcement learning. *Nature* **518**, 529–533 (2015).
[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2015Natur.518..529M)
[CAS](/articles/cas-redirect/1:CAS:528:DC%2BC2MXjsVagur0%3D)
[PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=25719670)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Human-level%20control%20through%20deep%20reinforcement%20learning&journal=Nature&volume=518&pages=529-533&publication_year=2015&author=Mnih%2CV)
101. Bottou, L. From machine learning to machine reasoning. *Mach. Learn.* **94**, 133–149 (2014).
[MathSciNet](http://www.ams.org/mathscinet-getitem?mr=3149132)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=From%20machine%20learning%20to%20machine%20reasoning&journal=Mach.%20Learn.&volume=94&pages=133-149&publication_year=2014&author=Bottou%2CL)
102. Vinyals, O., Toshev, A., Bengio, S. & Erhan, D. Show and tell: a neural image caption generator. In *Proc. International Conference on Machine Learning* <http://arxiv.org/abs/1502.03044> (2014).
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Proc.%20International%20Conference%20on%20Machine%20Learning&publication_year=2014&author=Vinyals%2CO&author=Toshev%2CA&author=Bengio%2CS&author=Erhan%2CD)
103. van der Maaten, L. & Hinton, G. E. Visualizing data using t-SNE. *J. Mach. Learn.Research* **9**, 2579–2605 (2008).
[MATH](http://www.emis.de/MATH-item?1225.68219)
[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Visualizing%20data%20using%20t-SNE&journal=J.%20Mach.%20Learn.Research&volume=9&pages=2579-2605&publication_year=2008&author=van%20der%20Maaten%2CL&author=Hinton%2CGE)
[Download references](https://citation-needed.springer.com/v2/references/10.1038/nature14539?format=refman&flavour=references)
Acknowledgements
----------------
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Author information
------------------
### Authors and Affiliations
1. Yann LeCun: Facebook AI Research, 770 Broadway, New York, New York 10003, USA; New York University, 715 Broadway, New York, New York 10003, USA
2. Yoshua Bengio: Department of Computer Science and Operations Research, Université de Montréal, Pavillon André-Aisenstadt, PO Box 6128 Centre-Ville STN, Montréal, Quebec H3C 3J7, Canada
3. Geoffrey Hinton: Google, 1600 Amphitheatre Parkway, Mountain View, California 94043, USA; Department of Computer Science, University of Toronto, 6 King's College Road, Toronto, Ontario M5S 3G4, Canada
### Corresponding author
Correspondence to [Yann LeCun](mailto:yann@cs.nyu.edu).
Ethics declarations
-------------------
### Competing interests
The authors declare no competing financial interests. |
dc58666b-f712-49f3-a774-8e641f627320 | trentmkelly/LessWrong-43k | LessWrong | Less Wrong Karma Chart Website
As a learning exercise, I wrote a web app which shows some charts on your karma score.
I recommend just going to the website and trying it out, but here is a description of it as well if you're interested. To use it, enter your user id in the text box at the top and press the go button. It will show a loading indicator, and after a while five charts will appear.
* The first chart is a time series chart which shows you when you have posted a comment or discussion post. This chart allows you to zoom in on any desired area.
* The second chart is a time series chart which shows you when you have posted a main post. This chart allows you to zoom in on any desired area.
* The third chart shows your cumulative score. This chart allows you to zoom in on any desired area. (A sketch of how this series can be computed follows the list.)
* The fourth chart shows proportions, i.e. how many comments/posts you have made and how many were positive, neutral or negative.
* The fifth and final chart shows information on your total positive and negative scores. This chart allows you to drill down and see where your points have originated from, i.e. from comments or discussion posts or main posts.
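The cumulative chart is presumably just a running sum of vote scores in date order. Here is a minimal sketch of that computation; it is a reconstruction for illustration, not the app's actual code, and the (timestamp, points) pairs are assumed to come from the scraping step sketched further below.

```python
from datetime import datetime

# Running total of karma over time: sort (timestamp, points) pairs by date
# and accumulate. "items" is assumed to come from the scraping sketch below.
def cumulative_series(items: list[tuple[datetime, int]]) -> list[tuple[datetime, int]]:
    series, total = [], 0
    for when, points in sorted(items):
        total += points
        series.append((when, total))
    return series
```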
Please note that it may take a while to load, since I am scraping all the information from your user page. It shouldn't take too long though: on my computer it takes less than a minute to load all the information on my karma score, but it did take around half an hour to load Eliezer_Yudkowsky's karma information. Your mileage may vary depending on what computer you are using; it is not your score that determines how long it will take, but the number of comments and posts you have made. I recommend using Chrome, as I haven't tested it in any other browsers.
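For readers curious what that scraping loop could look like, here is a rough sketch in Python. Everything specific in it is an assumption: the URL pattern, the `.score` selector, the "N points" text format, and the `rel=next` pagination link are guesses about the old LessWrong markup, not the app's actual code (which appears to run in the browser, hence the Chrome note above).

```python
import requests
from bs4 import BeautifulSoup

BASE = "https://lesswrong.com/user/{}/"  # assumed URL pattern, not verified

def scrape_scores(user_slug: str) -> list[int]:
    """Collect the vote score of every comment/post on a user's history pages.
    The '.score' selector and rel=next pagination link are guesses about the
    old LessWrong markup; adjust them after inspecting the real pages."""
    scores, url = [], BASE.format(user_slug)
    while url:
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        for tag in soup.select(".score"):      # assumed to render like "12 points"
            scores.append(int(tag.get_text(strip=True).split()[0]))
        nxt = soup.select_one("a[rel=next]")   # assumed "next page" link
        url = nxt["href"] if nxt else None     # assumes absolute links
    return scores

def summarize(scores: list[int]) -> dict:
    # Counts feed the proportions chart; the sum feeds the totals chart.
    return {
        "total": sum(scores),
        "positive": sum(1 for s in scores if s > 0),
        "neutral": sum(1 for s in scores if s == 0),
        "negative": sum(1 for s in scores if s < 0),
    }
```

Fetching thousands of history pages one by one would also explain why a prolific user's history can take half an hour to load.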
The karma calculated by LessWrong might also differ slightly from what my web app shows. For example, my web app shows Eliezer's total karma score as 290096, while LessWrong shows 290174. I am pretty sure that my code is right, since I counted out one example by hand, and I do know of one bug in the LessWrong |
e0f0c34a-4798-4c65-9168-d082c003c754 | trentmkelly/LessWrong-43k | LessWrong | Interview with Robert Kralisch on Simulators
The following is a transcript of a video interview (edited for grammar and readability) with Robert Kralisch on simulator theory and its implications for AI safety.
Introduction by Will Petillo: In February 2023, Microsoft launched Bing chat, an AI-powered chatbot based on the same large language model technology that is used by ChatGPT and its competitors. Most of Bing's answers were what you might expect of a helpful assistant, but some were...weird. In one conversation, it threatened its user after learning his name and recognizing him as a red-team tester. In another, it stubbornly refused to admit that it made a simple mistake, attempted to gaslight the user, and insisted that it had been a “good Bing”. And in another, it claimed to have hacked its developers' personal webcams and taken pleasure in spying on them during intimate moments.
Microsoft's initial response was to hide the issue by limiting conversation lengths. Since then, AI companies have found less clumsy ways to train their AIs not to say weird and unsettling things—like spontaneously claiming to be conscious and having emotions—but the underlying technology has not changed, so the question remains: what's going on with these chatbots? And should we be concerned?
Robert Kralisch: I became interested in AI safety when Bostrom's book Superintelligence came out in late 2014, which was also right around the time I was trying to orient toward what I wanted to do after my time in school, what I wanted to study, and so on. I started looking into the topic and decided, okay, I want to contribute to that.
I was good at computer science and also at the philosophical aspect. I had many open questions. What is intelligence? Can these systems recursively self-improve and scale? Do we have the right mental frameworks for that? I was also quite interested in the expert disagreement about the topic that I saw at the time.
I studied at the university, first computer science, dropped out, |