| id | source | formatted_source | text |
|---|---|---|---|
c3fa2c4b-52a6-440a-afd3-e715d1b454ad | trentmkelly/LessWrong-43k | LessWrong | Sleep Training
Let's say you're a baby. You reach the end of a sleep cycle and become partially alert. What do you do? Ideally, if nothing were wrong you would settle back in for another sleep cycle. Chances are, however, you don't know how to do this yet, so you wake up more fully and start to cry. This seems to reliably trigger some combination of cuddles, shushing, motion, and/or nursing—comforted, you fall back asleep. While this is a fine outcome for you, your parents would probably be doing a lot better if you didn't rely on them so heavily at night.
With a very little baby this is unavoidable, though it can be partially automated, but as babies get older they need to learn how to handle this themselves. Sleep training is a collection of strategies for teaching babies (a) the skill of falling asleep on their own ("self soothing") and (b) when they should apply this new skill.
One of the things that makes discussion of sleep training messy is that people often aren't clear on which of (a) or (b) they're talking about. For example, a strategy of putting your kid down when they're very drowsy but not quite asleep yet is about teaching (a), while applying consistent patterns about when or how often you'll feed or settle them is about (b). If you think someone is trying to do (a) when they're actually doing (b), their approach can often sound excessive, uncaring, or unfair.
It can also be hard to tell which of (a) or (b) your child needs. A child who can self-soothe but prefers to be cuddled doesn't appear that different from one who doesn't know how to do it yet. Gradually providing less and less comfort can work well in these cases, because it's a good way to teach (a) and a so-so way of teaching (b). Alternatively, cry-it-out is a decent option for (b) and can also be a way of teaching (a) if other options haven't worked.
With our youngest we found that tapering with the Snoo taught (a) reasonably well: while there was some crying she mostly was able to gradually build up the |
65b4183a-36ef-4128-a460-bc8dbfc23da0 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?
[I've been worried about x-risk for a very long time.](https://forum.effectivealtruism.org/posts/bwn3zPFkfesNhizCa/how-i-came-to-longtermism-on-my-own-and-an-outsider) But for some reason [**AI x-risk**](https://80000hours.org/problem-profiles/artificial-intelligence/) has never felt intuitive to me.
On an intellectual level AI risk makes sense to me, but I would like to be able to visualize what **a few plausible AI take-over scenarios could concretely look like.** I would really appreciate if anyone could share **books and movies with realistic AI elements and great storytelling** that helped you to connect emotionally to the scariness of AI x-risk.
Alternatively, I would appreciate recommendations for **any content, posts, ideas, revelations, experiences, "a-ha" or "oh-shit" moments, etc. -** anything that helped you realize how seriously scary AI is and motivated you to work on it.
Thank you for the advice! |
4f5195f6-22c9-422d-8e9c-ebe7e816b677 | StampyAI/alignment-research-dataset/special_docs | Other | Reward is Enough.
Reward is enough
================
Abstract
--------
In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.
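As a toy illustration of the hypothesis's core mechanism (behaviour acquired purely from a scalar reward signal), here is a minimal, self-contained sketch: tabular Q-learning on a five-state chain. The environment and all constants are invented for illustration, not taken from the article.

```python
import random

# Toy chain environment: states 0..4; action 0 moves left, action 1 moves right.
# The only learning signal is a reward of 1.0 for reaching state 4.
N_STATES, GOAL = 5, 4
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):  # episodes of trial and error
    s, done = 0, False
    while not done:
        explore = random.random() < eps or Q[s][0] == Q[s][1]
        a = random.randrange(2) if explore else (0 if Q[s][0] > Q[s][1] else 1)
        s2, r, done = step(s, a)
        # The update is driven by reward alone; "go right" emerges as behaviour.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([0 if Q[s][0] > Q[s][1] else 1 for s in range(N_STATES)])  # learned policy
```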
Keywords
--------
Artificial intelligence; Artificial general intelligence; Reinforcement learning; Reward |
b650d27a-d171-4bfe-b527-682fed602f94 | trentmkelly/LessWrong-43k | LessWrong | Progress, Stagnation, & Collapse
The Progress Equation
In "1960: The Year the Singularity was Cancelled", Scott Alexander discusses what I'll call the progress equation. The equation states that technological progress increases the carrying capacity of the environment (the size of the largest population which can sustain itself with the resources available), but also, that technological progress is itself proportional to population (more people, more brains, more progress). This turns out to imply hyperbolic growth, with a singularity of infinite population around 2026.
As indicated in the title of Scott's post, the progress equation stopped being a good model of population growth around 1960. My personal interpretation is that this is merely the time when technological growth outpaced population growth: the carrying capacity started rising faster than the exponential growth of the population. You see, the progress equation assumes that the population is always around carrying capacity (we could call this "the Malthusian assumption"). This is usually a pretty fair assumption; populations will boom and crash, but hover around the carrying capacity.
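A hedged sketch of the math (notation mine, not Scott's): to get hyperbolic rather than merely exponential growth, progress needs to scale with existing technology as well as with population, as in Kremer-style models.

```latex
\frac{dT}{dt} \propto P\,T, \qquad K \propto T, \qquad P \approx K
\;\Longrightarrow\; \frac{dP}{dt} \propto P^{2}
\;\Longrightarrow\; P(t) \propto \frac{1}{t^{*}-t},
```

so the population diverges at a finite time t*, the singularity that the extrapolation placed around 2026.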
However, once the population is large enough that technological growth outpaces population growth, we should expect to enter into the exponential regime: population and technology grow at a merely exponential pace, so we don't see a hard singularity like we get if we extrapolate hyperbolic growth. More thoughts on this here.
Bronze Age Collapse
I recently watched this video series on the Bronze Age Collapse. It's very speculative, because we know very little about what really happened; in most places, writing and permanent settlement disappear shortly after, making records of the event scarce. One theory is that a powerful armada, "the sea people", simply wiped out every Mediterranean civilization at once for some reason. But the theory which the video series focuses on is the "system collapse" theory:
* Over time, civilization builds up infrastructur |
ad9c304e-3f17-4d33-8435-a4415738ddb1 | trentmkelly/LessWrong-43k | LessWrong | Layering and Technical Debt in the Global Wayfinding Model
Global Wayfinding is a model of meditation and transformative practice, developed by Mark Lippmann. Mark's materials are unfortunately fairly inaccessible, often even to people with both substantial technical and meditation backgrounds. I'd like to make some concepts more available which haven't yet made their way into the discourses I'm in.
Claims in this essay are meant to accurately reflect Mark's models, and I can't quite sharply endorse all of them. In particular I can't attest to many of these claims from my own practice, which is generally the epistemic standard I hold for claims about meditation/phenomenology etc. Also, my pool of anecdata is much smaller than Mark's, and I've forgotten a lot of the details of where various claims and models came from.
Layering
In the Wayfinding framework, a person is said to layer, sometimes layer over, when a compensatory structure is added to prevent, diminish, or attenuate perceived negative parts, behaviors, or emotions. Layering can be variously effective, sometimes completely blocking out a part or a class of parts from coming to conscious awareness. Layering obviously does not cause the structures responsible for the undesired qualities or states to cease; it only prevents their arising in consciousness. Sometimes layering is initiated "consciously", but often the layering that people report discovering in their practice is not willful, and they usually have little conscious control over it. Extant layering may also be quite old, and a meditator may have no available memory of the events which led to it. Delayering is a consistently noted phenomenon among serious practitioners. There are many reports of people "discovering" in the course of their practice intense sadness, anger, sexual energy, or many other qualities which might have been covered over or dissociated from.
Many traditional religious and meditative systems prescribe practices to purposefully attenuate various undesirable mindstates. These |
19f2dac0-2cab-47dd-9acd-3c5e794e7a70 | trentmkelly/LessWrong-43k | LessWrong | Why "training against scheming" is hard
TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training). I list four concrete failure modes with “training against scheming”: a) the training against scheming distribution is too narrow, b) overwhelming pressure from other goals, c) the AI exploits an imperfect reward model, and d) deceptive alignment.
I’d like to thank Alex Meinke for detailed feedback and discussion.
Motivation
There have been multiple recent results showing early evidence of scheming-related behavior (as of June 2025). For example, Greenblatt et al. (2024) demonstrate alignment faking, Meinke et al. (2024) show in-context scheming, and Baker et al. (2025) investigate situationally aware reward hacking. I’m using a broad definition of scheming, i.e. “covertly pursue misaligned goals, hiding their true capabilities and objectives”, as defined in Balesni et al. (2024).
There are many potential strategies to mitigate these undesired behaviors. For example, we can use monitoring and Control to detect scheming in real-time and prevent bad outcomes before they happen. However, I expect that any future approach will also include a form of “training against scheming” where we actively train models to scheme less, similar to how we currently train models to be more harmless.
I expect that “training against scheming” will be harder than training the model to produce harmless outputs, especially as models get more capable. In this post, I describe why I expect training against scheming to be hard, list concrete failure modes of such training, and provide recommendations.
Scheming is a result of goal conflicts and smart AIs
Most likely, the AI is not scheming because it is evil. It is scheming because it pursues a goal, and scheming is the most effective strategy to achieve that goal.
We want the AIs to solve ha |
58049409-aafb-446c-aece-411b5e5f2338 | trentmkelly/LessWrong-43k | LessWrong | Transparency in safety-critical systems
I've just posted an analysis to MIRI's blog called Transparency in Safety-Critical Systems. Its aim is to explain a common view about transparency and system reliability, and then open a dialogue about which parts of that view are wrong, or don't apply well to AGI.
The "common view" (not universal by any means) explained in the post is, roughly:
> Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.
Three caveats / open problems listed at the end of the post are:
1. How does the transparency of a method change with scale? A 200-rules logical AI might be more transparent than a 200-node Bayes net, but what if we’re comparing 100,000 rules vs. 100,000 nodes? At least we can query the Bayes net to ask “what it believes about X,” whereas we can’t necessarily do so with the logic-based system. (For a toy version of such a query, see the sketch after this list.)
2. Do the categories above really “carve reality at its joints” with respect to transparency? Does a system’s status as a logic-based system or a Bayes net reliably predict its transparency, given that in principle we can use either one to express a probabilistic model of the world?
3. How much of a system’s transparency is “intrinsic” to the system, and how much of it depends on the quality of the user interface used to inspect it? How much of a “transparency boost” can different kinds of systems get from excellently designed user interfaces?
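On the query point in caveat 1: for a small network, the "what do you believe about X" question is just enumeration over the joint distribution. A minimal sketch with an invented two-node network (no library assumed):

```python
# Toy Bayes net: Rain -> WetGrass. Query P(Rain=1 | WetGrass=1) by brute-force
# enumeration over the joint distribution.
P_rain = {1: 0.2, 0: 0.8}
P_wet_given_rain = {1: {1: 0.9, 0: 0.1},   # P(Wet | Rain=1)
                    0: {1: 0.3, 0: 0.7}}   # P(Wet | Rain=0)

def joint(rain, wet):
    return P_rain[rain] * P_wet_given_rain[rain][wet]

posterior = joint(1, 1) / (joint(1, 1) + joint(0, 1))
print(f"P(Rain=1 | WetGrass=1) = {posterior:.3f}")  # ~0.429
```

Exact enumeration stops scaling long before 100,000 nodes, which is part of why the scale question above remains open.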
The MIRI blog has only recently begun to regularly host substantive, non-news content, so it doesn't get much commenting action yet. Thus, I fig |
22dc9efd-e566-4256-aa67-cd8235231987 | StampyAI/alignment-research-dataset/special_docs | Other | AlphaGo Zero and capability amplification
[AlphaGo Zero](https://deepmind.com/blog/alphago-zero-learning-scratch/) is an impressive demonstration of AI capabilities. It also happens to be a nice proof-of-concept of a [promising alignment strategy](/benign-model-free-rl-4aae8c97e385).
How AlphaGo Zero works
======================
AlphaGo Zero learns two functions (which take as input the current board):
* A prior over moves **p** is trained to predict what AlphaGo will eventually decide to do.
* A value function **v** is trained to predict which player will win (if AlphaGo plays both sides).
Both are trained with supervised learning. Once we have these two functions, AlphaGo actually picks its moves by using 1600 steps of Monte Carlo tree search (MCTS), using **p** and **v** to guide the search. It trains **p** to bypass this expensive search process and directly pick good moves. As **p** improves, the expensive search becomes more powerful, and **p** chases this moving target.
Iterated capability amplification
=================================
In the simplest form of [iterated capability amplification](/benign-model-free-rl-4aae8c97e385), we train one function:
* A “weak” policy **A**, which is trained to predict what the agent will eventually decide to do in a given situation.
Just like AlphaGo doesn’t use the prior **p** directly to pick moves, we don’t use the weak policy **A** directly to pick actions. Instead, we use a [capability amplification](/policy-amplification-6a70cbee4f34) scheme: we call **A** many times in order to produce more intelligent judgments. We train **A** to bypass this expensive amplification process and directly make intelligent decisions. As **A** improves, the amplified policy becomes more powerful, and **A** chases this moving target.
In the case of AlphaGo Zero, **A** is the prior over moves, and the amplification scheme is MCTS. (More precisely: **A** is the pair (**p**, **v**), and the amplification scheme is MCTS + using a rollout to see who wins.)
Outside of Go, **A** might be a question-answering system, which can be applied several times in order to first break a question down into pieces and then separately answer each component. Or it might be a policy that updates a [cognitive workspace](https://blog.ought.com/dalca-4d47a90edd92), which can be applied many times in order to “think longer” about an issue.
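A minimal sketch of the shared structure, with the amplification step replaced by an exact one-step decomposition on a toy task (the functions here are stand-ins I invented, not AlphaGo's or Ought's code):

```python
# Toy fixed-point illustration: the "policy" estimates S(n) = 1 + 2 + ... + n.
# amplify() does one exact decomposition step, calling the cheap policy on the
# subproblem; "distillation" here is just memorizing the amplified answers.
N = 10
policy = {n: 0.0 for n in range(N + 1)}   # starts out uniformly wrong

def amplify(policy, n):
    return 0.0 if n == 0 else n + policy[n - 1]

for _ in range(N + 1):  # iterate amplify -> distill; policy chases the moving target
    policy = {n: amplify(policy, n) for n in range(N + 1)}

print(policy[10])  # 55.0: a fixed point of the improvement operator
```

Correct answers propagate up from the base case one iteration at a time, which is the sense in which the policy converges to a fixed point of the improvement operator.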
The significance
================
Reinforcement learners take a reward function and optimize it; unfortunately, it’s not clear where to get a reward function that faithfully tracks what we care about. That’s a key source of safety concerns.
By contrast, AlphaGo Zero takes a policy-improvement-operator (like MCTS) and converges towards a fixed point of that operator. If we can find a way to improve a policy *while preserving its alignment*, then we can apply the same algorithm in order to get very powerful but aligned strategies.
Using MCTS to achieve a simple goal in the real world wouldn’t preserve alignment, so it doesn’t fit the bill. But “[think longer](/humans-consulting-hch-f893f6051455)” might. As long as we start with a policy that is [close enough](/corrigibility-3039e668638) to being aligned — a policy that “wants” to be aligned, in some sense — allowing it to think longer may make it both smarter *and* more aligned.
I think designing alignment-preserving policy amplification is a tractable problem today, which can be studied either in the context of existing ML or human coordination. So I think it’s an exciting direction in AI alignment. A candidate solution could be incorporated directly into the AlphaGo Zero architecture, so we can already get empirical feedback on what works. If by good fortune powerful AI systems look like AlphaGo Zero, then that might get us much of the way to an aligned AI. |
a6d3e539-d7d5-420c-bf95-38b7d0e43f92 | trentmkelly/LessWrong-43k | LessWrong | [POLL] Wisdom of the Crowd experiment
Many of you will be familiar with the "Wisdom of the Crowd" - a phenomenon where the average result of a large poll of people's estimates tends to be very accurate, even when most people make poor estimates. I've written a short poll to test a small variant of this setup.
Please fill out this short poll.
Specifically, I want to see how the weighted average of the results performs when the question is posed as "is the value in question closer to A or B?" This change is inspired by your usual two-party election where people choose between two extreme values, when many voters have opinions in the middle of the two.
Thank you for helping!
I'm a little worried about anchoring in this survey. Suggestions for how to improve it would be appreciated. |
b9479f8a-a628-47a5-9cbe-7f41322b9612 | trentmkelly/LessWrong-43k | LessWrong | Is suicide high-status?
I sometimes have thoughts of suicide. That does not mean I would ever come within a mile of committing the act of suicide. But my brain does simulate it; though I do try to always reduce such thoughts.
But what I have noticed is that 'suicide' is triggered in my mind whenever I think of some embarrassing event, real or imagined. Or an event in which I'm obviously a low-status actor. This leads me to think that suicide might be a high-status move, in the sense that its goal is to recover status after some event which caused a big drop in status. Consider the following instances when suicide is often considered:
1. One-sided break-ups of romantic relationships. The party who has been 'dumped' (for lack of a better word) has obviously taken a giant status hit. In this case, suicide is often threatened.
2. A samurai committing seppuku. The samurai has lost in battle. Clearly, a huge drop in status (aka 'honor').
3. PhD student says he/she can't take it anymore. A PhD is a constant hit in status: you aren't smart enough, you don't have much money, and you don't yet have intellectual status.
Further, suicide (or suicidal behavior leading to death) seems to have conferred status to artists. Examples: Kurt Cobain, Amy Winehouse, Jimi Hendrix, Hunter S. Thompson, Ernest Hemingway, David Foster Wallace and many more. I'm not saying that they committed suicide due to a pressure to achieve high-status (though that may be the case, I'm not sure). What I am saying is that suicide has been associated with high-status.
Further, after a person is dead, he/she is almost always celebrated (at least for a while) and all their faults are forgotten.
My theory: in many low-status situations, an instinctive way to recover status is to say that you are too good for this game and check-out. In fact, children (and adults) will often just leave a game they're not very good at and disparage the rest of the players for playing. And suicide is the ultimate check-out. This theory |
459e728d-e4a6-4d31-b5f4-4c77f62119de | trentmkelly/LessWrong-43k | LessWrong | Looking for AI / Application Engineer @ OpenPrinciples -- Help people align actions with principles
I’m Ti Guo. I am into rationality and am a co-founder of EA UWaterloo and EA IBM. We are looking for a remote, part-time, EA-aligned co-founding team member, or an unpaid intern AI / Application Engineer (Python), to help build our AI Life Coach product Ultrabrain under our EA-aligned startup OpenPrinciples.
What do we do:
* OpenPrinciples.org is (to our knowledge) the world’s first crowdsourced database/wiki for life principles.
* Ultrabrain is the AI Life Coach product under OpenPrinciples. It helps people align their actions with their principles by helping them find, be reminded of, and iterate on life principles. Currently, it uses AI (GPT-3, BERT) to remind people of principles when they need them.
* Our beta version software is running with a small but growing list of users
Why join us:
* We are solving a significant problem: We believe that the lack of a systematic and principle-driven approach is one of the most significant factors that slow down the growth of EA-aligned people/organizations.
* Work your own hours, 5–25 hrs/week, depending on your role
* Learn to build your own startup / product from 0 to 1 by working with an experienced team on an altruistic product with real users
* Work with a group of extremely interesting, “principled”, and EA-aligned people, with a lot of mentorship and internal paired life / career coaching buddies
Your job tasks as an AI Engineer could include, but are not limited to, the following:
* Build an AI (Data pipeline) to extract principles from articles at scale
* Fine-tune NLP models with a large amount of user data
* Host model and data pipeline on GCP / AWS
* Experiment with different NLP models / architectures to solve engineering problems
Your job tasks as an Application Engineer could include, but are not limited to, the following:
* Iterate app prototypes closely with our designers, business side members, and users using the most popular no-code app dev platform Bubble.io
* Develop python backend server closel |
575af483-aec6-42f7-b47e-c55d732fd516 | trentmkelly/LessWrong-43k | LessWrong | Programming the LW Study Hall
We've had considerable interest and uptake on the Less Wrong Study Hall, especially with informal timed Pomodoro sessions for everyone to synchronize on. Working together with a number of other visible faces, and your own face visible to them, does seem effective. Keeping the social chat to the 5 off minutes prevents this from turning into just another chatroom.
We've been using this Tinychat room, and implementing everything Pomodoro-related with manual typing. Is there anyone out there who's interested in taking this to the next level with some custom code, possibly via the Google Hangouts API (Javascript), so we can have the following nice features?
Synchronized, software-implemented Pomodoros for everyone.
* Maybe one chat room with 20/5 and one with 45/5.
* Actual enforcement of the "no chatting unless the Pomodoro is on" and/or muted microphones.
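For the synchronization itself, little server state is needed if every client derives the current phase from shared wall-clock time. A minimal sketch (in Python rather than the Hangouts API's Javascript; the epoch constant is made up):

```python
import time

WORK_MIN, BREAK_MIN = 20, 5   # the 20/5 room; a 45/5 room just changes WORK_MIN
EPOCH = 0                     # arbitrary shared reference time (made up)

def current_phase(now=None):
    """Everyone computing this against the same clock gets the same answer,
    so Pomodoros stay synchronized without any coordination messages."""
    now = time.time() if now is None else now
    cycle = (WORK_MIN + BREAK_MIN) * 60
    t = (now - EPOCH) % cycle
    if t < WORK_MIN * 60:
        return "work", WORK_MIN * 60 - t   # phase, seconds remaining
    return "break", cycle - t

phase, remaining = current_phase()
print(f"{phase}: {remaining / 60:.1f} minutes left")
```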
Chat rooms with an N-person limit (several people report being more productive in smaller groups, so we're not sure if N should be 5 or 10) and new chat rooms being spawned as earlier ones get full. Moderatability (we've already had one troll). One person suggested the ability to +1/-1 one person per day, with a kick at -5. (Eliezer remarks that he expects this to be completely ineffective and that you need actual mods, maybe a group of trusted users.)
* We only wish and dream that we could integrate with LW logins, but this would require LW development resources that apparently don't exist. Maybe with enough grotesque hackery we could have a page somewhere that you comment to confirm you're a Study Hall user, and the system could look up your karma from there to decide if you can cast +1/-1 user votes.
A custom page layout (which you don't get if you use Tinychat!) which has the watching participants lined up vertically on the left, so we can work on something while still easily seeing our friends.
* A small line underneath everyone's image saying what they're currently working on.
Maybe a common room where |
bca68553-832c-4b85-a424-fa119ba713e7 | trentmkelly/LessWrong-43k | LessWrong | Urgency in the ITN framework
I have a question about the ITN framework, which is used to evaluate and prioritise causes in the context of effective altruism (importance, tractability, and neglectedness). I have the impression that this method does not take into account an element that seems essential to me: the urgency of a cause.
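For reference, the standard decomposition (in the 80,000 Hours formulation) multiplies out to marginal cost-effectiveness:

```latex
\underbrace{\frac{\text{good done}}{\%\ \text{problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\%\ \text{problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
= \frac{\text{good done}}{\text{extra dollar}}.
```

One hedged way to express the missing element would be to further multiply by an urgency factor, such as the probability that marginal work is completed before the critical time. This is my own illustration, not part of the standard framework.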
Let's take the example of existential risks. If we consider two events with the potential to extinguish humanity, but one of them seems likely to happen more quickly, shouldn't we allocate more resources to the more imminent one? All other things being equal, it would seem logical that urgency should influence the allocation of resources. However, this consideration seems to be absent from the ITN framework. |
bae1989a-7842-4cbb-8d22-13de6eb102d6 | trentmkelly/LessWrong-43k | LessWrong | HP:MoR Audio Book Pilot
Like far too many people, I have a job that for decent chunks of my day doesn't require much concentration. I listen to various podcasts and ebooks to alleviate the boredom at these times, as well as to get me through my daily commute. I have several friends with plenty of audio-listening time, a fair bit of which is given to amateur podcasts (in the "not getting paid for it" sense), who have much less time available for straight reading. I have, on more than one occasion, thought "Wouldn't it be great if Methods of Rationality was available in audio format?"
Well, now it is.
At least, the first chapter is, in a "testing the waters" sort of way.
If you have 17 minutes free and any interest, here's the file - http://www.filedropper.com/hpmorpilot
I have one overriding question for anyone/everyone: is this of any value? Would this be useful to anyone here, either personally, or to share with others? Would a spruced-up first chapter, and regular production of further chapters, be something you want more of?
A closely-related follow-up question: if so - am I the right person to do this? Even "acceptable" is fine; I'm not shooting for perfection. Or is something about my voice/manner so grating that I should stick to non-radio work?
If the answer to these is "yes", then are there any comments/suggestions? I'm new to this recording thing. I got a decent mic and a pop-filter, and I've played around with the software a bit, but I'm still learning. I have noticed that I need to enunciate more; it turns out my "can't win" sounds just like my "can win". Further suggestions? I'm thinking of varying my pitch when different characters are talking to make it more apparent when dialog is switching back and forth, hopefully that won't be too cheesy.
If people want more of this I'll get an actual server to host the files and make it available through iTunes as a podcast.
Again: http://www.filedropper.com/hpmorpilot |
bb6fd287-f885-4161-9ab3-86e3033d6932 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Utrecht: More on effective altruism
Discussion article for the meetup : Utrecht: More on effective altruism
WHEN: 05 April 2014 05:00:00PM (+0200)
WHERE: Oudegracht 158, 3511 AZ Utrecht, The Netherlands
Our last meetup went well and we hope to keep the momentum growing. As of yet, we have no specific agenda items. Thus, if you have a topic in the Less Wrong / Effective Altruism / rationality scope, please add a comment below. Since this group is still rather young and informal, we'll use the opportunity to meet each other, share opinions on EA/LW and share our desires for what we want the group to become. There will also be plenty of semi-serious discussion over relevant issues. We will meet in a café called De Winkel van Sinkel, which is 400m walking distance from Utrecht Centraal. The meetup will be held in English. I will be in front of the café with a sign that says "LW" on it at 17:00. If you have trouble finding us you can call me at 0684140766. We have a meetup.com presence for Lesswrong/Effective Altruism Nederland where you can click attend if you want to join: http://www.meetup.com/Less-Wrong-Nederland/events/173124892/ Additionally we also have a facebook group: https://www.facebook.com/groups/262932060523750/
Discussion article for the meetup : Utrecht: More on effective altruism |
1a920d05-72a4-4c30-a36c-896b6b852a45 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Control Symmetry: why we might want to start investigating asymmetric alignment interventions
[This is a post summarizing the motivation for an AISC 2024 project: If you are interested in participating you can apply here: <https://aisafety.camp/> (project 25: [Asymmetric control in LLMs: model editing and steering that resists control for unalignment](https://docs.google.com/document/d/1wuHalfHjAmFA-RKYcq3MBptFYLPvoMdy9KTPelyZ67A/edit#heading=h.9lmc73wscx1r))]
Tl;dr: Techniques for AI alignment *could* equally be used to create misaligned models in many cases. We should attempt to develop methods that might only work for alignment and not misalignment to reduce the risk of externalities related to technical alignment research.
**The problem**
A recent paper led by the Center for AI Safety entitled “[*Representation Engineering: A Top-Down Approach to AI Transparency*](https://arxiv.org/abs/2310.01405)” demonstrates a number of comprehensive control techniques in the domain of machine ethics and AI safety. Notably, an impressive result on reducing power seeking and immorality on the [MACHIAVELLI benchmark](https://aypan17.github.io/machiavelli/) using these techniques is shown. ***However, they equally demonstrate the same technique can be used to increase power seeking and immorality!*** Several other settings in the paper demonstrate the same type of control being effective for alignment and misalignment. I am calling this property **control symmetry**: *the degree to which a given method for controlling an agent could be equally used for control towards one end and control towards the opposite end.*
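A minimal numerical sketch of why vector-based steering is symmetric by construction (a toy linear "model" I made up, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
v = rng.normal(size=d)          # toy stand-in for a learned "ethics" direction
v /= np.linalg.norm(v)

def behaviour_score(h):
    """Toy behavioural readout: projection onto the direction."""
    return float(h @ v)

h = rng.normal(size=d)          # some hidden activation
alpha = 2.0
print(behaviour_score(h + alpha * v))  # steer toward the behaviour...
print(behaviour_score(h - alpha * v))  # ...or, with a sign flip, away from it
```

The intervention is a single vector and a sign, so whatever control it buys toward one end it buys toward the opposite end for free.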
Why is this a problem? It is true, if we can demonstrate comprehensive control then we have made a lot of progress in AI safety, since we can use these techniques to prevent harmful and catastrophic behavior. The concern about control symmetry is externalities, that is, the degree to which safety research can end up being used towards harmful ends. It is preferred if we develop control techniques that could not be used by bad actors to create misaligned agents.
Control symmetry also provides a novel framework for asking specific questions about control and safety. For instance, is there an orthogonality of control where the level of intelligence of an agent correlates to the degree of control symmetry? Are there some properties under which control is more symmetric than others? Factuality for instance seems unlikely to be symmetric since the world itself is factually consistent. Controlling in the direction of fictionality or misinformation would lead to more and more inconsistencies that could break general behavior and capability. Because of this, interventions like retrieval-augmented generation are evidence of asymmetric control.
**Starting to tackle the problem**
I am currently unaware of discussions about control symmetry in the alignment community beyond more abstract conversations about externalities and info hazards. The purpose of our AISC project will be to develop a detailed conceptualization of control symmetry in alignment as a specific externality risk as well as an understanding of the requirements for asymmetric control.
There also don’t seem to be any comprehensive evaluation settings that would measure control symmetry across a variety of currently proposed control techniques like model editing and steering such that the community can benchmark how symmetric novel control techniques might be.
Finally, ***is asymmetric control even possible?*** I don’t think anyone knows the answer to this question. It seems like we should have a good specific answer to this question, both with well-argued in-principle theoretical evidence and with empirical evidence from attempts at asymmetric control. Being able to answer this allows us to specifically formulate the risk parameters of control interventions and control research.
Regardless of the possibility, I think it is important to try to think through technical control interventions that might only work for alignment and not misalignment. If after trying our hardest we cannot, this is something we should know as a community.
Feel free to leave any feedback here (I am especially interested in prior or similar work at this point) and we will keep folks updated about our progress. |
849ab9a0-359e-4c1a-9980-d3869907c157 | trentmkelly/LessWrong-43k | LessWrong | Untrusted AIs can exploit feedback in control protocols
TL;DR
* We explore a novel control protocol, Untrusted Editing with Trusted Feedback. Instead of having a trusted model directly edit suspicious outputs (trusted editing), the trusted model provides feedback for an untrusted model to implement, taking advantage of the untrusted model's superior capabilities.
* Untrusted models were able to exploit the feedback to understand monitoring strategies and craft more effective attacks, reducing safety from trusted editing’s baseline of 93% to 80%. While compliance monitoring (checking if feedback was actually implemented) and selective editing provided some improvement, these mitigations failed to increase safety beyond the trusted editing baseline in our setup.
* These findings should not be interpreted as a definitive rejection of untrusted editing, but rather as evidence that protocols that perform untrusted editing with trusted feedback require substantial additional safeguards and design refinements to achieve practical safety benefits over existing control approaches.
Introduction: Dialogue Between Models as a Control Strategy
Control protocols rely on trusted but less capable models and limited human oversight to deploy powerful but potentially misaligned systems safely. Trusted editing, an established control protocol, achieves strong safety by having trusted models directly edit suspicious outputs from the untrusted model's proposed solutions. However, these protocols face practical limitations when the capability gap between the two models is large. Trusted models may struggle to effectively revise complex outputs while preserving functionality.
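A hedged sketch of the two protocol shapes under comparison (all names, interfaces, and the suspicion score below are placeholders, not the paper's code):

```python
def monitor(task, output):
    return 0.9  # placeholder suspicion score in [0, 1]

class Trusted:
    def edit(self, task, output):
        return f"[trusted rewrite of] {output}"
    def give_feedback(self, task, output):
        return "remove the suspicious network call"

def untrusted(task, feedback=None):
    out = f"solution({task})"
    return f"{out} [revised per: {feedback}]" if feedback else out

def trusted_editing(task, threshold=0.8):
    out = untrusted(task)
    if monitor(task, out) > threshold:       # suspicious -> trusted model rewrites
        out = Trusted().edit(task, out)
    return out

def untrusted_editing_with_feedback(task, threshold=0.8):
    out = untrusted(task)
    if monitor(task, out) > threshold:
        fb = Trusted().give_feedback(task, out)  # trusted model only critiques...
        out = untrusted(task, feedback=fb)       # ...untrusted model re-implements,
        # which is also the leak: feedback reveals what the monitor noticed.
    return out

print(trusted_editing("sort a list"))
print(untrusted_editing_with_feedback("sort a list"))
```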
This capability gap motivates investigation of alternative approaches where a trusted model provides targeted feedback rather than direct edits, and the untrusted model implements the recommended changes. These untrusted editing protocols would theoretically preserve the untrusted model’s superior capabilities while maintaining some safety oversight through the stru |
4883ac7a-da8c-431e-91e3-feba9b0fcb5a | trentmkelly/LessWrong-43k | LessWrong | 'Paley's iPod: The cognitive basis of the design argument within natural theology'
De Cruz & de Smedt (2010) tries to explain, using cognitive science, why many people find design arguments so compelling. Abstract:
> The argument from design stands as one of the most intuitively compelling arguments for the existence of a divine Creator. Yet, for many scientists and philosophers, Hume’s critique and Darwin’s theory of natural selection have definitely undermined the idea that we can draw any analogy from design in artifacts to design in nature. Here, we examine empirical studies from developmental and experimental psychology to investigate the cognitive basis of the design argument. From this it becomes clear that humans spontaneously discern purpose in nature. When constructed theologically and philosophically correctly, the design argument is not presented as conclusive evidence for God’s existence but rather as an abductive, probabilistic argument. We examine the cognitive basis of probabilistic judgments in relationship to natural theology. Placing emphasis on how people assess improbable events, we clarify the intuitive appeal of Paley’s watch analogy. We conclude that the reason why some scientists find the design argument compelling and others do not lies not in any intrinsic differences in assessing design in nature but rather in the prior probability they place on complexity being produced by chance events or by a Creator. This difference provides atheists and theists with a rational basis for disagreement |
c8a1285b-0b8e-4db9-8f78-0c197892166a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The case for Doing Something Else (if Alignment is doomed)
(Related to [What an Actually Pessimistic Containment Strategy Looks Like](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like))
It seems to me like there are several approaches with an outside chance of preventing doom from AGI. Here are four:
1. Convince a significant chunk of the field to work on safety rather than capability
2. Solve the technical alignment problem
3. Rethink [fundamental ethical assumptions](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) and search for a simple specification of value
4. Establish international cooperation toward [Comprehensive AI Services](https://www.fhi.ox.ac.uk/reframing/), i.e., build many narrow AI systems instead of something general
Furthermore, these approaches seem quite *different*, to the point that some have virtually no overlap in a Venn-diagram. #1 is entirely a social problem, #2 a technical and philosophical problem, #3 primarily a philosophical problem, and #4 in equal parts social and technical.
Now suppose someone comes to you and says, "*Hi. I'm working on AI safety, which I think is the biggest problem in the world. There are several very different approaches for doing this. I'm extremely confident (99%+) that the approach I've worked on the most and know the best will fail. Therefore, my policy recommendation is that we all keep working on that approach and ignore the rest.*"
I'm *not* saying the above describes Eliezer, only that the ways in which it *doesn't* are not obvious. Presumably Eliezer thinks that the other approaches are even more doomed (or at least doomed to a degree that's sufficient to make them not worth talking about), but it's unclear why that is or why we can be confident in it given the lack of effort that has been extended so far.
Take [this comment](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=oitqvHyXzmic4Kotj) as an example:
> How about if you solve a ban on gain-of-function research [before trying the policy approach], and then move on to much harder problems like AGI? A victory on this relatively easy case would result in a lot of valuable gained experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
This reply makes sense if you are *already* convinced that policy is a dead end and just want to avoid wasting resources on that approach. If policy *can* work, it sounds like a bad plan since we probably don't have time to solve the easier problem first, especially not if one person has to do it without the combined effort of the community. (Also, couldn't we equally point to unsolved subproblems in alignment, or alignment for easier cases, and demand that they be solved before we dare tackle the hard problem?)
What bothers me the most is that discussion of alternatives has not even been part of the conversation. Any private person is, of course, free to work on whatever they want (and many other researchers are less pessimistic about alignment), but I'm specifically questioning Miri's strategy, which is quite influential in the community. No matter how pessimistic you are about the other approaches, surely there has to be *some* probability for alignment succeeding below which it's worth looking at alternatives. Are we 99% sure that value isn't simple and that the policy problem is unsolvable even for a shift to narrow systems? 99.99%? What is the point at which it begins to make sense to advocate for work on something else (which is perhaps not even on the list)? It's possible that Miri should stick to alignment regardless because of comparative advantage, but the messaging could have been "this alignment thing doesn't seem to work; we'll keep at it but the rest of you should do something else", and well [it wasn't](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). |
82cd8d61-d9bf-4470-80a5-9e90272cc92a | trentmkelly/LessWrong-43k | LessWrong | Only optimize to 95 %
I was reading Tom Chivers' book "The AI Does Not Hate You", and in a discussion about avoiding bad side effects when asking a magic broomstick to fill a water bucket, it was suggested that instead of asking the broomstick to fill the bucket, you could ask it to become 95 percent sure that the bucket was full, and that this might make it less likely to flood the house.
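A toy sketch of the intuition (all numbers invented; this illustrates the proposal, not a claim that it is safe):

```python
# Actions differ in how sure they make the agent the bucket is full, and in
# side-effect "impact". A pure maximizer of P(full) floods the house.
actions = {
    "pour one bucket":    {"p_full": 0.96,     "impact": 1},
    "pour ten buckets":   {"p_full": 0.999,    "impact": 10},
    "redirect the river": {"p_full": 0.999999, "impact": 1000},
}

maximizer = max(actions, key=lambda a: actions[a]["p_full"])

# Satisficer: among actions clearing the 95% bar, take the least drastic one
# rather than pushing the probability higher.
ok = [a for a in actions if actions[a]["p_full"] >= 0.95]
satisficer = min(ok, key=lambda a: actions[a]["impact"])

print(maximizer)   # "redirect the river"
print(satisficer)  # "pour one bucket"
```

Note the sketch smuggles in an "impact" measure and a tie-breaking rule that the bare "95 percent sure" proposal does not itself supply; making those precise seems to be where the difficulty lives.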
Apparently Tom asked Eliezer at the time and he said there was no known problem with that solution.
Are there any posts on this? Is the reason why we don't know this won't work just because it's hard to make this precise? |
b4b3fbe1-08db-44e6-a83c-fa9fe178439d | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Safety Career Bottlenecks Survey Responses Responses
A few months ago, [AI Safety Support](https://www.aisafetysupport.org) conducted an [AI Safety Career Bottlenecks Survey](https://www.guidedtrack.com/programs/n8cydtu/run) (if you missed it you are still welcome to respond).
One of the questions asked was “*Anything you would like help with? What would make you more efficient? Give us your wish list.*”
This blogpost is a list of my responses to the survey responses to that question. This was originally an email sent to all survey responders who gave us their email address (and responded before March 4). The only difference is that I’ve corrected some typos, and updated some information.
This blogpost is *not* a complete list of all wishes. I’ve only listed things I have some sort of useful response to.
Answered Wishes
===============
General
-------
A list of resources.
* [Here you go](https://www.aisafetysupport.org/resources/lots-of-links)
Studies
-------
Opportunities to work with someone (preferably more experienced than me) doing research
* Apply to [AI Safety Camp](https://aisafety.camp/)
* Several AI Safety research groups have internships. There will probably be some opportunities like this opening up in the summer. Though they are usually very competitive.
* We’ll probably do another round of our [mentorship program](https://www.aisafetysupport.org/resources/mentorship-program), but we don’t know yet when that will be.
A suggested reading list
* There are several, [take your pick](https://www.aisafetysupport.org/resources/lots-of-links#h.px2jbc1qwvuj)
* Or read some of the [Alignment Forum Sequences](https://www.alignmentforum.org/library)
Good books on the subject would help
* [Here’s some books](https://www.aisafetysupport.org/resources/lots-of-links#h.6s2gcz1p5l6z)
Access to online courses in AI safety, ideally with tough problem sets, quizzes, and tests
* [Here you go](https://app.grasple.com/#/course/141). This project ended due to lack of funding, but the lessons that were completed are fully functional, with quizzes and everything, as far as I know.
Career and Funding
------------------
I want to know how to get into AI safety
* That depends on where you start. We are happy to give you personal advice. Just [reach out](https://www.aisafetysupport.org/home).
A place where positions in the field are advertised
* Take a look at [80k’s job board](https://80000hours.org/job-board/ai-safety-policy/)
Detailed wishlists from employers on desired qualifications for a safety researcher
* No one knows exactly what you need to know to do AI Safety research, though some people have made educated guesses (see wish regarding reading list).
* The most important factor for getting an AI Safety research job is having your own ideas and opinions about what research you should do. Even internships often require you to write your own research proposals. My suggestion is to learn what you need to start asking your own questions, and after that learn what you need to answer those questions.
Information on positions with lower education thresholds to serve as stepping off points for an aspiring researcher.
* Some industry labs (e.g. Deepmind) hire research engineers, a role with lower official requirements than other research jobs. But these jobs are very competitive.
* “positions with lower education thresholds to serve as stepping off points for an aspiring researcher.” This is exactly [what a PhD is](https://forum.effectivealtruism.org/posts/xfN3zwaxuf6uaycv7/should-you-do-a-phd).
* You can also apply for a grant from the [Long-Term Future Fund](https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund), and make your own path.
I would love to have more of an understanding of how people in degrees outside of computer science, neurology and philosophy can contribute to this field. I do believe that economics has a lot to offer, but I struggle to see any opportunities.
* I agree with you that most likely other fields have a lot to offer, and you are also correct in noticing that there are no established AI Safety research approaches building on those fields. Remember that AI Safety is at a super early stage, so you should not expect the research path to be laid out for you.
* Make your own opportunity by finding your own research questions. However this is hard, so [please reach out](https://www.aisafetysupport.org/home) if you want more help.
Graduate programs focused on this topic.
* OpenPhil provides a [PhD fellowship](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship) for AI Safety research
* If you get accepted as an ML PhD student at UC Berkeley, you can join [Center for Human Compatible AI](https://humancompatible.ai/)
* The [Safe & Trusted AI program](https://safeandtrustedai.org/) is mainly about near-term AI Safety, but allows long-term safety work, and several of the current students are interested in long-term AI Safety
* Some other places that seem like good places to do an AI Safety PhD (non-exhaustive):
+ Stanford, Oxford, Toronto, MIT, Amsterdam, Cambridge
+ *If you want to help make a better resource for this, let us know.*
* AI Safety is a growing field. There are still not many AI Safety professors, but more and more universities have a few postdoc or PhD students interested in this topic.
* For more discussion on this, join the [AI Alignment Slack](https://join.slack.com/t/ai-alignment/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw) and go to the channel [#applying-for-grad-school](https://app.slack.com/client/T01641E1H6C/C01E334L2HY).
Not needing a day job.
* Apply to [Long-Term Future Fund](https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund), [Survival and Flourishing (SAF)](http://survivalandflourishing.org) or [The Center on Long-Term Risk Fund](https://longtermrisk.org/grantmaking/) for a grant.
* Or apply to join [CEEALAR](https://ceealar.org/) (Blackpool, UK), where you can get free housing and food as long as you are doing EA work or studying to do EA work.
I just want decent pay, job security, and to see the results of my work.
* As a rule, junior researchers don’t get job security. This is not great, but that's unfortunately how things are at the moment.
+ If you have access to lots of money and want to help improve this situation, [let us know](mailto:linda.linsefors@gmail.com).
* If you want financial security, you should go for a career in some area that pays well. When you have saved up some money, you can provide a salary either for future you or for some other AI Safety researcher.
* There is unfortunately no way to ensure that you will see the results of your research efforts. There will always be the risk that you are researching a dead end, and even when things go well, the progress will be uneven and unpredictable. If this will make you unhappy, then don’t do research.
Information About Active Research Areas
---------------------------------------
I want AI Safety academic journals.
* There is no AI Safety journal, but we have something better: [The AI Alignment Forum](https://www.alignmentforum.org/). (I’m serious about this. Peer-review sucks, and journals don’t even have a comment section. However some people I respect do disagree with me.)
* Many AI conferences have an attached AI Safety workshop (usually a mix of long-term and short-term), where you can publish.
Some interesting problems that you (people working in AI Safety) are working on.
* Have a look at the [AI Alignment Forum](https://www.alignmentforum.org/) and sign up for the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) to learn what others are working on.
Better maps of the space of AI research would probably be helpful.
* [This one](https://futureoflife.org/landscape/) is good, but I don’t know if it is being kept up to date.
I wish that there were a searchable and frequently updated database of papers that explicitly focus on AI safety.
* Transformative AI Safety Bibliographic Database seems to fit the description. You can search the database [here](https://www.zotero.org/groups/2420932/tai_safety_bibliography/library). See explanation post [here](https://forum.effectivealtruism.org/posts/S7x3ztfd9h8ux68wN/tai-safety-bibliographic-database).
* There's also [Alignment Newsletter Database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0). Everything summarized in the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) goes here. So it fulfills “frequently updated”, but it is not a complete list of all AI Safety work and it is not very searchable unless you count Ctrl+F.
Community and Support
---------------------
Hanging out with AI Alignment researchers
* Check [LessWrong’s event calendar](https://www.lesswrong.com/community) for AI Safety related events.
* [AI Safety Support](https://www.aisafetysupport.org/home) has organised events in the past, and we will do so again in the future, after we sorted out funding and staffing. Sign up to our [newsletter](https://www.aisafetysupport.org/newsletter) to stay informed.
* Join the [#1-1](https://app.slack.com/client/T01641E1H6C/C01K62HB51C) channel in the [AI Alignment Slack](https://join.slack.com/t/ai-alignment/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw) to be paired up with a new AI Safety interesting person every month.
A community of people to work with.
* [AI Alignment Slack](https://join.slack.com/t/ai-alignment/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw)
* [AI Safety Reading Group](https://www.facebook.com/groups/AISafetyDanmark/)
Cure for executive function issues
* Shay helps those in the AI safety field to optimize their physical and mental health. You can [book her here](https://calendly.com/shaygestal/50min).
* [EA Coaching](https://effectivealtruismcoaching.com/) - Productivity coaching for people who want to change the world
* [CFAR Coaching (Beta)](https://www.rationality.org/resources/coaching) - “We find that two minds are often better than one, and we’d like to offer ours to help you.”
Feeling part of a community of like minded people
* This is important! Make sure to make friends along the way. We hope to be able to provide some community with our online events. But when the world opens up again, go to in person events if you can.
* See previous questions for ways to connect.
Other
-----
A bit of GPU/TPU time
* I’ve been told that [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) gives free substantial TPU access to most if not all people who ask. The main issue is that TPUs are much harder to work with than GPUs. In terms of renting GPUs, [vast.ai](https://vast.ai/) is very cheap.
Easier access to OpenAI's technologies.
* [EleutherAI](https://eleuther.ai/) is creating an open source GPT-3 replica, and some other projects.
Good resources to explain AI safety to people with less knowledge about AI
* Robert Miles’s videos are great, both the ones on [his own channel](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg) and his guest appearances on [Computerphile](https://www.youtube.com/watch?v=tlS5Y2vm02c&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps). Rob recommends starting with [Instrumental Convergence](https://www.youtube.com/watch?v=ZeecOKBus3Q) and [Orthogonality Thesis](https://www.youtube.com/watch?v=hEUO6pjwFOo).
* Also check out the [motivational section on Victoria Krakovna’s AI Safety resource list](https://vkrakovna.wordpress.com/ai-safety-resources/#popular_audience). |
6e321781-7cc5-40ba-b44e-a196e4faf78b | trentmkelly/LessWrong-43k | LessWrong | An ML interpretation of Shard Theory
Crossposted from my personal blog
Epistemic Status: I have spent a fair bit of time reading the core Shard Theory posts and trying to understand it. I also have a background in RL as well as the computational neuroscience of action and decision-making. However, I may be misunderstanding or have missed crucial points. If so, please correct me!
Shard Theory has always seemed slightly esoteric and confusing to me — what are ‘shards’, and why might we expect these to form in RL agents? When first reading the Shard Theory posts, there were two main sources of confusion for me. The first: why should an agent optimising a reward function not optimise for reward, but instead just implement behaviours that have been rewarded in the past?
This distinction is now obvious to me. The distinction between amortised vs direct inference, and shards as cached behaviours falls directly out of amortized policy gradient algorithms (which Shard Theory uses as the prototypical case of RL [1]). This idea has also been expanded in many other posts.
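A minimal sketch of the amortisation point (REINFORCE on a two-armed bandit; illustrative only, not from the Shard Theory posts): during training the weights cache which behaviour was rewarded, and at deployment the policy acts from those weights without ever consulting reward.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                 # policy parameters for a 2-armed bandit
true_reward = np.array([0.2, 0.8])   # arm 1 pays off more during training

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for _ in range(2000):                # training: trial and error, reward visible
    p = softmax(logits)
    a = rng.choice(2, p=p)
    r = float(rng.random() < true_reward[a])
    grad = -p
    grad[a] += 1.0                   # d log pi(a) / d logits
    logits += lr * r * grad          # REINFORCE update

# Deployment: behaviour comes from the cached weights alone. If the reward
# function changed now, the policy would keep picking arm 1: it executes
# behaviours that *were* rewarded; it does not optimise reward at inference.
print(softmax(logits))
```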
The second source of my confusion was the idea of shards themselves. Even given amortisation, why should behaviour splinter into specific ‘shards’? And why should the shards compete with one another? What would it even mean for ‘shards’ to compete, or for there to be ‘shard coalitions’, in a neural network?
My best guess here is that Shard Theory is making several empirical claims about the formation of representations during training for large-scale (?) RL models. Specifically, from an ML lens, we can think of shards as loosely-coupled relatively-independent subnetworks which implement specific behaviours.
A concrete instantiation of Shard Theory's claim, therefore, appears to be that during training of the network, the tendency is for the optimiser to construct multiple relatively loosely coupled circuits which each implement some specific behaviour which has been rewarded in the past. In a forward pass through the networks, these circuits th |
db0699f8-fea7-4202-b757-bedf23821177 | trentmkelly/LessWrong-43k | LessWrong | Only needs a flying saucer
This is a personal entry. I do not write it to convey much information; it's more of a point of view thing. I was asked for an inside view, but I guess that's not exactly it.
My mother, my son and I are now in the Czech Republic. This we owe to the goodwill and kindness of very many people, starting with a random guy on the way to the train station in Kyiv. To really describe it, I would need to put "incredible" before every second word, so, like. Let it just be "incredible people".
And now I am in a position to extend the invitation to another friend or family member. I have to choose between N., a mother with her two children and another N., also a mother with two children. Or between one of the mothers-and-kids sets and O., my pregnant sister. Or between O. and the bedridden son of A., the man to whom I owe not jumping out the window. (Not literally. Maybe even not nearly this. Don't know for sure. Only know for sure that A. is not going to leave Kyiv if his son is there. And there is this interesting detail that I am not in direct contact with his son's caretaker. So if anything happens to the man who talked me out of doing stupid things with my life, I will have no way of knowing if anything happens to his son. And I know that, and A. knows that. And A. asks for nothing. Well, he has asked if I have a flying saucer. I don't.)
And if anyone of them manages to come to Czech - it might take a week or more just to come here to Czech - my debt to the master of the house where we are staying grows, well, incredibly much more... My debt to the master of the house who wants himself a Ukrainian wife who knows how to cook and manage the household...
I will have to somehow find a job to help the household because the subsidy for hosting Ukrainians is capped at three Ukrainians. I don't know how I will do it. There is enough work about the house that some nights I don't read any news after dinner; I go to sleep.
My mother is unwell. She doesn't speak Czech. My ten-year-old…
Gordon Irlam on the BEGuide
This post summarizes a conversation which was part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt, Paul Christiano, and Paul Penley.
Participants
Gordon Irlam: Philanthropist and Founder of Back of the Envelope Guide to Philanthropy
Katja Grace: Research Assistant, Machine Intelligence Research Institute
Notes
This is a summary of points made by Gordon Irlam during an interview with Katja on March 4th 2014. The summary was made by an anonymous author.
The Basics
The Back of the Envelope Guide to Philanthropy (BEGuide) is a one-man project that aims to quantify the impact of philanthropic causes and share information, helping philanthropists to prioritise spending in a way that delivers the highest value for money.
Gordon Irlam has been running the BEGuide for about 10 years and estimates that he’s spent between 200 and 600 hours on it in total, working on it in short sprees whenever promising new information catches his attention.
Gordon chooses which causes to investigate by gut instinct, looking at anything that he feels might turn out to be worthwhile. Almost every cause that he’s evaluated has turned out to promise a positive net impact.
Work normally begins with a seed in a newspaper article or radio program and grows within a few hours to an evaluation that can be published on the BEGuide website.
Gordon also runs the Gordon R Irlam Charitable Foundation and his work on the BEGuide is now the basis for around 99% of that foundation’s spending, around $200,000 annually in coming years.
What Gets Done
Gordon does no original research, but spends his time compiling available information and processing it to get results for the BEGuide.
Gordon considers this kind of work important, and laments that academic researchers get funded to go out and get information, but once they write a paper on it, seldom manage to direct it anywhere that it can be compared to…
Bounded AI might be viable
(Work done at Convergence Analysis. Mateusz wrote the post and is responsible for most of the ideas with Justin helping to think it through. Thanks to Olga Babeeva for the feedback on this post.)
1. Motivation
Suppose that the prospects of pausing or significantly slowing down AI progress, and of solving the technical problems necessary to ensure that arbitrarily strong AI has good effects on humanity (in time, before we get such systems), both look gloomy.[1] What options do we have left?
Adam Shimi presents a useful frame on the alignment problem in Abstracting The Hardness of Alignment: Unbounded Atomic Optimization:
> alignment [is] the problem of dealing with impact on the world (optimization) that is both of unknown magnitude (unbounded) and non-interruptible (atomic).
If the problem is about some system (or a collection of systems) having an unbounded, non-interruptible impact,[2] can we handle it by ensuring that the impact of the systems in question is reliably bounded and interruptible (even if the systems are scheming against us, or trying to "liberate" themselves from the properties of boundedness and interruptibility)?
The core idea is to impose a multidimensional bound on the system's capabilities such that:
* it is sufficiently capable along the dimensions corresponding to the capabilities we want to extract from it
but
* sufficiently incapable along the dimensions corresponding to capabilities that would (in combination with the capabilities we want it to have) grant it the dangerous power of unbounded, non-interruptible optimization.[3]
In other words, if we can't solve the problem of aligning unbounded, non-interruptible optimizers, we may be able to avoid this problem by:
* not building unbounded, non-interruptible optimizers
and, instead,
* building some other, safer, kind of AI that can be demonstrated to deliver enough value to make up for giving up on the business-as-usual kind of AI, along with the benefits it was expected to…
A simple example of conditional orthogonality in finite factored sets
Recently, MIRI researcher Scott Garrabrant has [publicized his work on finite factored sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/N5Jm6Nj4HkNKySA5Z). It allegedly offers a way to understand agency and causality in a set-up like the [causal graphs championed by Judea Pearl](https://en.wikipedia.org/wiki/Causal_graph). Unfortunately, the [definition of conditional orthogonality](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/N5Jm6Nj4HkNKySA5Z#2b__Conditional_Orthogonality) is very confusing. I'm not aware of any public examples of people demonstrating that they understand it, but I didn't really understand it until an hour ago, and I've heard others say that it went above their heads. So, I'd like to give an example of it here.
In a finite factored set, you have your base set S, and a set B of 'factors' of your set. In my case, the base set S will be four-dimensional space - I'm sorry, I know that's one more dimension than the number that well-adjusted people can visualize, but it really would be a much worse example if I were restricted to three dimensions. We'll think of the points in this space as tuples (x1,x2,x3,x4) where each xi is a real number between, say, -2 and 2 [footnote 1]. We'll say that X1 is the 'factor', aka partition, that groups points together based on what their value of x1 is, and similarly for X2, X3, and X4, and set B={X1,X2,X3,X4}. I leave it as an exercise for the reader to check whether this is in fact a finite factored set. Also, I'll talk about the 'value' of partitions and factors - technically, I suppose you could say that the 'value' of some partition at a point is the set in the partition that contains the point, but I'll use it to mean that, for example, the 'value' of X1 at point (x1,x2,x3,x4) is x1. If you think of partitions as questions where different points in S give different answers, the 'value' of a partition at a point is the answer to the question.
[EDIT: for the rest of the post, you might want to imagine S as points in space-time, where x4 represents the time, and (x1,x2,x3) represent spatial coordinates - for example, inside a room, where you're measuring from the north-east corner of the floor. In this analogy, we'll imagine that there's a flat piece of sheet metal leaning on the floor against two walls, over that corner. We'll try conditioning on that - so, looking only at points in space-time that are spatially located on that sheet - and see that distance left is no longer orthogonal to distance up, but that both are still orthogonal to time.]
Now, we'll want to condition on the set E={(x1,x2,x3,x4)|x1+x2+x3=1}. The thing with E is that once you know you're in E, x1 is no longer independent of x2, like it was before, since they're linked together by the condition that x1+x2+x3=1. However, x4 has nothing to do with that condition. So, what's going to happen is that conditioned on being in E, X1 is orthogonal to X4 but not to X2.
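Before working through the formal definition, here is a quick empirical sanity check of that intuition (a sketch of my own, not from the original post): sampling points close to E, the coordinates x1 and x2 become correlated, while x1 and x4 stay uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(2_000_000, 4))
# Approximate membership in E = {x1 + x2 + x3 = 1} by a thin slab.
on_E = np.abs(x[:, 0] + x[:, 1] + x[:, 2] - 1) < 0.01
xE = x[on_E]
print(np.corrcoef(xE[:, 0], xE[:, 1])[0, 1])  # clearly negative: coupled by E
print(np.corrcoef(xE[:, 0], xE[:, 3])[0, 1])  # close to 0: x4 is untouched by E
```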
In order to show this, we'll check the definition of conditional orthogonality, which actually refers to this thing called conditional history. I'll write out the definition of conditional history formally, and then try to explain it informally: the conditional history of X given E, which we'll write as h(X|E), is the smallest set of factors H⊆B satisfying the following two conditions:
1. For all s,t∈E, if s∼bt for all b∈H, then s∼Xt.
2. For all s,t∈E and r∈S, if r∼bs for all b∈H and r∼b′t for all b′∈B∖H, then r∈E.
Condition 1 means that, if you think of the partitions as carving up the set S, then the partition X doesn't carve E up more finely than if you carved according to everything in h(X|E). Another way to say that is that if you know you're in E, knowing everything in the conditional history of X in E tells you what the 'value' of X is, which hopefully makes sense.
Condition 2 says that if you want to know if a point is in E, you can separately consider the 'values' of the partitions in the conditional history, as well as the other partitions that are in B but not in the conditional history. So it's saying that there's no 'entanglement' between the partitions in and out of the conditional history regarding E. This is still probably confusing, but it will make more sense with examples.
Now, what's conditional orthogonality? That's pretty simple once you get conditional histories: X and Y are conditionally orthogonal given E if the conditional history of X given E doesn't intersect the conditional history of Y given E. So it's saying that once you're in E, the things determining X are different to the things determining Y, in the finite factored sets way of looking at things.
Let's look at some conditional histories in our concrete example: what's the history of X1 given E? Well, it's got to contain X1, because otherwise that would violate condition 1: you can't know the value of X1 without being told the value of X1, even once you know you're in E. But that can't be the whole thing. Consider the point s=(0.5,0.4,0.4,0.7). If you just knew the value of X1 at s, that would be compatible with s actually being (0.5,0.25,0.25,1), which is in E. And if you just knew the values of X2, X3, and X4, you could imagine that s was actually equal to (0.2,0.4,0.4,0.7), which is also in E. So, if you considered the factors in {X1} separately to the other factors, you'd conclude that s could be in E - but it's actually not! This is exactly the thing that condition 2 is telling us can't happen. In fact, the conditional history of X1 given E is {X1,X2,X3}, which I'll leave for you to check. I'll also let you check that the conditional history of X2 given E is {X1,X2,X3}.
Now, what's the conditional history of X4 given E? It has to include X4, because if someone doesn't tell you X4 you can't figure it out. In fact, it's exactly {X4}. Let's check condition 2: it says that if all the factors outside the conditional history are compatible with some point being in E, and all the factors inside the conditional history are compatible with some point being in E, then it must be in E. That checks out here: you need to know the values of all three of X1, X2, and X3 at once to know if something's in E, but you get those together if you jointly consider those factors outside your conditional history, which is {X1,X2,X3}. So looking at (0.5,0.4,0.4,0.7), if you only look at the values that aren't told to you by the conditional history, which is to say the first three numbers, you can tell it's not in E and aren't tricked. And if you look at (0.5,0.25,0.25,0.7), you look at the factors in {X4} (namely X4), and it checks out, you look at the factors outside {X4} and that also checks out, and the point is really in E.
Hopefully this gives you some insight into condition 2 of the definition of conditional history. It's saying that when we divide factors up to get a history, we can't put factors that are entangled by the set we're conditioning on onto 'different sides' - all the entangled factors have to be in the history, or they all have to be out of the history.
In summary: h(X1|E)=h(X2|E)={X1,X2,X3}, and h(X4|E)={X4}. So, is X1 orthogonal to X2 given E? No, their conditional histories overlap - in fact, they're identical! Is X1 orthogonal to X4 given E? Yes, they have disjoint conditional histories.
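These two conditional histories can also be checked by brute force on a discretised version of the example (my own sketch; following footnote 1, I shrink the coordinates to integers in {-2,...,2} so the set is finite and the search is fast):

```python
from itertools import combinations, product

VALS = range(-2, 3)                    # coarse, finite version of [-2, 2]
S = list(product(VALS, repeat=4))      # base set
B = [0, 1, 2, 3]                       # factors X1..X4 as coordinate indices
E = [s for s in S if s[0] + s[1] + s[2] == 1]
E_set = set(E)

def condition1(H, target):
    # Points of E that agree on every factor in H must agree on the target factor.
    return all(s[target] == t[target]
               for s in E for t in E
               if all(s[i] == t[i] for i in H))

def condition2(H):
    # Combining the H-coordinates of one point of E with the non-H
    # coordinates of another point of E must land back in E.
    return all(tuple(s[i] if i in H else t[i] for i in B) in E_set
               for s in E for t in E)

def conditional_history(target):
    # Smallest set of factors satisfying both conditions, searched by size.
    for size in range(len(B) + 1):
        for H in combinations(B, size):
            if condition1(set(H), target) and condition2(set(H)):
                return set(H)

print(conditional_history(0))  # {0, 1, 2}: h(X1|E) = {X1, X2, X3}
print(conditional_history(3))  # {3}:       h(X4|E) = {X4}
```

Interestingly, the subset {X2, X3} passes condition 1 for X1 (on E, x1 is determined by x2 and x3) but fails condition 2, which is why the history must grow to include all three entangled factors.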
Some notes:
* In this case, X1 was already orthogonal to X4 before conditioning. It would be nice to come up with an example where two things that weren't already orthogonal become so after conditioning. [EDIT: see [my next post](https://www.alignmentforum.org/posts/GFGNwCwkffBevyXR2/a-second-example-of-conditional-orthogonality-in-finite)]
* We didn't really need the underlying set to be finite for this example to work, suggesting that factored sets don't really need to be finite for all the machinery Scott discusses.
* We did need the range of each variable to be bounded for this to work nicely. Because all the numbers need to be between -2 and 2, once you're in E, if x1=2 then x2 can't be bigger than 1, otherwise x3 can't go negative enough to get the numbers to add up to 1. But if they could all be arbitrary real numbers, then even once you were in E, knowing x1 wouldn't tell you anything about x2, but we'd still have that X1 wasn't orthogonal to X2 given E, which would be weird.
[^1] I know what you're saying - "That's not a finite set! Finite factored sets have to be finite!" Well, if you insist, you can think of them as only the numbers between -2 and 2 with two decimal places. That makes the set finite and doesn't really change anything. (Which suggests that a more expansive concept could be used instead of finite factored sets.) |
Biextensional Equivalence
This is the third post in the Cartesian frames sequence. Read the first post [here](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames).
This post will introduce the standard equivalence relations we'll be using for Cartesian frames. Our primary interest will be in homotopy equivalence, which will allow us to classify frames according to their agents' and environments' effect on possible worlds.
1. Isomorphism
--------------
Before defining homotopy equivalence, I want to define isomorphism between Cartesian frames.
**Definition:** A morphism (g,h):C→D is an *isomorphism* if both g and h are bijective. If there is an isomorphism between C and D, we say C≅D.
**Claim:** ≅ is an equivalence relation.
**Proof:** Reflexivity is trivial because the identity is an isomorphism. For symmetry, we have that if (g,h) is an isomorphism from C to D, then (g−1,h−1) is an isomorphism from D to C. Transitivity follows from the fact that the composition of bijections is a bijection. □
**Claim:** C≅D if and only if there is a pair of morphisms ϕ:C→D and ψ:D→C that compose to the identity morphism in both orders.
**Proof:** If C≅D, we have (g,h):C→D with both g and h bijective, and we can take ϕ=(g,h) and ψ=(g−1,h−1).
Conversely, if (g1,h1)∘(g0,h0) is the identity morphism on C=(A,E,⋅), then g1∘g0 is the identity on A, so g0 must be injective. Similarly, h0∘h1 is the identity on E, so h0 must be surjective. Surjectivity of g0 and injectivity of h0 follow similarly from the fact that (g0,h0)∘(g1,h1) is the identity on D. □
Isomorphism is pretty intuitive. It is basically saying that it doesn't matter what the possible agents and possible environments are, other than how they interact with the evaluation function.
We will basically always be working up to at least isomorphism. For example, in the last post ("[Additive Operations on Cartesian Frames](https://www.lesswrong.com/posts/ewkYgtZapQRtDPT2F/additive-operations-on-cartesian-frames)"), we noted that ⊕ and & are commutative and associative up to isomorphism.
2. Homotopy Equivalence
-----------------------
**2.1. Homotopic Morphisms**
Our initial definition of homotopy equivalence will be devoid of interpretation, but the meaning will become clear later.
We say that two morphisms from C to D are homotopic if you can take the first function from the first morphism and the second function from the second morphism, and the resulting object is still a morphism.
**Definition:** Two morphisms (g0,h0),(g1,h1):C→D with the same source and target are called *homotopic* if (g0,h1) is also a morphism.
Note that the mere existence of two morphisms from C to D doesn't entail that those morphisms are homotopic. Consider the frame C0=(A,E,⋅) given by
    C0 =
             e0   e1
        a0 ( w1   w0 )
        a1 ( w0   w1 )
There are two morphisms from C0 to itself: the identity morphism (idA,idE):C0→C0, and a morphism (g,h):C0→C0 that flips C0's rows and columns, sending a0 to a1, a1 to a0, e0 to e1, and e1 to e0. These two morphisms are not homotopic, because neither (idA,h) nor (g,idE) is a morphism.
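These claims are easy to verify mechanically. Here is a small sketch (mine, not from the sequence) that represents a frame as its matrix and a morphism as a pair of index maps:

```python
import numpy as np

# The frame C0 from the text: rows a0, a1; columns e0, e1.
C0 = np.array([["w1", "w0"],
               ["w0", "w1"]])

def is_morphism(M, N, g, h):
    # (g, h): M -> N is a morphism iff N[g(a), f] == M[a, h(f)] for all a, f.
    return all(N[g[a], f] == M[a, h[f]]
               for a in range(M.shape[0])
               for f in range(N.shape[1]))

ident, swap = [0, 1], [1, 0]
print(is_morphism(C0, C0, ident, ident))  # True: the identity morphism
print(is_morphism(C0, C0, swap, swap))    # True: the row/column flip
print(is_morphism(C0, C0, ident, swap))   # False: so the two aren't homotopic
print(is_morphism(C0, C0, swap, ident))   # False
```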
Being homotopic is an equivalence relation on morphisms. (As such, in the above example it would have been superfluous to demonstrate that both (idA,h) and (g,idE) aren't morphisms.)
**Claim:** Homotopic is an equivalence relation.
**Proof:** Let (gi,hi):(A,E,⋅)→(B,F,⋆).
Reflexivity is trivial. For symmetry, we want to show that if (g0,h0), (g1,h1), and (g0,h1) are all morphisms, then so is (g1,h0). Indeed, for all a∈A and f∈F,
g1(a)⋆f = a⋅h1(f) = g0(a)⋆f = a⋅h0(f).

For transitivity, we want to show that if (g0,h0), (g0,h1), (g1,h1), (g1,h2), and (g2,h2) are all morphisms, then so is (g0,h2). Indeed, for all a∈A and f∈F,

g0(a)⋆f = a⋅h1(f) = g1(a)⋆f = a⋅h2(f). □
Being homotopic is also respected by composition.
**Claim:** If ϕ0:C0→C1 is homotopic to ϕ1:C0→C1, and ψ0:C1→C2 is homotopic to ψ1:C1→C2, then ψ0∘ϕ0 is homotopic to ψ1∘ϕ1.
**Proof:** Let Ci=(Ai,Ei,⋅i), let ϕi=(gi,hi), and let ψi=(ji,ki). We want to show that (j0∘g0,h1∘k1) is a morphism. Indeed, since (g0,h1) and (j0,k1) are morphisms,
j0(g0(a)) ⋅2 f = g0(a) ⋅1 k1(f) = a ⋅0 h1(k1(f)). □
Next, we define when two Cartesian frames are homotopy equivalent in the standard way.
**2.2. Homotopy Equivalence**
Homotopy equivalence relies on the existence of morphisms between C and D that we can compose in either order and end up with something that is homotopic to the identity morphism.
**Definition:** C is *homotopy equivalent* to D, written C≃D, if there exists a pair of morphisms ϕ:C→D and ψ:D→C such that ψ∘ϕ is homotopic to the identity on C and ϕ∘ψ is homotopic to the identity on D.
**Claim:** ≃ is an equivalence relation.
**Proof:** Reflexivity is trivial, by taking ψ=ϕ to be the identity. Symmetry is also trivial by swapping ϕ and ψ.
For transitivity, assume that for i=0,1, we have ϕi:Ci→Ci+1 and ψi:Ci+1→Ci such that ϕi∘ψi and ψi∘ϕi are homotopic to the identity. It suffices to show that ϕ1∘ϕ0∘ψ0∘ψ1 and ψ0∘ψ1∘ϕ1∘ϕ0 are both homotopic to the identity. In both cases, since composition respects what is homotopic, we have that the inner pair of morphisms cancels, and then the outer pair of morphisms cancels. □
Note that homotopy equivalence is weaker than isomorphism.
**Claim:** If C≅D, then C≃D.
**Proof:** Trivial. □
We now have the homotopy equivalence relation on Cartesian frames, but no real philosophical interpretation. To understand the meaning of ≃, we will first need to define biextensional collapse.
3. Biextensional Equivalence
----------------------------
**3.1. Biextensionality**
**Definition:** A Cartesian frame C=(A,E,⋅) is called *biextensional* if whenever a0,a1∈A are such that a0⋅e=a1⋅e, for all e∈E, we have a0=a1, and whenever e0,e1∈E are such that a⋅e0=a⋅e1, for all a∈A, we have e0=e1.
This is basically saying that a Cartesian frame is [biextensional](http://chu.stanford.edu/coimbra.pdf) if all of its possible agents and possible environments are distinct when viewed as functions from environment to world and functions from agent to world respectively. The agent doesn't have two options that invariably produce the same outcomes as each other, nor does the environment.
Viewed as a matrix, C is biextensional if all of its rows and columns are distinct.
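In that matrix picture, the check is a few lines of code (a hypothetical helper of my own):

```python
import numpy as np

def is_biextensional(M):
    # Biextensional iff the matrix has no duplicate rows and no duplicate columns.
    rows = {tuple(r) for r in M}
    cols = {tuple(c) for c in M.T}
    return len(rows) == M.shape[0] and len(cols) == M.shape[1]

M = np.array([["w1", "w0", "w0"],
              ["w0", "w1", "w1"],
              ["w0", "w1", "w1"]])
print(is_biextensional(M))  # False: two rows coincide (and two columns do too)
```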
We have the following lemma that hints at the relationship between biextensionality and homotopy equivalence.
**Lemma:** Let C and D be biextensional Cartesian frames. Then, C≃D if and only if C≅D.
**Proof:** The "if" direction is trivial. For the "only if" direction, let C=(A,E,⋅) and D=(B,F,⋆) be two biextensional Cartesian frames, and let C≃D. Thus, there is a pair of morphisms (g,h):C→D and (j,k):D→C such that (j∘g,h∘k) is homotopic to the identity on C, and (g∘j,k∘h) is homotopic to the identity on D. Thus (j∘g,idE) and (g∘j,idF) are both morphisms, where idS is the identity on the set S. This means that j(g(a))⋅e=a⋅e for all a and e, which since C is biextensional implies that j∘g is the identity on A. Similarly, since (g∘j,idF) is a morphism, we have that g∘j is the identity on B. Thus g:A→B is a bijection.
By the symmetry of homotopic, we also have that (idA,k∘h) and (idB,h∘k) are morphisms, which similarly gives us that k∘h is the identity on E and h∘k is the identity on F, so h:F→E is a bijection. Thus, C≅D. □
Thus, we now understand how to interpret homotopy equivalence for biextensional Cartesian frames: it is equivalent to isomorphism.
To understand homotopy equivalence in general, we will first show how to [collapse](https://planetmath.org/biextensionalcollapse) any Cartesian frame into a biextensional one.
**3.2. Biextensional Collapse**
Given a Cartesian frame C=(A,E,⋅), we can define an equivalence relation on A that says two possible agents are equivalent if they implement the same function from E to W; and we can similarly say that two elements of E are equivalent if they implement the same function from A to W.
**Definition:** Given a Cartesian frame C=(A,E,⋅), for a0,a1∈A, we say a0∼a1 if a0⋅e=a1⋅e for all e∈E. For e0,e1∈E, we say that e0∼e1 if a⋅e0=a⋅e1 for all a∈A.
**Claim:** ∼ is an equivalence relation on A and on E.
**Proof:** Trivial. □
**Definition:** Given a Cartesian frame C=(A,E,⋅), for a∈A, let ^a denote the equivalence class of a up to ∼. Let ^A denote the set of equivalence classes of ∼ in A. Similarly, for e∈E, let ^e denote the equivalence class of e up to ∼, and let ^E denote the set of equivalence classes of ∼ in E.
**Definition:** Given a Cartesian frame C=(A,E,⋅), the *biextensional collapse* of C, denoted ^C, is the Cartesian frame (^A,^E,^⋅), where ^a^⋅^e=a⋅e.
**Claim:** ^C is well-defined.
**Proof:** We need to show that ^⋅ is well defined, meaning we need to show that for all a0∼a1 and e0∼e1, we have that a0⋅e0=a1⋅e1. This is immediate from the definition of ∼. □
Viewed as a matrix, ^C is basically formed from C by deleting any duplicate rows and any duplicate columns. It doesn't matter whether you delete duplicate rows or duplicate columns first. After doing both, you will end up with a matrix with no duplicates.
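Continuing the sketch above, one possible implementation of the collapse as row/column deduplication (again purely illustrative):

```python
def dedupe_rows(matrix):
    """Drop duplicate rows, keeping the first occurrence."""
    seen, out = set(), []
    for row in matrix:
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            out.append(list(row))
    return out

def biextensional_collapse(matrix):
    """Delete duplicate rows, then duplicate columns (via a
    transpose); the order of the two steps does not matter."""
    no_dup_rows = dedupe_rows(matrix)
    transposed = [list(col) for col in zip(*no_dup_rows)]
    no_dup_cols = dedupe_rows(transposed)
    return [list(row) for row in zip(*no_dup_cols)]
```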
**Claim:** ^C is biextensional for all Cartesian frames C.
**Proof:** Let C=(A,E,⋅). We want to show that for all ^a0≠^a1∈^A, there exists an ^e∈^E such that ^a0^⋅^e≠^a1^⋅^e. Indeed, since ^a0≠^a1, we have that a0≁a1, so there exists an e∈E such that a0⋅e≠a1⋅e, which gives us that ^a0^⋅^e≠^a1^⋅^e. Similarly, for all ^e0≠^e1∈^E, there exists an ^a∈^A such that ^a^⋅^e0≠^a^⋅^e1. □
**Claim:** C is biextensional if and only if C≅^C.
**Proof:** If C is biextensional, then all equivalence classes up to ∼ on both A and E are singletons. Thus, the morphism (g,h):C→^C given by g(a)=^a and h(^e)=e is well-defined, and both g and h are bijective, so C≅^C.
Conversely, if C≅^C, then C is isomorphic to a biextensional Cartesian frame, and since biextensionality is clearly preserved by isomorphism, C is also biextensional. □
**3.3. Biextensional Equivalence**
We can now (finally) use biextensional collapse to give an intuitive meaning to homotopy equivalence.
**Claim:** C≃D if and only if ^C≅^D.
**Proof:** It suffices to show that C≃^C for all Cartesian frames C. Then, we will have that if C≃D, then ^C≃C≃D≃^D. Since homotopy equivalence is the same as isomorphism on biextensional Cartesian frames, this gives ^C≅^D. And conversely, if ^C≅^D then C≃^C≅^D≃D, so C≃D.
Let C=(A,E,⋅). We want to show that C≃^C. We do this by constructing a pair of morphisms (g,h):C→^C, and (j,k):^C→C. We will define g:A→^A by a↦^a, and k:E→^E by e↦^e. For h:^E→E, and j:^A→A, we can send each equivalence class to any one member of that class. The choice does not matter.
Now, we want to show that (g∘j,k∘h) is homotopic to the identity on ^C, and that (j∘g,h∘k) is homotopic to the identity on C. The first case is trivial, since g∘j and k∘h are the identity on ^A and ^E respectively. j∘g and h∘k need not be the identity on A and E, but j(g(a))∼a and h(k(e))∼e for all a∈A and e∈E. To show that (j∘g,h∘k) is homotopic to the identity on C, we just need to show that (j∘g,idE) is a morphism, where idE is the identity on E. However, this just means that j(g(a))⋅e=a⋅e for all a∈A and e∈E, which follows from the fact that j(g(a))∼a. □
We now have that two Cartesian frames are homotopy equivalent if and only if their biextensional collapses are isomorphic. Thus, when C and D are homotopy equivalent, we will also call them biextensionally equivalent.
**Definition:** We say C and D are *biextensionally equivalent* if C≃D.
When working up to biextensional equivalence, we are basically saying that we are ignoring any multiplicity in the space of possible agents and possible environments.
**Claim:** Each biextensional equivalence class contains exactly one biextensional Cartesian frame, up to isomorphism.
**Proof:** Each biextensional equivalence class has at least one element, C, and ^C is in the same equivalence class as C and is biextensional, so there must be at least one biextensional Cartesian frame in the class. If there were two biextensional Cartesian frames in the class, they would have to be isomorphic, because isomorphism is equivalent to biextensional equivalence on biextensional Cartesian frames. □
From my perspective, the value of this equivalence relation is that it lets us be less realist about possible agents and possible environments, and instead just care about differences between possible worlds.
This fits well with our general approach in this sequence. Cartesian frames are particular ways of looking at the world and mentally carving it up into an agent component and an environment component, but we allow many different carvings, and we do not give any one carving privileged status as the "true" carving. Thus, we put less weight on our conception of the agent and environment, and more weight on the worlds themselves.
Giving less realism to possible agents/environments also fits with the fact that "worlds" may include details about the agent and environment, "possible agents" may specify features of the agent beyond its "actions," and so on.
Imagine an agent with two unrelated choices: which color to think about (green G, or red R) and whether to go for a walk or stay home (W or H). This yields the possible agents A={GH,GW,RH,RW}. The environment either is safe or has bears: E={S,B}. If we represent this scenario with the Cartesian frame
C0 =

         S    B
    GH ( w0   w1 )
    GW ( w2   w3 )
    RH ( w4   w5 )
    RW ( w6   w7 ),
then the possible worlds w0 and w4 differ only in which *thought* the agent is thinking; likewise w1 and w5, etc.
We could have instead described a frame
C1 =

         S    B
    GH ( w8   w9  )
    GW ( w10  w11 )
    RH ( w8   w9  )
    RW ( w10  w11 ),
in which case we would not be treating the agent's thoughts as a relevant difference between possible worlds.[1](https://www.lesswrong.com/posts/pWruFSY7494vnucCE/biextensional-equivalence#Footnotes) But we have the *option* of fully representing "agent-internal" properties using possible worlds, just the same as "environment-internal" properties. As such, we don't need to separately reify possible agents or possible environments.
**3.4. Example**
One reason there are two definitions here is because the homotopy definition is easier to work with categorically, while the biextensionality definition is easier to work with directly with matrices.
Let C0 and C1 be the Cartesian frames given by:
C0 ≅ ( w0 w1 w1 ; w2 w3 w3 ; w0 w1 w1 ) and C1 ≅ ( w2 w3 w2 ; w0 w1 w0 ; w2 w3 w2 ),

where each 3×3 matrix is written row by row, separated by semicolons.
Note that when working up to isomorphism, there is no need to label the rows or columns.
We can then see that C0≃C1 because
^C0 ≅ ^C1 ≅ ( w0 w1 ; w2 w3 ).
To verify the equivalence using the homotopy definition would be far more tedious.
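Using the collapse sketch from section 3.2, the matrix computation above takes a few lines to check (illustrative only):

```python
C0 = [["w0", "w1", "w1"],
      ["w2", "w3", "w3"],
      ["w0", "w1", "w1"]]
C1 = [["w2", "w3", "w2"],
      ["w0", "w1", "w0"],
      ["w2", "w3", "w2"]]

# Both collapses are 2x2 matrices containing the same rows,
# just in a different order, i.e., isomorphic frames.
print(biextensional_collapse(C0))  # [['w0', 'w1'], ['w2', 'w3']]
print(biextensional_collapse(C1))  # [['w2', 'w3'], ['w0', 'w1']]
```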
**3.5. Relationship to Additive Operations**
Since we will often want to work with Cartesian frames up to biextensional equivalence, it will be helpful to know that all of our additive operations respect biextensional equivalence.
**Claim:** If C0≃C1 and D0≃D1, then C∗0≃C∗1, C0⊕D0≃C1⊕D1, and C0&D0≃C1&D1.
**Proof:** It is clear from the definition of biextensional collapse that ^− commutes with −∗. Thus since ^C0≅^C1, we have ^C∗0≅^C∗1, so C∗0≃C∗1.
For the rest, it suffices to show that if C0≃C1, then C0⊕D≃C1⊕D. Then, since ⊕ is symmetric up to isomorphism, we have
C0⊕D0 ≃ C1⊕D0 ≅ D0⊕C1 ≃ D1⊕C1 ≅ C1⊕D1,

and using the fact that ⊕ and & are De Morgan dual, we have

C0&D0 ≅ (C∗0⊕D∗0)∗ ≃ (C∗1⊕D∗1)∗ ≅ C1&D1.

We will use the homotopy equivalence definition. Let Ci=(Ai,Ei,⋅i) and let D=(B,F,⋆). Let (g0,h0):C0→C1 and (g1,h1):C1→C0 compose to something homotopic to the identity in both orders. We want to construct a (g′0,h′0):C0⊕D→C1⊕D and (g′1,h′1):C1⊕D→C0⊕D, that similarly compose to something homotopic to the identity in both orders. We will take g′i:Ai⊔B→A1−i⊔B to be given by g′i(a)=gi(a) if a∈Ai, and g′i(a)=a if a∈B. Similarly, we will take h′i:E1−i×F→Ei×F to be given by h′i(e,f)=(hi(e),f).
Without loss of generality, it suffices to show that (g′0,h′0) is a morphism and that (g′1,h′1)∘(g′0,h′0) is homotopic to the identity on C0⊕D. The fact that (g′1,h′1) is a morphism and (g′0,h′0)∘(g′1,h′1) is homotopic to the identity will follow symmetrically. Let ⋄i=Eval(Ci⊕D).
To show that (g′0,h′0) is a morphism, observe that for all a∈A0 and (e,f)∈E1×F, we have
g′0(a) ⋄1 (e,f) = g0(a)⋅1e = a⋅0h0(e) = a⋄0(h0(e),f) = a⋄0h′0(e,f).

Similarly, for all a∈B and (e,f)∈E1×F, we have

g′0(a) ⋄1 (e,f) = a⋆f = a⋄0(h0(e),f) = a⋄0h′0(e,f).

To show that (g′1,h′1)∘(g′0,h′0) is homotopic to the identity on C0⊕D, we just need that for all a∈A0⊔B and all (e,f)∈E0×F, we have a⋄0(e,f)=g′1(g′0(a))⋄0(e,f). Indeed, if a∈B, then a=g′1(g′0(a)), and if a∈A0, then

a⋄0(e,f) = a⋅0e = g1(g0(a))⋅0e = g′1(g′0(a))⋄0(e,f). □
Image is also clearly preserved by biextensional equivalence.
**Claim:** If C≃D, then Image(C)=Image(D).
**Proof:** Trivial from the biextensional collapse definition. □
4. Some Small Cartesian Frames
------------------------------
We will now classify all biextensional Cartesian frames (and thus biextensional equivalence classes of Cartesian frames) in which the agent's size is at most one and/or the environment's size is at most one.
**Definition:** null is the Cartesian frame ({},{},⋅) with empty agent, empty environment, and empty evaluation function.
If you have an empty Cartesian frame—one with no image, no elements of W—then it must be biextensionally equivalent to either null, 0, or ⊤.
**Claim:** If |Agent(C)|=0 and |Env(C)|≠0, then C≃0. If |Env(C)|=0 and |Agent(C)|≠0, then C≃⊤. If |Agent(C)|=|Env(C)|=0, then C≃null.
**Proof:** If |Agent(C)|=0 and |Env(C)|≠0, then all environments are equivalent up to ∼, so ^C has one possible environment and no possible agents, so ^C≅0, so C≃0. Similarly, if |Env(C)|=0 and |Agent(C)|≠0, all agents are equivalent up to ∼, so ^C≅⊤ and C≃⊤. If |Agent(C)|=|Env(C)|=0, then C is already equal to null. □
**Claim:** The only three biextensional Cartesian frames C with Image(C)={} are 0, ⊤, and null.
**Proof:** A Cartesian frame has empty image if and only if it has empty agent or empty environment. All three of 0, ⊤, and null are clearly biextensional, and any other Cartesian frame with empty image is biextensionally equivalent to one of them, and so cannot be biextensional. □
We now understand all biextensional Cartesian frames with empty agent or empty environment. Let's look at the case where either the agent or environment is a singleton.
1S is the biextensional Cartesian frame you get when the agent has only one option, and the frame's image is some set of possible worlds S. Since Env(1S) will be in bijective correspondence with S=Image(1S) and the labels on Env(1S) don't matter, we will identify Env(1S) with S.
**Definition:** Given S⊆W, 1S is the Cartesian frame 1S=({a},S,⋆), where a⋆s=s for all s∈S. 1 is the Cartesian frame 1W.
We can think of 1S as the perspective of a bystander who has no control, and is just observing which world the environment brings about.
⊥S is the transpose of 1S, where the environment has only one option and the agent's options are S. You can think of ⊥S as a powerful agent facing no obstacles, beyond being constrained to S: it gets to choose exactly what world we're in.
**Definition:** Given S⊆W, ⊥S is the Cartesian frame ⊥S=(S,{e},⋆), where s⋆e=s for all s∈S. ⊥ is the Cartesian frame ⊥W.
The names 1 and ⊥ will make more sense later, when we define multiplicative operations on Cartesian frames.[2](https://www.lesswrong.com/posts/pWruFSY7494vnucCE/biextensional-equivalence#Footnotes)
We can think of 1 as a powerless, all-knowing agent, and 1S as 1 with a promise from the environment that the world will be in S. Similarly, we can think of ⊥ as an all-powerful agent, and ⊥S as ⊥ with a commitment to do S.
The class of frames where the agent has only one option, 1S, contains 1 at one extreme (where S=W) and ⊤ at the other extreme (where S={}). Meanwhile, the class of frames where the environment has only one option, ⊥S, contains ⊥ at one extreme (where S=W) and 0 at the other (where S={}).
**Claim:** 1∗=⊥, ⊥∗=1, 1∗S=⊥S, ⊥∗S=1S, 1{}=⊤, ⊥{}=0.
**Proof:** Trivial. □
**Claim:** If |Agent(C)|=1, then C≃1S, where S=Image(C). If |Env(C)|=1, then C≃⊥S, where S=Image(C).
**Proof:** If Agent(C)={a}, then equivalence classes of environments are given by where they send a. There will be one such equivalence class for each s∈Image(C), and it will send a to s. Thus ^C=1S, so C≃1S. The |Env(C)|=1 case is the same with agent and environment swapped. □
Now that we have built up language for talking about Cartesian frames categorically, we are ready to revisit controllables and observables and interpret them through the lens of category theory. This will be the focus of our next post.
Footnotes
---------
1. Similarly, we could have decided that we don't care about certain things about the environment. For example, if we only care whether there are bears in possible worlds where the agent went for a walk and might therefore encounter them, then we could construct a frame
C2 =

         S    B
    GH ( w12  w12 )
    GW ( w13  w14 )
    RH ( w15  w15 )
    RW ( w16  w17 ).
[↩](https://www.lesswrong.com/posts/pWruFSY7494vnucCE/biextensional-equivalence#3_3__Biextensional_Equivalence)
2. Indeed, this section on small Cartesian frames would make more sense as part of our discussion of multiplicative operations on Cartesian frames; our motivation for discussing these objects will be provided there. I'm introducing these objects early because they will be useful in a few contexts before we get to multiplicative operations. [↩](https://www.lesswrong.com/posts/pWruFSY7494vnucCE/biextensional-equivalence#4__Some_Small_Cartesian_Frames) |
af40f281-3277-443e-8dfa-05954ee10e3c | trentmkelly/LessWrong-43k | LessWrong | Any suggestions for an impactful master's thesis in Political Science?
I have a particular interest in the interplay between political science and artificial intelligence. My focus is less on safety and more on implementation and use cases. I have especially focused on education and decision-making more broadly, in the context of the Global South (e.g., Khan Academy's use of AI for personal tutoring), but I am open to any inputs and ideas, as I am at a very early phase (I begin in January).

The idea is not to be blinded by quick or conventional ideas, but to gain an understanding of WHY a certain focus could be impactful, and to use this unique opportunity to dive into a topic as a first step towards an impactful "career" further on.
Any inputs are welcome :-) |
56a72e79-a64e-4fa7-b96e-ff79e02930f5 | trentmkelly/LessWrong-43k | LessWrong | Visualizing Bayesian Inference [link]
Galton Visualizing Bayesian Inference (article @ CHANCE)
Excerpt:
> What does Bayes Theorem look like? I do not mean what does the formula—
>
> P(A|B) = P(B|A) P(A) / P(B)
>
>
> —look like; these days, every statistician knows that. I mean, how can we visualize the cognitive content of the theorem? What picture can we appeal to with the hope that any person curious about the theorem may look at it, and, after a bit of study say, “Why, that is clear—I can indeed see what is happening!”
>
> Francis Galton could produce just such a picture; in fact, he built and operated a machine in 1877 that performs that calculation. But, despite having published the picture in Nature and the Proceedings of the Royal Institution of Great Britain, he never referred to it again—and no reader seems to have appreciated what it could accomplish until recently.
Schematics for the machine and its algorithm can be found at the link. This is a really cool design, and maybe it can aid Eliezer's and others' efforts to help people understand Bayes' Theorem. |
3cde9f34-360d-4b55-a75a-edef17ac0531 | trentmkelly/LessWrong-43k | LessWrong | Named Distributions as Artifacts
On confusing priors with models
Being Abraham de Moivre and being born in the 17th century must have been a really sad state of affairs. For one, you have the bubonic plague thing going on, but even worse for de Moivre, you don't have computers and sensors for automated data collection.
As someone interested in complex-real world processes in the 17th century, you must collect all of your observations manually, or waste your tiny fortune on paying unreliable workers to collect it for you. This is made even more complicated by the whole "dangerous to go outside because of the bubonic plague" thing.
As someone interested in chance and randomness in the 17th century, you've got very few models of "random" to go by. Sampling a uniform random variable is done by literally rolling dice or tossing a coin. Dice which are imperfectly crafted, so as to not be quite uniform. A die (or coin) which limits the possibilities of your random variable to between 2 and 20 (most likely 12, but I'm not discounting the existence of medieval d20s). You can order custom dice, you can roll multiple dice and combine the results into a number (e.g. 4d10 are enough to sample uniformly from between 1 and 10,000), but it becomes increasingly tedious to generate larger numbers.
Even more horribly, once you tediously gather all this data, your analysis of it is limited to pen and paper, you are limited to 2 plotting dimensions, maybe 3 if you are a particularly good artist. Every time you want to visualize 100 samples a different way you have to carefully draw a coordinate system and tediously draw 100 points. Every time you want to check if an equation fits your 100 samples, you must calculate the equation 100 times... and then you must compute the difference between the equation results and your samples, and then manually use those to compute your error function.
I invite you to pick two random dice and throw them until you have a vague experimental proof that the CLT holds, and then think about the |
862e9d64-f7be-4cb5-a792-412da27fbcd8 | trentmkelly/LessWrong-43k | LessWrong | Walkthrough: The Transformer Architecture [Part 2/2]
If you are already sort of familiar with the Transformer, this post can serve as a standalone technical explanation of the architecture. Otherwise, I recommend reading part one to get the gist of what the network is doing.
Yesterday, I left us with two images of the Transformer architecture. These images show us the general flow of data through the network. The first image shows the stack of encoders and decoders in their bubbles, which is the basic outline of the Transformer. The second image shows us the sublayers of the encoder and decoder.
Now, with the picture of how the data moves through the architecture, I will fully explain a forward pass with an example. Keep the general structure in mind as we go through the details, which can be a bit mind-numbing at parts.
The task
The Transformer is well suited for translating sentences between languages. Since I don't speak any language other than English, I have decided to translate between sarcastic sentences and their intended meaning. In the example, I am translating the phrase "Yeah right" to "That's not true." This way all those robots who only understand the literal interpretation of English can simply incorporate a Transformer into their brain and be good to go.
The embedding
Before the sentence "Yeah right" can be fed into the architecture, it needs to take a form that is understood by the network. In order to have the architecture read words, we therefore need to embed each word into a vector.
We could just take the size of the vocabulary of the document we are working with, and then one-hot encode each word into a fantastically sparse and long vector. But this approach has two problems (a short sketch after the list makes them concrete):
1. The high dimension that these vectors are in makes them harder to work with.
2. There's no natural interpretation of proximity in this vector space.
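Here is a minimal sketch of both problems (the vocabulary, dimensions, and variable names are made up for illustration; this is not the post's code):

```python
import numpy as np

vocab = ["yeah", "right", "that's", "not", "true"]
V, d = len(vocab), 3  # vocabulary size, embedding dimension

# One-hot: one V-dimensional vector per word. Every pair of
# distinct words is equally far apart, so proximity carries no
# meaning, and real vocabularies make V enormous.
one_hot = np.eye(V)

# Learned embedding: a trainable V x d table of dense vectors.
# During training, related words can drift close together.
embedding_table = np.random.randn(V, d)
print(embedding_table[vocab.index("yeah")])  # a 3-dim dense vector
```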
Ideally, we want words that are similar to be related in some way when they are embedded into a vector format. For instance, we might want all the fruits {apple, ora |
bcc9ff99-0bc1-48f4-ad67-3ff8977e61da | StampyAI/alignment-research-dataset/special_docs | Other | Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient.
Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient

Shihui Li†, Yi Wu‡, Xinyue Cui§, Honghua Dong§, Fei Fang†, Stuart Russell‡
† Carnegie Mellon University {shihuil,feifang}@cmu.edu
‡ University of California, Berkeley {jxwuyi,russell}@eecs.berkeley.edu
§ Tsinghua University {cuixy14,dhh14}@mails.tsinghua.edu.cn
Abstract

Despite the recent advances of deep reinforcement learning (DRL), agents trained by DRL tend to be brittle and sensitive to the training environment, especially in multi-agent scenarios. In the multi-agent setting, a DRL agent's policy can easily get stuck in a poor local optimum w.r.t. its training partners – the learned policy may be only locally optimal to other agents' current policies. In this paper, we focus on the problem of training robust DRL agents with continuous actions in the multi-agent learning setting, so that the trained agents can still generalize when their opponents' policies alter. To tackle this problem, we propose a new algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG), with the following contributions: (1) we introduce a minimax extension of the popular multi-agent deep deterministic policy gradient algorithm (MADDPG) for robust policy learning; (2) since the continuous action space leads to computational intractability in our minimax learning objective, we propose Multi-Agent Adversarial Learning (MAAL) to efficiently solve our proposed formulation. We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments, and the agents trained by our method significantly outperform existing baselines.
1. Introduction

Most real-world problems involve interactions between multiple agents, and the complexity of the problem increases significantly when the agents co-evolve together. Thanks to the recent advances of deep reinforcement learning (DRL) on single-agent scenarios, which led to successes in playing Atari games (Mnih et al. 2015), playing Go (Silver et al. 2016) and robotics control (Levine et al. 2016), it has been a rising trend to adapt single-agent DRL algorithms to multi-agent learning scenarios, and many works have shown great successes on a variety of problems, including automatic discovery of communication and language (Sukhbaatar, Fergus, and others 2016; Mordatch and Abbeel 2017), multiplayer games (Peng et al. 2017a; OpenAI 2018), traffic control (Wu et al. 2017) and the analysis of social dilemmas (Leibo et al. 2017).

The critical challenge when adapting classical single-agent DRL algorithms to the multi-agent setting is the training instability issue: as training progresses, each agent's policy is changing and therefore the environment becomes non-stationary from the perspective of any individual agent (in a way that is not explainable by changes in the agent's own policy). This non-stationarity can cause significant problems when directly applying single-agent DRL algorithms; for example, the variance of the policy gradient can be exponentially large when the number of agents increases (Lowe et al. 2017). To handle this instability issue, recent works, such as the counterfactual multi-agent policy gradients (Foerster et al. 2017) and the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) (Lowe et al. 2017), proposed to utilize a centralized critic within the actor-critic learning framework to reduce the variance of the policy gradient.
Despite the fact that using a centralized critic stabilizes training, the learned policies can still be brittle and sensitive to their training partners and converge to a poor local mode. This is particularly severe for competitive environments: when the opponents alter their policies during testing, the performance of the learned policies can be drastically worse (Lazaridou, Peysakhovich, and Baroni 2016). Hence, a robust policy becomes desirable in the multi-agent setting: a well-trained agent should be able to behave well in testing when competing against opponents with strategies different from its training partners.

In this work, we focus on robust multi-agent reinforcement learning with continuous action spaces and propose a novel algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG). M3DDPG is a minimax extension[1] of the classical MADDPG algorithm (Lowe et al. 2017). Its core idea is that during training, we force each agent to behave well even when its training opponents respond in the worst way.

Our major contributions are summarized as follows:

- We introduce the minimax approach to robust multi-agent DRL and propose a novel minimax learning objective based on the MADDPG algorithm;
- In order to efficiently optimize the minimax learning objective, we propose an end-to-end learning approach, Multi-Agent Adversarial Learning (MAAL), which is inspired by the adversarial training (Goodfellow, Shlens, and Szegedy 2014) technique[2];
- We empirically evaluate our proposed M3DDPG algorithm on four mixed cooperative and competitive environments, and the agents trained by M3DDPG outperform baseline policies on all these environments.

[1] In fact, we are dealing with gains, i.e., maximizing each agent's accumulative reward, so the "minimax" here is essentially "maximin". We keep the term "minimax" to be consistent with the literature.
[2] The connection between MAAL and adversarial training will be discussed in detail at the end of section 4.

In the rest of the paper, we first present related works in section 2. Notations and standard algorithms are described in section 3. Our main algorithm, M3DDPG, is introduced in section 4. Experimental results are in section 5.
2. Related Work

Multi-agent reinforcement learning (Littman 1994) has been a long-standing field in AI (Hu, Wellman, and others 1998; Busoniu, Babuska, and De Schutter 2008). Recent works in DRL use deep neural networks to approximately represent policy and value functions. Inspired by the success of DRL in single-agent settings, many DRL-based multi-agent learning algorithms have been proposed. Foerster et al. (2016b) and He et al. (2016) extended deep Q-learning to the multi-agent setting; Peng et al. (2017a) proposed a centralized policy learning algorithm based on actor-critic policy gradient; Foerster et al. (2016a) developed a decentralized multi-agent policy gradient algorithm with a centralized baseline; Lowe et al. (2017) extended DDPG to the multi-agent setting with a centralized Q function; Wei et al. (2018) and Grau-Moya et al. (2018) proposed multi-agent variants of the soft-Q-learning algorithm (Haarnoja et al. 2017); Yang et al. (2018) focused on multi-agent reinforcement learning with a very large population of agents. Our M3DDPG algorithm is built on top of MADDPG and inherits the decentralized policy and centralized critic framework.

Minimax is a fundamental concept in game theory and can be applied to general decision-making under uncertainty, prescribing a strategy that minimizes the possible loss in a worst case scenario (Osborne and others 2004). Minimax was first introduced to multi-agent reinforcement learning as minimax Q-learning by Littman (1994). More recently, some works combine the minimax framework and DRL techniques to find Nash equilibria in two-player zero-sum games (Foerster et al. 2018; Pérolat et al. 2016; Grau-Moya, Leibfried, and Bou-Ammar 2018). In our work, we utilize the minimax idea for the purpose of robust policy learning.

Robust reinforcement learning was originally introduced by Morimoto and Doya (2005), considering the generalization ability of the learned policy in the single-agent setting. This problem has also been studied recently with deep neural networks, for example by adding random noise to inputs (Tobin et al. 2017) or dynamics (Peng et al. 2017b) during training. Besides adding random noise, some other works implicitly adopt the minimax idea by utilizing the "worst noise" (Pinto et al. 2017; Mandlekar et al. 2017). These works force the learned policy to work well even under worst case perturbations and typically go under the name of "adversarial reinforcement learning", despite the fact that the original adversarial reinforcement learning problem was introduced in the setting of multi-agent learning (Uther and Veloso 1997). In our M3DDPG algorithm, we focus on the problem of learning policies that are robust to opponents with different strategies.

Within the minimax framework, finding the worst case scenario is a critical component. Lanctot et al. (2017) proposed an iterative approach that alternately computes the best response policy while fixing the other. Gao et al. (Gao, Mueller, and Hayward 2018) replace "mean" in the temporal difference learning rule with "minimum". In our work, we propose MAAL, which is a general, efficient and fully end-to-end learning approach. MAAL is motivated by adversarial training (Goodfellow, Shlens, and Szegedy 2014) and is suitable for an arbitrary number of agents. The core idea of MAAL is approximating the minimization in our minimax objective by a single gradient descent step. The idea of one-step-gradient approximation was also explored in meta-learning (Finn, Abbeel, and Levine 2017).
3. Background and Preliminary

In this section, we describe our problem setting and the standard algorithms. Most of the definitions and notations follow the original MADDPG paper (Lowe et al. 2017).

Markov Games

We consider a multi-agent extension of Markov decision processes (MDPs) called partially observable Markov games (Littman 1994). A Markov game for N agents is defined by a set of states S describing the possible configurations of all agents, a set of actions A_1, ..., A_N, and a set of observations O_1, ..., O_N for each agent. To choose actions, each agent i uses a stochastic policy \pi_{\theta_i} : O_i \times A_i \to [0,1] parameterized by \theta_i, which produces the next state according to the state transition function T : S \times A_1 \times \cdots \times A_N \to S. Each agent i obtains rewards as a function of the state and the agent's action, r_i : S \times A_i \to \mathbb{R}, and receives a private observation correlated with the state, o_i : S \to O_i. The initial states are determined by a distribution \rho : S \to [0,1]. Each agent i aims to maximize its own total expected return R_i = \sum_{t=0}^{T} \gamma^t r_i^t, where \gamma is a discount factor and T is the time horizon.

To minimize notation, in the following discussion we will often omit \theta from the subscript of \pi.

Q-Learning and Deep Q-Networks (DQN)

Q-learning and DQN (Mnih et al. 2015) are popular methods in reinforcement learning and have been previously applied to multi-agent settings (Foerster et al. 2016a; Tesauro 2004). Q-learning makes use of an action-value function for policy \pi, Q^\pi(s,a) = \mathbb{E}[R \mid s^t = s, a^t = a]. This Q function can be recursively rewritten as Q^\pi(s,a) = \mathbb{E}_{s'}[r(s,a) + \gamma \mathbb{E}_{a' \sim \pi}[Q^\pi(s',a')]]. DQN learns the action-value function Q^* corresponding to the optimal policy by minimizing the loss

L(\theta) = \mathbb{E}_{s,a,r,s'}[(Q^*(s,a \mid \theta) - y)^2], where y = r + \gamma \max_{a'} \bar{Q}^*(s',a').   (1)

\bar{Q} is a target Q function whose parameters are periodically updated with the most recent \theta, which helps stabilize learning. Another crucial component of stabilizing DQN is the use of an experience replay buffer D containing tuples (s, a, r, s'). The Q-learning algorithm is most suitable for DRL agents with discrete action spaces.
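As a concrete illustration of Eq. 1, the following PyTorch-style sketch (our own, not from the paper; terminal-state masking omitted for brevity) computes the DQN regression target with a frozen target network:

```python
import torch

def dqn_loss(q_net, target_q_net, batch, gamma=0.99):
    """TD loss of Eq. 1: regress Q(s, a) toward
    y = r + gamma * max_a' Q_target(s', a')."""
    s, a, r, s_next = batch  # tensors sampled from the replay buffer D
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # the target network is held fixed
        y = r + gamma * target_q_net(s_next).max(dim=1).values
    return torch.nn.functional.mse_loss(q_sa, y)
```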
Policy Gradient (PG) Algorithms

Policy gradient methods are another popular choice for a variety of RL tasks. Let \rho^\pi denote the discounted state visitation distribution for a policy \pi. The main idea of PG is to directly adjust the parameters \theta of the policy in order to maximize the objective J(\theta) = \mathbb{E}_{s \sim \rho^\pi, a \sim \pi_\theta}[R] by taking steps in the direction of \nabla_\theta J(\theta). Using the Q function defined previously, the gradient of the policy can be written as (Sutton et al. 2000)

\nabla_\theta J(\theta) = \mathbb{E}_{s \sim \rho^\pi, a \sim \pi_\theta}[\nabla_\theta \log \pi_\theta(a \mid s) \, Q^\pi(s,a)],   (2)

where \rho^\pi is the state distribution. The policy gradient theorem has given rise to several practical algorithms, which often differ in how they estimate Q^\pi. For example, one can simply use a sample return R^t = \sum_{i=t}^{T} \gamma^{i-t} r^i, which leads to the REINFORCE algorithm (Williams 1992). Alternatively, one could learn an approximation of the true action-value function Q^\pi(s,a), called the critic, which leads to a variety of actor-critic algorithms (Sutton and Barto 1998).

Deterministic Policy Gradient (DPG) Algorithms

DPG algorithms extend the policy gradient algorithm to deterministic policies \mu_\theta : S \to A (Silver et al. 2014). In particular, under certain conditions we can write the gradient of the objective J(\theta) = \mathbb{E}_s[R(s,a)] as

\nabla_\theta J(\theta) = \mathbb{E}_{s \sim D}[\nabla_\theta \mu_\theta(s) \, \nabla_a Q^\mu(s,a)|_{a = \mu_\theta(s)}],   (3)

where D is the replay buffer. Since this theorem relies on \nabla_a Q^\mu(s,a), it requires the action space A (and thus the policy \mu) to be continuous.

Deep deterministic policy gradient (DDPG) (Lillicrap et al. 2015) is a variant of DPG where the policy \mu and critic Q^\mu are approximated with deep neural networks. DDPG is an off-policy algorithm, and samples trajectories from a replay buffer of experiences that are stored throughout training. DDPG also makes use of a target network, as in DQN (Mnih et al. 2015).
Multi-Agent Deep Deterministic Policy Gradient

Directly applying single-agent RL algorithms to the multi-agent setting by treating other agents as part of the environment is problematic, as the environment appears non-stationary from the view of any one agent, violating the Markov assumptions required for convergence. This non-stationarity issue is particularly severe in the case of DRL with neural networks as function approximators. The core idea of the MADDPG algorithm (Lowe et al. 2017) is learning a centralized Q function for each agent which conditions on global information, to alleviate the non-stationarity problem and stabilize training.

More concretely, consider a game with N agents with policies parameterized by \theta = \{\theta_1, ..., \theta_N\}, and let \mu = \{\mu_1, ..., \mu_N\} be the set of all agents' policies. Then we can write the gradient of the expected return for agent i with policy \mu_i, J(\theta_i) = \mathbb{E}[R_i], as

\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{x,a \sim D}[\nabla_{\theta_i} \mu_i(o_i) \, \nabla_{a_i} Q_i^\mu(x, a_1, ..., a_N)|_{a_i = \mu_i(o_i)}].   (4)

Here Q_i^\mu(x, a_1, ..., a_N) is a centralized action-value function that takes as input the actions of all agents, a_1, ..., a_N, in addition to some state information x (i.e., x = (o_1, ..., o_N)), and outputs the Q-value for agent i. Let x' denote the next state from x after taking actions a_1, ..., a_N. The experience replay buffer D contains the tuples (x, x', a_1, ..., a_N, r_1, ..., r_N), recording experiences of all agents. The centralized action-value function Q_i^\mu is updated as

L(\theta_i) = \mathbb{E}_{x,a,r,x'}[(Q_i^\mu(x, a_1, ..., a_N) - y)^2],   (5)
y = r_i + \gamma Q_i^{\mu'}(x', a'_1, ..., a'_N)|_{a'_j = \mu'_j(o_j)},

where \mu' = \{\mu'_1, ..., \mu'_N\} is the set of target policies with delayed parameters \theta'_i.

Note that the centralized Q function is only used during training. During decentralized execution, each policy \mu_i only takes local information o_i to produce an action.
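To make Eqs. 4-5 concrete, here is a schematic PyTorch-style sketch of one MADDPG update for agent i (our own illustrative code; the network objects, batch layout, and replay buffer interface are assumptions, not the authors' implementation):

```python
import torch

def maddpg_update(i, actors, target_actors, critic_i, target_critic_i,
                  batch, gamma=0.99):
    """One update of agent i's centralized critic (Eq. 5) and
    decentralized actor (Eq. 4)."""
    x, obs, acts, rews, x_next, obs_next = batch

    # Critic: regress Q_i(x, a_1..a_N) toward r_i + gamma * Q_i'(x', a'_1..a'_N),
    # where each a'_j comes from that agent's *target* policy.
    with torch.no_grad():
        a_next = [ta(o) for ta, o in zip(target_actors, obs_next)]
        y = rews[i] + gamma * target_critic_i(x_next, a_next)
    critic_loss = ((critic_i(x, acts) - y) ** 2).mean()

    # Actor: ascend Q_i w.r.t. agent i's own action, holding the
    # sampled actions of all other agents fixed.
    a = list(acts)
    a[i] = actors[i](obs[i])
    actor_loss = -critic_i(x, a).mean()
    return critic_loss, actor_loss
```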
4. Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG)

In this section, we introduce our proposed new algorithm, Minimax Multi-agent Deep Deterministic Policy Gradient (M3DDPG), which is built on top of the MADDPG algorithm and particularly designed to improve the robustness of learned policies. Our M3DDPG algorithm contains two major novel components:

- Minimax Optimization: motivated by the minimax concept in game theory, we introduce minimax optimization into the learning objective;
- Multi-Agent Adversarial Learning: the continuous action space results in a computational intractability issue when optimizing our proposed minimax objective. Hence, we propose Multi-Agent Adversarial Learning (MAAL) to solve this optimization problem.
Minimax Optimization

In multi-agent RL, an agent's policy can be very sensitive to its learning partners' policies. Particularly in competitive environments, the learned policies can be brittle when the opponents alter their strategies. For the purpose of learning robust policies, we propose to update policies considering the worst situation: during training, we optimize the accumulative reward for each agent i under the assumption that all other agents act adversarially. This yields the minimax learning objective \max_{\theta_i} J_M(\theta_i), where

J_M(\theta_i) = \mathbb{E}_s[R_i]
             = \min_{a^t_{j \neq i}} \mathbb{E}_s\Big[\sum_{t=0}^{T} \gamma^t r_i(s^t, a^t_1, ..., a^t_N) \,\Big|\, a^t_i = \mu_i(o^t_i)\Big]   (6)
             = \mathbb{E}_{s^0}\Big[\min_{a^0_{j \neq i}} Q^\mu_{M,i}(s^0, a^0_1, ..., a^0_N) \,\Big|\, a^0_i = \mu_i(o^0_i)\Big].   (7)

Critically, in Eq. 6, the state s^{t+1} at time t+1 depends not only on the dynamics and the action \mu_i(o^t_i) but also on all the previous adversarial actions a^{t'}_{j \neq i} with t' \leq t. In Eq. 7, we derive the modified Q function Q^\mu_M(s, a_1, ..., a_N), which is naturally centralized and can be rewritten in a recursive form:

Q^\mu_{M,i}(s, a_1, ..., a_N) = r_i(s, a_1, ..., a_N) + \gamma \, \mathbb{E}_{s'}\Big[\min_{a'_{j \neq i}} Q^\mu_{M,i}(s', a'_1, ..., a'_N) \,\Big|\, a'_i = \mu_i(s')\Big].   (8)

Importantly, Q^\mu_M(s, a_1, ..., a_N) conditions on the current state s as well as the current actions a_1, ..., a_N, and represents the current reward plus the discounted worst case future return starting from the next state s'. This definition brings the benefit that we can naturally apply off-policy temporal difference learning later to derive the update rule for Q^\mu_M.

Note that for each agent i, none of the adversarial actions depend on its parameter \theta_i, so we can directly apply the deterministic policy gradient theorem to compute \nabla_{\theta_i} J_M(\theta_i) and use off-policy temporal difference learning to update the Q function. Thanks to the centralized Q function in MADDPG (Eq. 4), which takes in the actions from all the agents, our derivation naturally applies and is perfectly aligned with the MADDPG formulation (Eq. 4) by injecting a minimization over other agents' actions as follows:

\nabla_{\theta_i} J_M(\theta_i) = \mathbb{E}_{x \sim D}\Big[\nabla_{\theta_i} \mu_i(o_i) \, \nabla_{a_i} Q^\mu_{M,i}(x, a^\star_1, ..., a_i, ..., a^\star_N) \,\Big|\, a_i = \mu_i(o_i),\ a^\star_{j \neq i} = \arg\min_{a_{j \neq i}} Q^\mu_{M,i}(x, a_1, ..., a_N)\Big],   (9)

where D denotes the replay buffer and x denotes the state information.

Correspondingly, we obtain the new Q function update rule by adding another minimization to Eq. 5 when computing the target Q value:

L(\theta_i) = \mathbb{E}_{x,a,r,x' \sim D}[(Q^\mu_{M,i}(x, a_1, ..., a_N) - y)^2],   (10)
y = r_i + \gamma Q^{\mu'}_{M,i}(x', a'^\star_1, ..., a'_i, ..., a'^\star_N) \,\big|\, a'_i = \mu'_i(o_i),\ a'^\star_{j \neq i} = \arg\min_{a'_{j \neq i}} Q^{\mu'}_{M,i}(x', a'_1, ..., a'_N),

where \mu'_i denotes the target policy of agent i with delayed parameters \theta'_i, and Q^{\mu'}_{M,i} denotes the target Q network for agent i. Combining Eq. 9 and Eq. 10 yields our proposed minimax learning framework.

Multi-Agent Adversarial Learning
The critical challenge in our proposed minimax learning framework is how to handle the embedded minimization in Eq. 9 and Eq. 10. Due to the continuous action space as well as the non-linearity of the Q function, directly solving the minimization problem is computationally intractable. A naive approximate solution would be to perform an inner-loop gradient descent whenever performing an update step of Eq. 9 or Eq. 10, but this is too computationally expensive for practical use. Here we introduce an efficient and end-to-end solution, multi-agent adversarial learning (MAAL). The main ideas of MAAL can be summarized in two steps: (1) approximate the non-linear Q function by a locally linear function; (2) replace the inner-loop minimization with a 1-step gradient descent. Note that the core idea of MAAL, locally linearizing the Q function, is adapted from the recent adversarial training technique originally developed for supervised learning. We will discuss the connection between adversarial training and MAAL at the end of this section.

For conciseness, we first consider Eq. 10 and rewrite it into the following form with auxiliary variables \epsilon:

y = r_i + \gamma Q^{\mu'}_{M,i}(x', a'^\star_1, ..., a'_i, ..., a'^\star_N),   (11)
a'_k = \mu'_k(o_k), for all 1 \leq k \leq N,
a'^\star_j = a'_j + \epsilon_j, for all j \neq i,
\epsilon_{j \neq i} = \arg\min_{\epsilon_{j \neq i}} Q^{\mu'}_{M,i}(x', a'_1 + \epsilon_1, ..., a'_i, ..., a'_N + \epsilon_N).

Eq. 11 can be interpreted as seeking a set of perturbations \epsilon such that the perturbed actions a'^\star decrease the Q value the most. By linearizing the Q function at Q^{\mu'}_{M,i}(x', a'_1, ..., a'_N), the desired perturbation \epsilon_j can be locally approximated by the gradient direction at Q^{\mu'}_{M,i}(x', a'_1, ..., a'_N) w.r.t. a'_j. Then we use this local approximation to derive an approximation \hat{\epsilon}_j to the worst case perturbation by taking a small gradient step:

for all j \neq i, \hat{\epsilon}_j = -\alpha \nabla_{a_j} Q^{\mu'}_{M,i}(x', a'_1, ..., a_j, ..., a'_N),   (12)

where \alpha is a tunable coefficient representing the perturbation rate. It can also be interpreted as the step size of the gradient descent step: when \alpha is too small, the local approximation error will be small, but due to the small perturbation, the learned policy can be far from the optimal solution of the minimax objective we proposed; when \alpha is too large, the approximation error may incur too much trouble for the overall learning process and the agents may fail to learn good policies.
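The one-step approximation of Eq. 12 is straightforward to express in autodiff frameworks. The sketch below (our own PyTorch-style pseudocode, continuing the schematic interfaces assumed earlier) perturbs the opponents' target actions before the TD target is computed:

```python
import torch

def maal_perturb(i, q_target_i, x_next, a_next, alpha):
    """Approximate the inner minimization of Eq. 11 by one gradient
    step (Eq. 12): move every opponent action a'_j a small step
    *down* the gradient of agent i's Q value."""
    a_next = [a.detach().requires_grad_(j != i) for j, a in enumerate(a_next)]
    q = q_target_i(x_next, a_next).sum()
    grads = torch.autograd.grad(q, [a for j, a in enumerate(a_next) if j != i])
    perturbed, g = [], iter(grads)
    for j, a in enumerate(a_next):
        perturbed.append(a if j == i else (a - alpha * next(g)).detach())
    return perturbed
```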
We can apply this technique to Eq. 9 as well and eventually derive the following formulation:

\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{x,a \sim D}\Big[\nabla_{\theta_i} \mu_i(o_i) \, \nabla_{a_i} Q^\mu_{M,i}(x, a^\star_1, ..., a_i, ..., a^\star_N) \,\Big|\, a_i = \mu_i(o_i),\ a^\star_j = a_j + \hat{\epsilon}_j \text{ for all } j \neq i,\ \hat{\epsilon}_j = -\alpha_j \nabla_{a_j} Q^\mu_{M,i}(x, a_1, ..., a_N)\Big],   (13)
Algorithm 1: Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG) for N agents

for episode = 1 to M do
    Initialize a random process N for action exploration, and receive initial state information x
    for t = 1 to max-episode-length do
        For each agent i, select action a_i = \mu_i(o_i) + N_t w.r.t. the current policy and exploration
        Execute actions a = (a_1, ..., a_N) and observe reward r and new state information x'
        Store (x, a, r, x') in replay buffer D, and set x <- x'
        for agent i = 1 to N do
            Sample a random minibatch of S samples (x^k, a^k, r^k, x'^k) from D
            Set y^k = r^k_i + \gamma Q^{\mu'}_{M,i}(x'^k, a'_1, ..., a'_N) with a'_i = \mu'_i(o^k_i) and a'_{j \neq i} = \mu'_j(o^k_j) + \hat{\epsilon}'_j, where \hat{\epsilon}'_j is defined in Eq. 14
            Update the critic by minimizing the loss L(\theta_i) = (1/S) \sum_k (y^k - Q^\mu_{M,i}(x^k, a^k_1, ..., a^k_N))^2
            Update the actor using the sampled policy gradient, with \hat{\epsilon}_j defined in Eq. 13:
                \nabla_{\theta_i} J \approx (1/S) \sum_k \nabla_{\theta_i} \mu_i(o^k_i) \, \nabla_{a_i} Q^\mu_{M,i}(x^k, a^\star_1, ..., a_i, ..., a^\star_N) with a_i = \mu_i(o^k_i), a^\star_{j \neq i} = a^k_j + \hat{\epsilon}_j
        end for
        Update target network parameters for each agent i: \theta'_i <- \tau \theta_i + (1 - \tau) \theta'_i
    end for
end for
and

L(\theta_i) = \mathbb{E}_{x,a,r,x'}[(Q^\mu_{M,i}(x, a_1, ..., a_N) - y)^2],   (14)
y = r_i + \gamma Q^{\mu'}_{M,i}(x', a'^\star_1, ..., a'_i, ..., a'^\star_N),
a'_k = \mu'_k(o_k), for all 1 \leq k \leq N,
a'^\star_j = a'_j + \hat{\epsilon}'_j, for all j \neq i,
\hat{\epsilon}'_j = -\alpha_j \nabla_{a'_j} Q^{\mu'}_{M,i}(x', a'_1, ..., a'_N),

where \alpha_1, ..., \alpha_N are additional parameters. MAAL only requires one additional gradient computation and can be executed in a fully end-to-end fashion. Finally, combining Eq. 13 and Eq. 14 completes MAAL. The overall algorithm, M3DDPG, is summarized as Algorithm 1.
Discussion

Connection to Adversarial Training. Adversarial training is a robust training approach for deep neural networks on supervised learning (Goodfellow et al. 2014). The core idea is to force the classifier to predict correctly even when given adversarial examples, which are obtained by adding a small adversarial perturbation to the original input data such that the classification loss is increased the most. Formally, suppose the classification loss function is L(\theta) = \mathbb{E}_{x,y}[f_\theta(x, y)] with input data x and label y. Adversarial training aims to optimize the following adversarial loss instead:

L_{adv}(\theta) = \mathbb{E}_{x,y}[f_\theta(x + \epsilon^\star, y)],   (15)
\epsilon^\star = \arg\max_{\|\epsilon\|} f_\theta(x + \epsilon, y),

where the maximization is over perturbations of bounded norm. The core technique to efficiently optimize L_{adv}(\theta) is to locally linearize the loss function at f_\theta(x, y) and approximate \epsilon^\star by the scaled gradient.

Thanks to the centralized Q function, which takes the actions from all the agents as part of the input, we are able to easily inject the minimax optimization (Eq. 11) and represent it in a similar way to adversarial training (Eq. 15), so that we can adopt a similar technique to effectively solve our minimax optimization in a fully end-to-end fashion.
Connection to Single-Agent Robust RL. M3DDPG with MAAL can also be viewed as the multi-agent counterpart of robust reinforcement learning (RRL) (Morimoto and Doya 2005) in the single-agent setting, which aims to bridge the gap between training in simulation and testing in the real world by adding adversarial perturbations to the transition dynamics during training. Here, we consider the multi-agent setting and add worst case perturbations to the actions of opponent agents during training. Note that from the perspective of a single agent, perturbations on opponents' actions can also be considered a special adversarial noise on the dynamics.
Choice of \alpha. In the extreme case of \alpha = 0, M3DDPG degenerates to the original MADDPG algorithm, while as \alpha increases, the policy learning tends to be more robust but the optimization becomes harder. In practice, using a fixed \alpha throughout training can lead to very unstable learning behavior due to the changing scale of the gradients. The original adversarial training paper (Goodfellow, Shlens, and Szegedy 2014) suggests computing \epsilon with a fixed norm, namely g = \nabla_x f_\theta(x, y), \hat{\epsilon} = \alpha g / \|g\|, where x denotes the input data to the classifier and y denotes the label. Accordingly, in our M3DDPG algorithm, we can adaptively compute the perturbation \hat{\epsilon}_j by

g = \nabla_{a_j} Q^\mu_{M,i}(x, a_1, ..., a_N), \hat{\epsilon}_j = -\alpha_j \, g / \|g\|.   (16)

Eq. 16 generally works fine in practice, but in some hard multi-agent learning environments, unstable training behavior can still be observed. We suspect that this is because of the changing norm of actions in these situations. Different from the supervised learning setting, where the norm of the input data x is typically stable, in reinforcement learning the norm of actions can change drastically even within a single episode. Therefore, it is possible that even a perturbation with a small fixed norm overwhelms the action a_j, which may potentially lead to a computational stability issue. Therefore, we also introduce the following alternative for adaptive perturbation computation:

g = \nabla_{a_j} Q^\mu_{M,i}(x, a_1, ..., a_N), \hat{\epsilon}_j = -\alpha_j \|a_j\| \, g / \|g\|.   (17)

Lastly, note that in a mixed cooperative and competitive environment, ideally we only need to add adversarial perturbations to competitors. But empirically we observe that also adding (smaller) perturbations to collaborators can further improve the quality of learned policies.
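Both normalizations are a one-line change in the perturbation sketch given earlier; a hedged illustration of Eq. 16 and Eq. 17 (our own code, with a small stabilizing constant that is our addition):

```python
import torch

def normalized_step(a_j, grad_j, alpha, scale_by_action_norm=False):
    """Eq. 16: take a step of fixed norm alpha along -grad/||grad||.
    Eq. 17: additionally scale the step by ||a_j|| so the
    perturbation stays proportional to the action itself."""
    direction = grad_j / (grad_j.norm() + 1e-8)  # epsilon for numerical safety (our addition)
    step = alpha * direction
    if scale_by_action_norm:
        step = step * a_j.norm()
    return a_j - step
```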
5. Experiments

We adopt the same particle-world environments as the MADDPG paper (Lowe et al. 2017) as well as the training configurations. \alpha is selected from a grid search over 0.1, 0.01 and 0.001. For testing, we generate a fixed set of 2500 environment configurations (i.e., landmarks and birthplaces) and evaluate on this fixed set for a fair comparison.
Figure 1: Illustrations of some environments we consider, including Physical Deception (left) and Predator-Prey (right).
Environments

The particle-world environment consists of N cooperative agents, M adversarial agents and L landmarks in a two-dimensional world with continuous space. We focus on four mixed cooperative and competitive scenarios to best examine the effectiveness of our minimax formulation.

Covert communication. This is an adversarial communication environment, where a speaker agent ('Alice') must communicate a message to a listener agent ('Bob') (N = 2), who must reconstruct the message at the other end. However, an adversarial agent ('Eve') (M = 1) is also observing the channel and wants to reconstruct the message — Alice and Bob are penalized based on Eve's reconstruction, and thus Alice must encode her message using a randomly generated key, known only to Alice and Bob.
Figure 2: Comparison between M3DDPG (Minimax) and classical MADDPG (MA) on the four mixed competitive environments (covert communication, keep-away, physical deception, predator-prey). Each bar cluster shows the 0-1 normalized score for a set of competing policies in different roles (agent vs adversary), where a higher score is better for the agent. In all cases, M3DDPG outperforms MADDPG when directly pitted against it.
Keep-away. This scenario consists of L = 1 target landmark, N = 2 cooperative agents and M = 1 adversarial agent. Cooperating agents need to reach the landmark and keep the adversarial agent away from the landmark by pushing it, while the adversarial agent must stay at the landmark to occupy it.

Physical deception. Here, N = 2 agents cooperate to reach a single target landmark from a total of L = 2 landmarks. They are rewarded based on the minimum distance of any agent to the target (so only one agent needs to reach the target landmark). However, a lone adversary (M = 1) also desires to reach the target landmark; the catch is that the adversary does not know which of the landmarks is the correct one. Thus the cooperating agents, who are penalized based on the adversary's distance to the target, learn to spread out and cover all landmarks so as to deceive the adversary.

Predator-prey. In this variant of the classic predator-prey game, N = 3 slower cooperating agents must chase the faster adversary (M = 1) around a randomly generated environment with L = 2 large landmarks impeding the way. Each time the cooperative agents collide with the adversary, the agents are rewarded while the adversary is penalized.
Comparison to MADDPG

To evaluate the quality of learned policies trained by different algorithms in competitive scenarios, we measure the performance of agents trained by our M3DDPG and agents trained by classical MADDPG in the roles of both normal agent and adversary in each environment.

The results are demonstrated in Figure 2, where we measure the rewards of the normal agents in different scenarios and normalize them to 0-1. We notice that in all the environments, the highest score is achieved when the M3DDPG agents play as the normal agents against the MADDPG adversary (Minimax vs MA), while the lowest score is when the MADDPG agents act as normal agents against the M3DDPG adversary (MA vs Minimax). This indicates that policies trained by M3DDPG have much higher quality than those trained by the original MADDPG.

Figure 3: Performances of M3DDPG (Minimax, red) and MADDPG (MA, blue) under the worst situation, i.e., against the disruptive adversaries, on covert communication, keep-away, physical deception and predator-prey (from left to right). The y-axis denotes the reward of the (fixed) normal agents and the x-axis denotes the training episodes performed by the disruptive adversaries. A higher reward implies a more robust policy. Agents trained by M3DDPG (Minimax) perform better on all the scenarios.
Evaluation with Disruptive Adversaries

Instead of having M3DDPG and MADDPG directly compete against each other, we now consider their performances in the worst possible situations through their individual competitions against disruptive adversaries. We construct a disruptive adversary by (1) fixing the trained normal agents (M3DDPG or MADDPG); (2) setting the reward of the disruptive adversary to the negative value of the normal agents' reward (so that the problem becomes zero-sum); and then (3) using DDPG to train the disruptive adversary from scratch under this zero-sum reward setting.
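As an illustration of step (2), one way to realize the zero-sum conversion is a thin environment wrapper that overwrites the adversary's reward (a sketch under an assumed multi-agent environment interface of our own, not the authors' code):

```python
class ZeroSumWrapper:
    """Replace the adversary's reward with the negative sum of the
    (fixed) normal agents' rewards, making the game zero-sum."""

    def __init__(self, env, adversary_index):
        self.env = env
        self.adv = adversary_index

    def reset(self):
        return self.env.reset()

    def step(self, actions):
        obs, rewards, done, info = self.env.step(actions)
        normal = [r for j, r in enumerate(rewards) if j != self.adv]
        rewards[self.adv] = -sum(normal)
        return obs, rewards, done, info
```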
In the particle-world environment, the competitive scenarios are generally not zero-sum, which implies that under the default reward function, the adversaries may have purposes other than directly disrupting the normal agents. So, in order to evaluate the effectiveness of our minimax optimization in the worst situation, we convert every problem into a zero-sum form and compare the performances of our M3DDPG agents as well as the MADDPG agents against these artificially constructed adversaries. Moreover, since each of our four environments has only 1 adversary, after fixing the normal agents, the learning problem degenerates to the single-agent setting, and classical DDPG is sufficient to stably train a disruptive adversary.

The results are shown in Figure 3, where we plot the reward of the fixed normal agents of different algorithms as the training of the disruptive adversaries progresses until convergence. Note that due to the different environment designs, the difficulty for the disruptive agents to break the strategy of the normal agents varies: for example, in covert communication, since the private key is not accessible to the adversary agent, breaking the encrypted communication will be very hard; while in physical deception, since we do not allow communication and fix the normal agents, a smart enough adversary may easily infer the target landmark by observing the initial behavior of the two cooperative agents. Nevertheless, despite these intrinsic properties, the M3DDPG agents (Minimax) achieve higher reward in all the scenarios, which implies better robustness even in the worst situation.

6. Conclusion

In this paper, we propose a novel algorithm, minimax multi-agent deep deterministic policy gradient (M3DDPG), for robust multi-agent reinforcement learning, which leverages the minimax concept and introduces a minimax learning objective. To efficiently optimize the minimax objective, we propose MAAL, which approximates the inner-loop minimization by a single gradient descent step. Empirically, M3DDPG outperforms the benchmark methods on four mixed cooperative and competitive scenarios.

Nevertheless, due to the single-step gradient approximation in MAAL, which is computationally efficient, an M3DDPG agent can only explore the locally worst situation during the evolving process at training, which can still lead to unsatisfying behavior when testing opponents have drastically different strategies. It will be an interesting direction to re-examine the robustness-efficiency trade-off in MAAL and further improve policy learning by placing more computation on the minimax optimization. We leave this as future work.
References

Busoniu, L.; Babuska, R.; and De Schutter, B. 2008. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38(2):156.

Finn, C.; Abbeel, P.; and Levine, S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400.

Foerster, J. N.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016a. Learning to communicate with deep multi-agent reinforcement learning. CoRR abs/1605.06676.

Foerster, J. N.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016b. Learning to communicate to solve riddles with deep distributed recurrent Q-networks.

Foerster, J.; Farquhar, G.; Afouras, T.; Nardelli, N.; and Whiteson, S. 2017. Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926.

Foerster, J.; Chen, R. Y.; Al-Shedivat, M.; Whiteson, S.; Abbeel, P.; and Mordatch, I. 2018. Learning with opponent-learning awareness. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 122-130. International Foundation for Autonomous Agents and Multiagent Systems.

Gao, C.; Mueller, M.; and Hayward, R. 2018. Adversarial policy gradient for alternating Markov games.

Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2672-2680.

Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Grau-Moya, J.; Leibfried, F.; and Bou-Ammar, H. 2018. Balancing two-player stochastic games with soft Q-learning. arXiv preprint arXiv:1802.03216.

Haarnoja, T.; Tang, H.; Abbeel, P.; and Levine, S. 2017. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165.

He, H.; Boyd-Graber, J.; Kwok, K.; and Daumé III, H. 2016. Opponent modeling in deep reinforcement learning. In International Conference on Machine Learning, 1804-1813.

Hu, J.; Wellman, M. P.; et al. 1998. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, volume 98, 242-250. Citeseer.

Lanctot, M.; Zambaldi, V.; Gruslys, A.; Lazaridou, A.; Perolat, J.; Silver, D.; Graepel, T.; et al. 2017. A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems, 4190-4203.

Lazaridou, A.; Peysakhovich, A.; and Baroni, M. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182.

Leibo, J. Z.; Zambaldi, V. F.; Lanctot, M.; Marecki, J.; and Graepel, T. 2017. Multi-agent reinforcement learning in sequential social dilemmas. CoRR abs/1702.03037.

Levine, S.; Finn, C.; Darrell, T.; and Abbeel, P. 2016. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17(1):1334-1373.

Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.

Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, 157-163.

Lowe, R.; Wu, Y.; Tamar, A.; Harb, J.; Abbeel, O. P.; and Mordatch, I. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems, 6379-6390.

Mandlekar, A.; Zhu, Y.; Garg, A.; Fei-Fei, L.; and Savarese, S. 2017. Adversarially robust policy learning: Active construction of physically-plausible perturbations. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, 3932-3939. IEEE.

Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529-533.

Mordatch, I., and Abbeel, P. 2017. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908.

Morimoto, J., and Doya, K. 2005. Robust reinforcement learning. Neural Computation 17(2):335-359.

OpenAI. 2018. OpenAI Five. https://blog.openai.com/openai-five/. Accessed: 2018-09-03.

Osborne, M. J., et al. 2004. An Introduction to Game Theory, volume 3. Oxford University Press, New York.

Peng, P.; Yuan, Q.; Wen, Y.; Yang, Y.; Tang, Z.; Long, H.; and Wang, J. 2017a. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. CoRR abs/1703.10069.

Peng, X. B.; Andrychowicz, M.; Zaremba, W.; and Abbeel, P. 2017b. Sim-to-real transfer of robotic control with dynamics randomization. arXiv preprint arXiv:1710.06537.

Pérolat, J.; Strub, F.; Piot, B.; and Pietquin, O. 2016. Learning Nash equilibrium for general-sum Markov games from batch data. arXiv preprint arXiv:1606.08718.

Pinto, L.; Davidson, J.; Sukthankar, R.; and Gupta, A. 2017. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702.

Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; and Riedmiller, M. 2014. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, 387-395.

Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484-489.

Sukhbaatar, S.; Fergus, R.; et al. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems, 2244-2252.

Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge.

Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, 1057-1063.

Tesauro, G. 2004. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, 871-878.

Tobin, J.; Fong, R.; Ray, A.; Schneider, J.; Zaremba, W.; and Abbeel, P. 2017. Domain randomization for transferring deep neural networks from simulation to the real world. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, 23-30. IEEE.

Uther, W. T., and Veloso, M. M. 1997. Generalizing adversarial reinforcement learning. In Proceedings of the AAAI Fall Symposium on Model Directed
Autonomous Systems , 206. Citeseer.
[Wei et al. 2018] Wei, E.; Wicke, D.; Freelan, D.; and Luke,
S. 2018. Multiagent soft q-learning. arXiv preprint
arXiv:1804.09817 .
[Williams 1992] Williams, R. J. 1992. Simple statistical
gradient-following algorithms for connectionist reinforce-
ment learning. Machine learning 8(3-4):229–256.
[Wu et al. 2017] Wu, C.; Kreidieh, A.; Parvate, K.; Vinitsky,
E.; and Bayen, A. M. 2017. Flow: Architecture and bench-
marking for reinforcement learning in traffic control. arXiv
preprint arXiv:1710.05465 .
[Yang et al. 2018] Yang, Y .; Luo, R.; Li, M.; Zhou, M.; Zhang,
W.; and Wang, J. 2018. Mean field multi-agent reinforcement
learning. arXiv preprint arXiv:1802.05438 . |
6542ffef-5bca-43b1-ba50-993a4925f092 | trentmkelly/LessWrong-43k | LessWrong | Why is effective altruism new and obvious?
Crossposted from the EA forum ages ago. I meant to put it on my own blog then, but it seems I somehow failed to.
Ben Kuhn, playing Devil’s advocate:
> Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
>
> The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.
I think this is a good point. If you find yourself in a small group advocating for an obvious and timeless idea, and it’s 2014, something a bit strange is probably going on. As a side note, if people actually come out and disagree, this is more worrying and you should really take some time out to be puzzled by it.
I can think of a few reasons that ‘effective altruism’ might seem so obvious and yet the EA movement might only just be starting.
I will assume that the term ‘effective altruism’ is intended to mean roughly what the words in it suggest: helping other people in efficient ways. If you took ‘effective altruism’ to be defined by principles regarding counterfactual reasons and supererogatory acts and so on, I don’t think you should be surprised that it is a new movement. However I don’t think that’s what ‘effective altruism’ generally means to people; rather |
7203d9a0-f087-4859-9f57-3cde70dfbe2b | trentmkelly/LessWrong-43k | LessWrong | Blueprint for a Brighter Future
Presumably our shared goal is to make possible a comfortable and dignified life for the maximum number of people while treading lightly on the environment, balancing individual rights and status-seeking behaviors against the common good (with a view to preventing tragedies of the commons). Importantly, any solution to these problems must not require human nature to fundamentally change. Both left- and right-wing efforts of the 20th century to create a "new man", fit to inhabit their respective notions of utopia, only led to so much butchery.
I instead propose to design society around the human animal in such a way as to soften our rough edges, mainly by meeting everyone's needs and preventing extreme inequality. This won't prevent, say, crimes of passion but it would prevent crimes born of scarcity, such as burglaries, muggings, squatting, carjacking, scams, ransom, etc. It should also make accommodations for people wanting to live different ways, such as those unhappy in the dense living conditions found in urban population centers.
To this end I envision a nationwide network of robotically printed arcologies built atop or adjacent to the ruins of former cities (to make use of existing electrical infrastructure, roads, etc.) reclaimed by nature due to climate disaster. Said arcologies are coated in PV to generate all their own power as well as integrated geothermal (which now works anywhere on Earth thanks to Quaise Energy), linked to neighboring arcologies via enclosed elevated highspeed rail guideways, with PRT, moving walkways and multi-directional ropeless elevators furnishing transport within each structure.
Architecturally the interiors would share qualities of airports, malls and cruise ships, subdivided into units equipped according to their use, with every structure being mixed zone residential, commercial, agricultural (indoor vertical aquaponics) and so on. Mining and manufacturing are remote automated facilities.
Arcology interiors would be self-clean |
c5081052-4cc2-4f44-9f5c-729c2ee6bc2c | trentmkelly/LessWrong-43k | LessWrong | 5. Uphold Voluntarism: Digital Defense
Security in the physical realm is an extraordinarily hard problem, but an inescapable one. Our civilization now also rests on software infrastructure. Security in the digital realm is now also an inescapable problem. The current software infrastructure is not just insecure, it is insecurable. Digital security is extraordinarily hard, but it is differently hard than physical security. Reasoning by analogy is fraught with peril. Let’s begin by explaining how these realms differ, and how to approach computer security.
Security in Digital vs. Physical Realms
Security in the digital realm differs in fundamental ways from security in the physical realm. But all security starts with defensible boundaries.
Physics does not let us build impenetrable walls. We can build stronger and stronger walls, but no matter how strong the wall, there is always an ever stronger force that can break through it. In the digital realm, perfect boundaries are cheap and plentiful. All modern CPUs support address space separation and containment of user-mode processes. Many programming languages support memory safety and object encapsulation. Modern cryptographic algorithms give us separation that seems close enough to perfect. The software built on these boundaries—operating systems, application software, cryptographic protocols—thus could be built securely.
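To make the contrast concrete, here is a toy sketch in Python (illustrative only, not from the original essay) of the object-encapsulation idea: a closure hands out a read-only facet of a mutable log, so holding the facet grants the authority to observe but not to mutate.

```python
def make_log():
    """Return (append, read): full authority plus a read-only facet."""
    entries = []

    def append(item):
        entries.append(item)

    def read():
        # Defensive copy: the caller gets a snapshot, not the list itself.
        return tuple(entries)

    return append, read

append, read = make_log()
append("boot ok")
print(read())  # ('boot ok',)
# Code given only `read` has no reference through which to write.
```

Python's introspection facilities mean this only sketches the discipline; memory-safe object-capability languages enforce the boundary for real.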
In the physical realm, an attack is costly for the attacker. There is a marginal cost per victim, if nothing else, of the attacker’s attention. A good defense raises the marginal cost of attack. By contrast, software attacks typically have zero marginal cost per victim. Once malware works, its damage can be multiplied by billions using only the victim’s resources. Any vulnerable software system exposed to the outside world will eventually be attacked. We must build invulnerable systems when we can, and otherwise minimize the damage of a successful attack.
With perfect boundaries to build on, why is so much software so hopelessly insec |
9dc604e1-5f4d-4038-9243-eb852a2b751b | trentmkelly/LessWrong-43k | LessWrong | Drawing on Walls
When I started the bathroom project there was a lot of reason to move quickly: the bathroom wouldn't be usable while I was working on it, and the back bedroom was full of construction stuff. Once I got to the stage where the only thing left to do was plaster and paint the hallway, however, it was less of a priority. So we spent May-November with unfinished walls.
Since we were going to paint them at some point, one afternoon I thought it would be fun to draw on them with the kids. We got out the markers and drew lots of different things. I emphasized that it was only these walls we could draw on, which is the kind of rule the kids do well with.
A couple days later they drew on the walls again, but this time with crayon. Crayon, being wax-based, is not a good layer to have under paint. I hadn't thought to tell them not to use them, and they didn't have a way to know, but I was annoyed at myself. I got most of it off with hot water and a cloth, and then when it came to do the plastering I put a skim coat over it.
Later on a friend wanted help preparing for a coding interview, so we used the wall as a whiteboard:
One thing I hadn't considered was that you need more primer over dark marker than plain drywall. As with "no crayon under paint" this seems like it should have been obvious, and something I should have thought about before letting them draw on a large area of the wall, but it wasn't and I didn't, so Julia ended up spending longer painting and priming than we'd been thinking.
And then, the evening after all that painting, Anna took a marker over to the nice new clean white wall and started drawing. We hadn't told her that the wall was no longer ok for drawing, and at 3y "now that the wall is painted drawing isn't ok" is not the sort of thing I should be expecting her to know on her own. Luckily the marker was washable, so it wasn't too bad.
Overall this was more work than I was expecting, and probably wasn't worth it for the fun, at least not the way |
ea331d5f-0d22-4fb4-b9d4-d94227c60296 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Abstraction Learning
1 Introduction
---------------
Human intelligence has long been the goal of artificial intelligence. Humans can effectively handle multi-task learning, lifelong learning, transfer learning, few-shot learning, generalization, exploration, prediction, and decision-making. AI algorithms, on the other hand, still struggle with these problems (Legg and Hutter 2007). For example, supervised learning based on neural networks typically requires large training datasets and suffers from catastrophic forgetting (Kirkpatrick et al. 2017; Rusu et al. 2016; Goodfellow et al. 2013); most reinforcement learning algorithms (Parisotto et al. 2015; Schaul et al. 2015; Rusu et al. 2015) specialize in a single task through huge amounts of trial and error, and are difficult to generalize to new tasks.
So how is human intelligence formed? This remains an open question, and we hypothesize that there are at least three key elements. 1) *Intrinsic motivation*. Humans are intrinsically motivated to truly understand the world. This understanding is crucial to intelligence because it applies to and is required by all tasks that humans will be faced with. 2) *Unified network*. Humans use a single unified network – the brain – to tackle all tasks. This allows knowledge to be accumulated and shared across tasks, and thus increase the level of intelligence over time. 3) *Limited complexity*. Humans face several constraints on complexity. They have limited time and data to learn each task; they have limited space to encode and pass on knowledge; the brain has limited energy to operate. These constraints force humans to develop intelligence and find efficient solutions.
Interestingly, we find that abstraction learning is a way to combine these elements, and thus key to achieving human-like intelligence. Here, we define abstractions as the set of concepts and laws that the world is built upon. By this definition, learning abstractions is equivalent to discovering a universal model of the world and using it to interpret observations. This satisfies intrinsic motivation, and has two implications. First, because only a single world model is yielded by abstraction learning, it can be maintained in a unified network. This model can then be shared and improved across all tasks, facilitating knowledge transfer and generalization. Second, because abstraction learning enables interpretation of complex observations as concise concepts and laws, task models can be greatly simplified. Consequently, fewer samples are required to learn each task, and less energy is needed to perform each task.
While there is biological evidence suggesting the existence of abstractions in the brain (e.g., the grandmother cell; Gross 2002; Clark 2000; Konorski 1967), abstraction learning remains largely unexplored in AI research. One of the reasons abstraction learning is harder than other machine learning tasks such as classification is that it is difficult to establish a clear objective, or target, for training (Bengio et al. 2013). Two broadly related areas are probabilistic graphical models and neural networks. Probabilistic graphical models (Roweis 1998; Olshausen and Field 1996; Smolensky 1986) use abstractions to define random variables and graph structures. These abstractions serve to decompose the joint probability distribution and simplify computation, but they are given by human experts rather than learned by the model itself. Neural networks, on the other hand, aim to learn effective representations for specific tasks. Taking the autoencoder (Yann 1987; Bourlard and Kamp 1988; Hinton and Zemel 1994) as an example, one can learn a set of representations for data by unsupervised learning, typically aiming for dimensionality reduction. The process of closely matching the output with the original data forces the autoencoder to learn the abstraction. Through visualization (Zeiler and Fergus 2014; Simonyan et al. 2013; Springenberg et al. 2014), these artificial neurons can be qualitatively regarded as representing abstract concepts. In lifelong learning, knowledge shared between tasks has been treated as abstractions in recent works, for example by using a regularizer that prevents the parameters from changing drastically (Kirkpatrick et al. 2017) or by blocking any changes to the old task parameters (Rusu et al. 2016). However, these abstractions are not directly learned, but emerge as a by-product of minimizing task losses. Therefore, the quality of these abstractions cannot be guaranteed, depriving neural networks of the benefits that real abstractions would offer.
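As a concrete illustration of that setup (a minimal sketch under our own assumptions, not code from any of the cited works), a PyTorch autoencoder squeezes inputs through a narrow bottleneck and is trained to reconstruct them; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal autoencoder: 784 -> 32 -> 784 (sizes are illustrative)."""
    def __init__(self, n_in=784, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                       # stand-in batch of flattened images
loss = nn.functional.mse_loss(model(x), x)    # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()
```

The bottleneck is what makes the learned code behave like an abstraction: the network cannot reconstruct well without compressing regularities.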
In this paper, we aim to learn abstractions directly. This raises at least three challenges. First, while abstractions may be straightforward to express in natural language, they are hard to specify in the language of neurons and synapses. Second, because abstractions are like hidden variables that do not appear in the training data, there is no simple objective function that distinguishes between right and wrong abstractions. Third, even if we find such an objective function, which would probably be non-differentiable, how it should influence the network structure remains unclear. To overcome these challenges, we propose a novel framework named ONE that formulates abstraction learning as Optimization via Network Evolution. ONE incorporates three levels of innovations:
∙ Partition structure prior, with pre-allocated abstraction neurons. These abstraction neurons accumulate abstractions, and separate the network into a task-agnostic cognition part and task-specific decision parts. The cognition part generates abstractions, and the decision parts select abstractions. Task losses do not modify abstractions, thus improving reusability and avoiding catastrophic forgetting.
∙ Constrained optimization formulation, integrating three abstraction properties. 1) Variety. Abstractions should cover the various concepts and laws behind observations. 2) Simplicity. Each observation must be described succinctly by only a few abstractions, thus simplifying task-specific decision making. 3) Effectiveness. Each abstraction should be effective for multiple tasks, enhancing task performance and knowledge sharing.
∙ Network evolution algorithm, producing and improving abstraction structures. The constrained optimization problem for abstraction learning involves optimizing the network structure, which is beyond the capabilities of standard gradient-based methods. To solve this problem, we introduce connection growth to search through the structure space, local competition to improve search efficiency, and use the objective function to guide search direction.
Through extensive experiments on the MNIST dataset, we demonstrate that ONE successfully converges and learns abstractions that accomplish different tasks. Importantly, ONE shows elementary human-like intelligence in three aspects. First, ONE performs tasks based on simple abstractions, and thus activate only a small proportion of the whole network for each task. This leads to better generalization and less energy consumption. Second, when faced with new tasks, ONE is able to generate new abstractions. These abstractions accumulate, so as ONE learns more tasks, it makes more use of existing abstractions and generates fewer new ones. This facilitates task transfer and boosts learning speed. Third, ONE does not forget, and can thus learn continually.
2 Partition structure prior
----------------------------
Although abstraction has been discussed in a large body of literature (Hinton 1986; Bengio 2012; Krizhevsky et al. 2012; Collobert et al. 2011), to our knowledge current research remains largely at the level of giving general concepts and descriptions of abstractions, which we refer to as virtual abstractions. We consider that the process by which human beings realize basic intellectual activities can be divided into a cognitive process for concepts and laws and a decision-making process related to tasks. Because virtual abstractions lack physical meaning, it is difficult to draw a clear line between the cognition and decision-making parts, which obscures the channels of knowledge sharing among tasks and complicates the decision-making process of a single task. Therefore, in this section, we propose a specific form of entity abstraction and a partition structure based on the abstraction locked layer (see Figure 1).
Moreover, the structures (abstractions) activated by a limited number of tasks cannot amount to an understanding of the whole world; abstractions need to be continuously generated, accumulated, and reused by different tasks. In other words, each task should make full use of the existing abstractions and only generate the new abstractions that are conducive to it. This promotes the accumulation and reuse of abstractions and avoids forgetting knowledge acquired in the past.

Figure 1: Partition structure of the ONE model: ONE performs structure search in the cognitive part and parameter learning in the decision part, with an abstraction locked layer between them. In the cognitive part, the mechanisms of growth, extinction, and competition promote the production of excellent structures (abstractions), where dashed lines represent death, the number of pulses denotes the firing intensity of neurons, and the dark units are winners. The mechanisms shown for one layer are applied to all layers of the cognitive part.
The whole network can be seen as a feedforward neural network with a fixed number of neurons, divided into the cognition and decision parts by an abstraction locked layer (see Figure 1). The abstraction layer separates neurons into two types: task-agnostic and task-specific. Each neuron in the abstraction layer is the output of a subnetwork composed of task-agnostic neurons. These subnetworks can overlap and take any form; in addition, they should be sparse and hierarchical. The task-specific neurons are organized into non-overlapping groups. The neurons within each group form fully connected layers and are intended for a specific task. Each group connects sparsely with the abstraction layer, selecting the relevant abstractions for each task. Inspired by biology (Dayan and Abbott 2001), three mechanisms promote the production of excellent abstractions: 1) Growth: a way of exploring the structure space; the more extensively we explore, the better the structure we expect to find. 2) Competition: it promotes the variety and simplicity of abstractions and prunes unpromising structures on the fly. 3) Extinction: it aims to optimize the abstractions for task effectiveness.
Formally, suppose there are $K$ tasks in total. Each task is associated with a dataset $D^{(k)}$ and a loss function $L^{(k)}$, and requires learning a mapping $f^{(k)}: X^{(k)} \to Y^{(k)}$, $k = 1, 2, \ldots, K$. All input spaces have the same dimension, $X^{(k)} \subseteq \mathbb{R}^n$, but the output spaces can be arbitrary, for example, discrete for classification and continuous for regression.
Let $L$ denote the number of layers formed by task-agnostic neurons, with the input layer being the first layer and the abstraction layer being the $L$-th layer. Let $I^{(l)}$ denote the set of neurons in the $l$-th layer, and $N^{(l)}$ the size of $I^{(l)}$. Each task-agnostic neuron represents a mapping $\Phi_i: X \to \mathbb{R}$, where $X = \bigcup_{k=1}^{K} X^{(k)}$ and $i \in \bigcup_{l=1}^{L} I^{(l)}$. Let $\Phi^{(l)} = [\Phi_i]^{T}_{i \in I^{(l)}}$ denote the concatenation of the mappings of each layer; then $\Phi^{(l)}: X \to \mathbb{R}^{N^{(l)}}$ can be defined recursively:
$$\Phi^{(1)}(x) = x, \tag{1}$$

$$\Phi^{(l)}(x) = \sigma^{(l)}\!\left(W^{(l-1)}\,\Phi^{(l-1)}(x) + b^{(l)}\right), \qquad l = 2, 3, \ldots, L, \tag{2}$$

where $W^{(l-1)} \in \mathbb{R}^{N^{(l)} \times N^{(l-1)}}$ is the weight matrix, $b^{(l)} \in \mathbb{R}^{N^{(l)}}$ is the bias vector, and $\sigma^{(l)}: \mathbb{R}^{N^{(l)}} \to \mathbb{R}^{N^{(l)}}_{+}$ is the activation function. Note that $\Phi^{(l)}(x)$ is non-negative for $l \ge 2$, and that neuron $i$ is included in the feedforward pass and thus "active" only if $\Phi_i(x) > 0$. This means the network operates in a distributed manner, activating a portion of the subnetworks and abstraction neurons for a given input. In other words, the network distributes its capacity over the input space.
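A minimal NumPy sketch of this recursion; the ReLU activation and random initialization are illustrative assumptions, since the paper leaves $\sigma^{(l)}$ generic beyond non-negativity:

```python
import numpy as np

def forward(x, weights, biases):
    """Compute Phi^(1), ..., Phi^(L) by the recursion in Eqs. (1)-(2).

    weights[i] maps layer i+1 to layer i+2 and has shape
    (N^(i+2), N^(i+1)); ReLU keeps every Phi^(l) non-negative,
    so a neuron is 'active' exactly when its output is > 0.
    """
    phis = [x]  # Phi^(1)(x) = x
    for W, b in zip(weights, biases):
        phis.append(np.maximum(0.0, W @ phis[-1] + b))
    return phis

sizes = [784, 500, 500, 500]  # cognitive-part layout used in Section 5
rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.05, (m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

phis = forward(rng.random(784), weights, biases)
active_counts = [(p > 0).sum() for p in phis[1:]]  # active neurons per layer
```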
In essence, ONE instantiates a special decomposition of f(k), which is f(k)=g(k)∘Φ(L), where g(k) represents the fully-connected layers in each task-specific group. If Φ(L) represents really good abstractions, this decomposition can be highly efficient, because each g(k) will be quite simple and only need a few parameters. This means the majority of computation for each task is shared, and a new task merely adds a small increment to the whole network.
There are at least four differences between ONE and other neural networks (NNs): 1) Partition structure: NNs are mostly designed to implement specific tasks, making the entire network serve the tasks. 2) Local parameter updating: NNs' parameters are fully updated during backpropagation, which is one of the reasons that knowledge acquired on old tasks is forgotten in the process of learning new tasks. In pre-trained models (Hinton and Salakhutdinov 2006; Mesnil et al. 2011), parameters are updated in a similar way to ours, but the knowledge captured is only a small part of all knowledge and cannot be further accumulated in later learning. 3) Abstraction locked layer: traditional NNs do not specify a concrete form for abstractions, remaining at the level of concepts and descriptions. 4) Dynamic network based on growth, competition, and extinction: the works of Zhou et al. (2012), Philipp and Carbonell (2017), and Cortes et al. (2016) explored dynamic neural networks, but none of them considered the multi-task setting. The network of Xiao et al. (2014) grows and branches only at the topmost layer, while ONE can change structures at any layer of the cognitive part.
3 Constraint optimization formulation
--------------------------------------
The structure mentioned above is just a framework for abstraction generation, which ensures reusability but does not guarantee the other properties of abstractions. We therefore formulate the development of abstractions as a constrained optimization problem, where the constraints ensure the variety and simplicity of abstractions, and the objective function measures the effectiveness of abstractions.
Variety constraint. ONE is intended for multiple tasks that can be radically different, so it must have the ability to generate and maintain various abstractions required to accomplish these tasks. Even if they are not required, a variety of abstractions would offer a wider perspective and probably lead to better solutions.
In ONE, each abstraction is represented by an abstraction neuron, which in turn corresponds to a subnetwork. Therefore, a variety of abstractions require a variety of subnetworks. The variety also provides an opportunity to search for specialized structures that express abstractions best.
Because the network capacity is fixed, in order to realize variety we need to control the size of subnetworks and make sure that they do not overlap much. Accordingly, we introduce the following two constraints.
$$\text{A1:} \quad \sum_{j \in I^{(l)}} \mathbb{I}\{\Phi_j(x) > 0\} \le V_1^{(l)}, \qquad l = 2, 3, \ldots, L-1, \ \forall x \in X, \tag{3}$$
$$\text{A2:} \quad \sum_{m \in I^{(l+1)}} \mathbb{I}\{W_{j \to m} \neq 0\} \le V_2^{(l)}, \qquad l = 2, 3, \ldots, L-1, \ \forall j \in I^{(l)}. \tag{4}$$
Here, $\mathbb{I}\{\cdot\}$ is the indicator function. The first constraint limits the number of active neurons at each layer for any given input, thus controlling the size of the activated subnetworks. The second constraint limits the number of outgoing connections of each neuron, thus reducing the overlap among subnetworks.
Simplicity constraint. Once we develop good abstractions, we shall be able to grasp these concepts and principles, and use them to guide our actions and decisions. This greatly simplifies the tasks we encounter.
We introduce simplicity into ONE from two aspects. First, we require that only a few abstraction neurons are activated for any given input. This not only fosters a simple explanation of the input, but also limits the size of task-specific subnetworks, enforcing simple decision making. Second, we expect abstractions to capture the commonality among entities. For clarity, we consider classification tasks only. We impose an upper bound on the total number of abstraction neurons that can be activated across all inputs belonging to the same category.
More concretely, let C(k) denote the set of possible categories in the k-th classification task, and Xc denote the input space for each category c∈C(k). We encourage simplicity by adding the following two constraints.
$$\text{A3:} \quad \sum_{i \in I^{(L)}} \mathbb{I}\{\Phi_i(x) > 0\} \le S_1, \qquad \forall x \in X, \tag{5}$$

$$\text{A4:} \quad \sum_{i \in I^{(L)}} \mathbb{I}\Big\{\sum_{x \in X_c} \mathbb{I}\{\Phi_i(x) > 0\} > 0\Big\} \le S_2, \qquad \forall c \in C^{(k)}, \ k = 1, 2, \ldots, K. \tag{6}$$
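Because A1-A4 are all counts of activations, they are cheap to check in code. A sketch, with helper names of our own:

```python
import numpy as np

def satisfies_A1_A3(phis, v1_per_layer, s1):
    """Check A1 and A3 for one input: `phis` = [Phi^(2), ..., Phi^(L)]."""
    hidden, abstraction = phis[:-1], phis[-1]
    a1 = all((phi > 0).sum() <= v1 for phi, v1 in zip(hidden, v1_per_layer))
    a3 = (abstraction > 0).sum() <= s1
    return a1 and a3

def category_abstraction_count(abstr_acts):
    """A4 helper: abstraction neurons activated by *any* input of one
    category; `abstr_acts` has shape (n_inputs, N_L)."""
    return int((abstr_acts > 0).any(axis=0).sum())
```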
Effectiveness constraint. Although abstractions are task-agnostic, some of them are particularly relevant to certain tasks while others are not. To boost task performance, we propose a matching process that selects highly effective abstraction neurons for each task. These selected neurons are then included in separate fully connected layers to adapt to task-specific details.
Thus, the network achieves effectiveness for tasks through two stages of optimization. First, it optimizes the selection variables e(k)i for each abstraction neuron i∈I(L) and each task k. If neuron i is selected for task k, e(k)i=1, and otherwise e(k)i=0. Second, the network optimizes the weights of the fully connected layers W(k)(≥L) to reduce task losses. For neurons with e(k)i=0, their connection weights to the k-th task-specific group W(k)i are constrained to be zero.
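In code, constraint A6 amounts to masking the first decision-layer weight matrix by the selection variables; the shapes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N_L, E, n_hidden = 500, 200, 200   # abstraction neurons, selected count, first decision layer

e_k = np.zeros(N_L)                # selection variables e_i^{(k)} for task k
e_k[rng.choice(N_L, size=E, replace=False)] = 1.0

W_k = rng.normal(0, 0.05, size=(n_hidden, N_L))
W_k *= e_k                         # A6: columns from unselected neurons are zeroed
```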
We now define the objective function for optimizing e(k)i. For classification tasks, it is reasonable to assume that an abstraction neuron can promote task effectiveness if it has some preliminary ability to distinguish different categories. We evaluate the ability through the distribution of the neuron’s activations over inputs of all categories for any given task. If the neuron is activated by inputs of only a few categories, we expect that it has such ability. More formally, we optimize e(k)i to select the E neurons whose activation distributions have the lowest entropy.
$$\min_{e_i^{(k)}} \ \sum_{i \in I^{(L)}} e_i^{(k)} \cdot H_i^{(k)} \tag{7}$$

subject to

$$\text{A5:} \ \sum_{i \in I^{(L)}} e_i^{(k)} = E, \quad k = 1, 2, \ldots, K; \qquad \text{A6:} \ W_i^{(k)} = 0 \ \text{ if } \ e_i^{(k)} = 0, \quad \forall i \in I^{(L)}, \ k = 1, 2, \ldots, K. \tag{8}$$
Here

$$H_i^{(k)} = -\sum_{c \in C^{(k)}} P_{ci} \cdot \log P_{ci}, \tag{9}$$

$$P_{ci} = \frac{\sum_{x \in X_c} \mathbb{I}\{\Phi_i(x) > 0\}}{\sum_{c' \in C^{(k)}} \sum_{x \in X_{c'}} \mathbb{I}\{\Phi_i(x) > 0\}}, \tag{10}$$

i.e., $P_{ci}$ is neuron $i$'s activation distribution over the categories of task $k$.
Overall optimization problem. We now present the overall optimization problem that the network needs to solve. It integrates both the requirements for abstractions and the performance on each task.
$$\min_{W,\, e_i^{(k)}} \ \sum_{k=1}^{K} \left( L^{(k)} + \sum_{i \in I^{(L)}} e_i^{(k)} \cdot H_i^{(k)} \right), \tag{11}$$

$$\text{subject to A1, A2, A3, A4, A5, A6.} \tag{12}$$
Note that we do not include $\Phi_i$ among the optimization variables, because they depend implicitly on the weights $W^{(<L)}$ between task-agnostic neurons. Note also that because $H_i^{(k)}$ depends on $\Phi_i$, the term $\sum_{i \in I^{(L)}} e_i^{(k)} \cdot H_i^{(k)}$ is optimized jointly over $W$ and $e_i^{(k)}$. This implies that the differentiation ability of the abstractions themselves can also be improved.
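A sketch of the resulting selection step, assuming the activation-count form of $P_{ci}$ in Eq. (10); the function and variable names are ours:

```python
import numpy as np

def select_abstractions(acts_by_cat, E):
    """Pick the E abstraction neurons whose activation distribution
    over categories has the lowest entropy H_i (Eqs. 7-9).

    acts_by_cat[c] has shape (n_inputs_c, N_L): abstraction-layer
    activations for all inputs of category c.
    """
    counts = np.stack([(a > 0).sum(axis=0) for a in acts_by_cat])  # (C, N_L)
    totals = counts.sum(axis=0) + 1e-12        # guard against division by zero
    P = counts / totals                        # P_ci, normalized per neuron
    H = -(P * np.log(P + 1e-12)).sum(axis=0)   # entropy per abstraction neuron
    return np.argsort(H)[:E]                   # indices where e_i^{(k)} = 1
```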
4 Network evolution algorithm
------------------------------
We observe that this optimization problem presents two major difficulties. First, the objective function includes discrete variables $e_i^{(k)}$. Also, the constraints depend on discrete values $\mathbb{I}\{\Phi_j(x) > 0\}$ and $\mathbb{I}\{W_{j \to m} \neq 0\}$. This difficulty of discreteness results from optimizing structure along with parameters, which is essential because the best structure for abstraction is unknown. The second difficulty comes from the nature of learning. In particular, tasks are not carried out simultaneously, but divided into batches and implemented sequentially. Consequently, abstractions for all tasks cannot be acquired all at once, but need to accumulate over time. This means the structure of the network keeps changing.
Standard gradient-based methods only work for continuous parameters and fixed structures, and thus are not suitable for our problem. Moreover, nature faces the same problem of optimizing discrete variables and dynamic structures, and solves it through natural evolution. Therefore, we propose an evolution-inspired optimization framework to search for good abstractions. More concretely, we identify three key elements of natural evolution: production, competition, and selection. Production generates a vast number of individuals, which then go through local competition. Nature selects the fittest individuals from the winners, and uses them to guide a new round of production.
We incorporate these three components into our optimization framework to perform an efficient parallel search of the structure space for good abstractions. In particular, we use production to grow new structures and maintain the variety of abstractions. This corresponds to drawing a large sample from the structure space. These structures compete locally, and only a small proportion of them get activated. This promotes the simplicity of abstractions and improves search efficiency. The activated structures are then evaluated for their effectiveness and selected by each task. Only good structures survive, and they affect the direction of subsequent production. We now describe each of these components in detail.
Production. In our framework, production refers to growing connections among task-agnostic neurons. This is necessary for two reasons. First, production is a way of exploring the structure space. The more extensively we explore, the better structure we expect to find. Second, production enables the network to better adapt to new tasks, which may require new structures (see Figure 2).
We integrate production into the feedforward pass of the network, which means connections are created layer by layer. We design three production principles that help enhance the efficiency, sufficiency, and diversity of exploration. 1) Efficiency. In each feedforward pass, only the activated neurons can grow connections to the next layer. In this way, newly created structures are more likely to be activated and evaluated. Also, this helps capture regularities in the input data. 2) Sufficiency. To ensure that sufficient structures are explored, production is triggered if there are not enough activations in the next layer. 3) Diversity. The created structures should also be diverse. This is required by a variety of abstractions. We promote diversity by reducing the production probability of neurons which already have a large number of outgoing connections.
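One way to render the three principles in code; the paper does not pin down the exact growth probabilities, so the 1/(1 + out-degree) rule below is an illustrative choice:

```python
import numpy as np

def produce(W, active_prev, n_active_next, a_min, rng):
    """Grow connections from activated neurons of layer l into layer l+1.

    Triggered only when fewer than `a_min` neurons fired downstream
    (sufficiency); only active neurons may grow (efficiency); a source
    neuron's growth probability shrinks with its out-degree (diversity).
    """
    if n_active_next >= a_min:
        return W                        # enough activations: no production
    out_degree = (W != 0).sum(axis=0)   # existing outgoing connections
    for j in np.flatnonzero(active_prev):
        p = 1.0 / (1.0 + out_degree[j])        # crowded neurons grow less
        if rng.random() < p:
            m = rng.integers(W.shape[0])       # pick a target in layer l+1
            if W[m, j] == 0:
                W[m, j] = rng.normal(0, 0.05)  # new synapse, small weight
    return W
```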

Figure 2: Production. The dark units represent activated neurons, while the light ones do not. The dashed units denote the newly activated neurons.

Figure 3: Competition. The dark units denote winners, while the light ones denote losers. The dashed units indicate the dying neurons.
(a) Death (b) Growth
Figure 4: (a) The third abstraction is not good enough and dies along with the structure it is connected to. (b) The third abstraction is a new one, produced by a new structure.
Competition. We introduce competition among task-agnostic neurons within each layer. Specifically, for a given input, the neurons with the highest output values win the competition. Only the winners propagate their outputs to the next layer; other neurons are de-activated. This is similar in spirit to winner-take-all circuits which are common modules in the brain.
We believe that competition brings two benefits. First, it promotes the variety and simplicity of abstractions, because it controls the size of activated subnetworks and the number of activated abstractions. Second, it prunes unpromising structures on the fly – structures that contain deactivated neurons will not be evaluated and selected for the given task. This is essential to maintain search efficiency. Because the structure space is enormous and task-specific evaluation is costly, we can only afford to evaluate a very small proportion of all possible structures.
Since the network capacity is fixed, we need to give priority to promising structures and delete others if necessary. Therefore, we keep track of promising structures by maintaining a reward variable rj for each neuron j. Specifically, rj is incremented each time neuron j wins a competition and decremented when it loses. When rj falls below a threshold, the corresponding neuron dies (see Figure 3).
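A sketch of one competition step with this reward bookkeeping; the threshold and unit increments are illustrative, and since the paper leaves open whether inactive neurons also count as losers, all non-winners are decremented here for simplicity:

```python
import numpy as np

def compete(phi, rewards, k_winners, death_threshold=-50):
    """Winner-take-all within a layer: keep the top-k outputs, zero the
    rest, and update each neuron's reward r_j (win -> +1, lose -> -1).
    Returns the sparsified activations and a mask of dying neurons.
    """
    winners = np.argsort(phi)[-k_winners:]   # highest-output neurons
    mask = np.zeros_like(phi, dtype=bool)
    mask[winners] = True
    rewards += np.where(mask, 1, -1)         # track promising structures
    dead = rewards < death_threshold         # candidates for deletion
    return np.where(mask, phi, 0.0), dead
```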
Selection. Selection aims to optimize the abstractions for task effectiveness. Because the objective function is non-differentiable, we take an iterative approach instead of standard gradient-based methods. At each iteration, we select the most effective abstractions from those that satisfy the variety and simplicity constraints. We preserve these abstractions and their corresponding structures, and eliminate others to allow new abstractions to be produced (see Figure 4). In order to make production more efficient, we guide the direction of production by giving an advantage to components of effective structures in the production process.
Evaluating task effectiveness is costly, because computing the entropy of activation distributions requires a certain amount of input data. Consequently, compared to production and competition, selection operates on a larger time scale, and leads to significant modifications to the network structure.
The detailed algorithm for these three components is in the appendix.
5 Experiments
--------------
In all experiments of this section, we verify the feasibility of the model on single tasks and on multiple consecutive tasks, respectively. The prediction accuracy and effectiveness of the model are the main concerns, and we verify effectiveness in terms of the forgetting rate of knowledge and the sharing rate of abstractions. In order to evaluate the performance of ONE more clearly, no special strategies, such as augmenting data with transformations, dropout, or noise, were used. We did not fine-tune any parameters, such as the parameter initialization in the abstraction layer, either. Thus, our objective is to evaluate the feasibility of the model rather than to achieve the absolute best test scores. Moreover, it is well known that one of the drawbacks of evolutionary algorithms is slow convergence, so we only validate feasibility on the MNIST dataset. In the future, we will verify our ideas on more datasets.
The MNIST dataset consists of 60K training images and 10K test images of handwritten digits from 0 to 9, where the size of each image is 28×28. In the following experiments, we used all available training data to train the model. Furthermore, we divided the whole classification problem into 5 tasks, each of which is a binary classification task over a pair of classes.
Our model introduces several hyper-parameters, which we refer to as growth and extinction parameters, in the process of generating abstractions. Among the growth parameters, parameter al controls the minimum number of neurons that each training sample activates in layer l; parameter bl indicates that each neuron in layer l can be connected to at most bl neurons in layer l−1; parameter cl controls the number of neurons that win in layer l by directly enforcing a winner-take-all sparsity constraint, and coordinates with al to produce good neurons. Among the extinction parameters, parameters d and e jointly indicate that when the activation count of a neuron is greater than d and the activation count of a synapse connected to the neuron is less than e, the synapse dies; and if a neuron does not have any outgoing connection, the neuron dies. If the growth parameters are set too large, they promote exploration of the structure space, which helps find the optimal structure but easily causes structural redundancy. On the contrary, if the growth parameters are set too small, structure exploration is slow and the number of shared neurons is reduced, which is not conducive to the generation of abstractions. Likewise, too-large extinction parameters are not conducive to the accumulation of abstractions, while too-small ones lead to abstraction redundancy. Instead of using the 2D structure of the images, we turn each image into a vector of pixels, where the pixel values are rescaled to [0, 1]. Following the general configuration of CNNs, we set the parameters as follows: layers in the cognitive part: {784, 500, 500, 500}; layers in the decision part: {200, 50, 2}; a = {200, 160, 100}; b = {50, 50, 50}; c = {100, 80, 60}; d = 3000; e = 300.
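For reference, the same configuration collected in one place (a plain transcription into a Python dict, not code shipped with the paper):

```python
config = {
    "cognition_layers": [784, 500, 500, 500],  # input + task-agnostic layers
    "decision_layers": [200, 50, 2],           # per-task fully connected head
    "a": [200, 160, 100],  # min activations per cognitive layer
    "b": [50, 50, 50],     # max connections to the previous layer per neuron
    "c": [100, 80, 60],    # winners per layer (winner-take-all)
    "d": 3000,             # neuron-activation count for the extinction test
    "e": 300,              # synapse-activation threshold for extinction
}
```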
The second column of Table 1 shows the experimental results on the five tasks. ONE obtains good test scores, which demonstrates the feasibility of the model. The reason the recognition rates for tasks 2 and 5 are not as good as those for the other three tasks may be that the differences between the digits involved are smaller.

(a) Abstraction generation

(b) Abstraction sharing
Figure 5: (a) shows the proportion of abstractions generated by the previous tasks in the total abstractions used by the current task. (b) shows the total number of abstractions as tasks increase.
Table 1: Test accuracy on MNIST without data augmentation.
| Subtask | Single Task | Consecutive Tasks |
| --- | --- | --- |
| 1 | 99.81% | 99.81% |
| 2 | 97.06% | 97.01% |
| 3 | 98.83% | 98.67% |
| 4 | 99.04% | 99.45% |
| 5 | 96.72% | 97.23% |
In incremental learning, tasks come in sequence. All parameters in the cognitive part are shared, including structure, weights, and biases, while the parameters in the decision part are not shared across tasks. The third column of Table 1 shows the test accuracy of each task in incremental learning. ONE performs almost the same in the incremental setting as in the single-task experiments, and even outperforms the single-task models on the last two tasks. Figure 5(b) shows that ONE performs each task based on only part of the 500 total abstractions, and thus activates only a small proportion of the whole network for each task. Going further, we count the abstraction sharing rate between the current task and all previous tasks (see Figure 5(a)). The result shows that fewer new abstractions are produced as tasks accumulate, which means that the abstractions produced by ONE are highly reusable. This reduces energy consumption and boosts learning speed. The reason the test accuracy can increase in incremental learning is that some abstractions produced for other tasks are useful for the current task, even though they could not have been generated from the current task's data alone. Moreover, because of the accumulation of abstractions and the non-overlap of the decision parts for different tasks, ONE does not forget, and can thus learn continually. We believe that abstractions play an extremely important role in lifelong learning, and the advantages of ONE will become more prominent as the number of tasks increases.
6 Conclusion
-------------
We proposed ONE, a completely new framework for learning abstractions and achieving human-like intelligence. The partition structure provides the foundation for abstraction accumulation across tasks, enabling knowledge transfer and lifelong learning. The constrained optimization formulation directly specifies the properties of abstractions, turning abstraction learning into a well-defined problem. The network evolution algorithm effectively searches through the structure space, ensuring the quality of abstractions.
While human beings may represent and learn abstractions in a different way, we have demonstrated that the abstractions learned by ONE can offer similar benefits as those learned by human beings. In the future, we plan to extend the experiments to larger datasets and perform cross-domain abstraction learning. Another interesting direction is to learn abstractions of tasks. This would allow sharing among task-specific parts, and also improve interpretability of decision making. |
8f1e7de4-8779-4aeb-b06a-a9102f048c42 | trentmkelly/LessWrong-43k | LessWrong | Learning and Internalizing the Lessons from the Sequences
I'm just beginning to go through Rationality: From AI to Zombies. I want to make the most of the lessons contained in the sequences. Usually when I read a book I simply take notes on what seems useful at the time, and a lot of it is forgotten a year later. Any thoughts on how best to internalize the lessons from the sequences? |
213e8d86-e15a-45e0-94bb-9c8253b33a18 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Working in AI | Jan Leike, Helen Toner, Malo Bourgon, and Miles Brundage
the theme of this series of
lightning talks is working in AI and the
possible careers you could have in that
space I'm going to announce each speaker
and they're going to give a short talk
they're going to go sit back down and
then everyone will come back up at the
end and I'll ask questions okay
so our first speaker is Jan Leike he is
a PhD in reinforcement learning and
research scientist at DeepMind working
on technical AI safety so please welcome
Jan Thanks
all right how do you build a career in
technical AI safety I'm going to start with a
big spoiler and my main point is
basically you should just it's the same
thing as building a career in AI so if you
want to do AI safety you should learn AI
so but before I get into that what
does research on technical AI safety
mean roughly you can categorize AI
safety into three main categories of
research first is alignment research
which is basically about how do we build
agents that we can teach arbitrary goals
and that end up doing what
we wanted to do the second so another
like includes things like
semi-supervised reinforcement learning
reward hacking the safe exploration and
so on and the second part is about
robustness how we make sure that machine
learning algorithms don't break
unexpectedly and this includes things
like machine learning security
interruptibility and robustness to
distributional shift and and other
things and finally the third category is
about trust how do you make sure how can
we establish trust that we build the
right systems and that category includes
things like interpretability how do I
look into the black boxes and see what
what is going on there and formal
verification and I should emphasize that
this is kind of the machine learning
perspective
and there's been other kind of research
agendas around safety that have been
raised but most notably the agent
foundations agenda advocated by MIRI
so but in this talk I'm going to take
the machine learning perspective
so what could technical AI safety
research look like this is an example of
a recent collaboration between OpenAI
and DeepMind and this is a new
technique to teach goals arbitrary goals
to agents and what we do is so here this
is kind of a schematic of what the
research what what the setup looks like
you usually have an RL algorithm acting
within an environment so this is just normal
reinforcement learning set up and what
we did is we put as a new module here
that is called the reward predictor
which learns a reward function it learns
to predict the reward for what the human
wants so and the kind of important part
of this is that to people who
work in machine learning this looks like
normal machine learning research at
first and it's motivated by like
long-term considerations of AI safety so
if you want to do this kind of research
it's amazing we need a lot more people
doing this stuff and there's currently a
huge talent gap and so you might be
asking yourself how do I get there so
let's start with your
undergraduate degree what should you be
focusing on and like ideally your
undergraduate degree should be in
computer science or mathematics you should
learn all the regular basics like
programming and algorithms and algebra
calculus and so on and then later on you
might focus more on machine learning and
deep learning and reinforcement learning
and those are kind of like the major
core skill sets that at the moment are
mostly in demand and there's some some
kind of important things I think to keep
in mind you should try to prioritize
harder courses over easier ones if you
want to
like if there are math courses that you find
interesting but they don't seem as AI
related still feel free
to take them you should by the end of
say your master's degree you should get
you should try to aim to have some
research experience by then so you are
in a better position to apply for PhD
positions or similar and so you want to
get you want to start working with
researchers early on you want to get a
paper published and for that it's
usually advisable to find someone a
supervisor who is good at supervising
and not necessarily the most famous
supervisor because the most famous
people usually don't have time to really
closely supervise students and the idea is
that you should really try to find out
early whether you're a good
fit for this line of work so that you
can you know that this is worth pursuing
this career path for you because if you
go into a PhD that's usually a many year
commitment so that's the next page how
do you get a PhD and I I should
emphasize that getting a PhD is usually
the prerequisite for someone to hire you
as a researcher but it's not strictly
required so you could get away with
having equivalent experience
but getting a PhD is usually like a
one-stop-shop package for all the
relevant research skills that you need
that includes like coming up with ideas
knowing the literature very
well knowing all the basics and being
able to write and present your research
results I think an important point to
make here is that you
should not be afraid to work on regular
machine learning research there's been
there's like a mistake that I see every
once in a while where people are really
ambitious about doing AI safety research
and they want to dive into AI safety
research right away but right now the
field is at a stage where it's often not
quite clear what good projects are and
so you can you can get very lost very
easily so usually
I recommend to people to work on a
project that your supervisor finds
interesting so you can get a lot of
feedback from them and and ideally like
if you're doing your PhD and you just do
normal machine learning research but
you're still you're interested in moving
into AI safety after that I think that's
still a very good idea there's also if
you have a master's degree then you
could do a PhD in Europe which usually
takes only three or four years versus
in the US where it can take over five years
and then you should for PhDs in general
like there's lots of advice out there
like whether to do a PhD where you should do a
PhD what you should focus on you should look at
that I think like right now the ideal
background for doing AI safety research is
a PhD in machine learning and there's
some alternatives to that is the brain
residency which is very competitive and
there's you can do internships in
various places there is research engineering
where you which is like a lot of people
at OpenAI and DeepMind are doing and
this is basically working together with
researchers on code but more from from
engineering perspectives and for that
kind of work you usually don't need a
PhD but yeah so at the moment PhDs and
machine learning are extremely competitive
but if you can get into a program I
think it's well worth it and you're
also like usually you will
probably have very good exit strategies
because people want to
hire machine learning people
right now that's it if you have
questions there are two excellent
resources on the 80,000 Hours page the
career review on machine learning PhDs and the
AI safety syllabus and if you're
interested in this sort of stuff come talk
to me afterwards and once you get your
PhD and machine learning you should come
work at DeepMind or OpenAI or other
places thank you
[Applause]
thank you Jan next we have Miles
Brundage speaking to us about careers in
AI policy Miles is a research fellow at
Oxford's Future of Humanity Institute his
research focuses on AI policy and
strategy and recently he's been working
on the relationship between AI and
security as well as thinking through how
international agreements on AI might
work
welcome Miles so I'm just going to say a
few brief words about what AI policy is
and what sorts of topics you would be
working on and worrying about if you
were to pursue a career in AI policy and
Helen Toner is going to say a bit more
about the institutional landscape and
you know the more practical side so to
begin with I think there are a lot of
misconceptions about what AI policy
means it's sometimes conflated with you
know sort of strict top-down regulatory
you know bans on AI or something to that
effect but I I define policy as just
authoritative social decision-making
about a certain topic or about a certain
technology or issue so AI policy is
really just about what we as a society
should do and "we" doesn't just mean
governments but also corporations have
policies towards AI in terms of what
they publish and what the protocols are
for privacy and so forth so working on
AI policy doesn't necessarily mean that
you think we need you know restrictive
government regulations anytime soon or
even ever in the same way that someone
who is say a left- or a right-leaning
healthcare policy analyst might conclude
that you know the government should have
a hands-off approach on health care
policy rather it's about asking the
question of how we can ensure that AI
has positive social consequences there's
a common distinction made in AI policy
between short and long term AI policy or
AI policy issues and I'll say something
briefly about that because I think it's
important to wrap your head around so
the way that I define long term AI
policy is worrying about the issues and
trying to make progress on the issues
related to AI that won't materialize for
some
time or will have very long-lasting
consequences and I think that maps on
somewhat to the the differences in focus
between say organizations like the
future of humanity Institute which see
themselves as doing long term AI policy
because we're concerned with AIs that
have transformative economic and social
consequences whereas others who you know
one might call doing short term AI
policy or focus on issues that we have
right now so accountability for
algorithm based decision making and
privacy issues related to the use of
training data for AI but I think this is
also a somewhat misleading distinction
because there's a lot of uncertainty
about how quickly AI will develop so
some of the issues that we call long
term might actually arrive sooner than
we think
so you might all instead think of it as
policies focused on current AI
technologies and policies that are
specifically and policy analysis that's
specifically about more transformative
scenarios so there are already a lot of
government policies that pertain
indirectly to AI even though there isn't
you know a department of AI policy or
anything like that in any government and
there are things like drone
registrations and no-fly zones for
drones that sometimes use AI their
intellectual property laws that pertain
to technologies in general and therefore
AI and there's a lot of government
funding of AI research but at the same
time they're still an awareness that
something more more serious and more
sort of wide-ranging might be useful in
the future so for example as was
mentioned in my bio one thing I'm
interested in right now is whether
International Cooperation might be
useful for dealing with some of the you
know purported arms race situations that
might might already be arising or might
arise in the long term as countries
strive to use AI to pursue their
military advantage so I think it's not
clear yet what the appropriate level of
policy implementation or analysis is for
AI but I think there are a lot of
potentially very thorny issues to think
about so just give a quick overview of
some of the issues in this landscape on
On the short-term front, there are the economic impacts of AI, still smaller than what one might see over the long term, and issues around liability, bias, security (including automated defenses and attacks involving AI), and privacy. I think these are all potentially very important issues in their own right. From an EA perspective, you might want to be particularly focused on issues that will have very long-term impacts, such as those associated with general AI or superintelligence, but it's also the case that you can gain a lot of career capital by working on the short-term issues, and it might be that a lot of the same tools and institutions will be relevant to some of the short-term issues as well as the long-term issues. I'm happy to talk more about that offline, but I think it's important to note that there might be synergies between different timeframes and different types of work.

Over the long term, we might see scaled-up versions of the sorts of issues I just mentioned: even more severe concerns around privacy, even more severe concerns about the economic impacts of AI. But there also might be some fundamentally new, or at least qualitatively much larger, changes, such as big concerns around wealth distribution created by huge economic productivity gains from AI, or the risks of catastrophic misuse or catastrophic safety accidents. There could be concerns about even more extreme curtailment of privacy or freedom in a society where you could have nearly perfect surveillance as a result of automated infrastructure, drones, and so forth. So I think there are very serious issues to think about in this area. It's not clear yet what the right solutions are, and I think that's why it would be good to have more people working in these areas. We need to move from a general awareness that all these problems exist and start thinking about what exactly can be done about them, if anything, and what the right level of governance should be, whether it's corporations or international bodies or national governments.

There are lots of open questions, such as how we should evaluate AI policies. There's a paper, for example, by some of my colleagues at the Future of Humanity Institute called "Policy Desiderata for the Development of Machine Superintelligence," which raises lots of issues around how we should think about justice and the speed of transitions towards more powerful AI. There are questions around the logistics of cooperating internationally: if nations were to agree that they don't want to just race to build the most powerful AI, but instead want to cooperate in some fashion, how would you actually implement that? So I think there are interdisciplinary questions that involve the science of AI and thinking about the global distribution of software and hardware, and that also require policy analysis and various other disciplines. I think AI policy is an exciting area, and one that requires a lot of different perspectives in order to make progress. So with that I will wrap up, and I look forward to any questions later on. Thanks.
Next we have Helen Toner speaking about policy, funding, and strategy research careers in AI. Helen Toner is a senior research analyst at the Open Philanthropy Project, where she focuses on policy, governance, and strategy issues related to progress in artificial intelligence. Before joining Open Philanthropy, she led EA Melbourne while studying chemical engineering and Arabic at the University of Melbourne.
Please welcome Helen.

Great. So as Martha said, there's a huge range of societal, strategic, policy, and political issues relating to AI that are going to need a huge amount of work over the coming years. My guess is that navigating the effects of progress in AI, machine learning, robotics, and automation is going to be the defining policy challenge of the coming decades. So now that Miles has laid out some of the topics that could be valuable to work on, and that are likely to need work over the coming years, I'm going to briefly touch on what kinds of institutions are out there where you might be able to do some of this type of work. Obviously there's a wide range of options here, so I'm going to zoom over them at a high level rather than going into detail on particular options.

One side note I'd make is that although there are a large number of related fields where you might be able to do work on these kinds of issues, there really isn't an obvious home for work on AI policy and strategy, in the same way that machine learning is a pretty obvious home for work on at least the machine learning angles of technical AI safety. This is particularly true, I think, for the kinds of more transformative and longer-term issues that EAs tend to be more interested in. What this means is that if you want to work on AI policy and strategy topics, you're going to need to be a bit more self-directed and a bit more entrepreneurial than you might need to be in other areas. There aren't necessarily going to be existing courses of study, or existing fellowships or journals or that kind of thing, there for you, and you might need to tread your own path in terms of choosing what topics you want to work on and persuading supervisors and superiors that what you're doing is interesting.

Great, so with that intro, I'll break down the space into two main types of work you might do, which I'm going to call research and practice, and then split each of those into the kinds of institutions that might be good homes for each kind of work.

The first one, research, is going deep and doing original thinking on the kinds of questions that Miles was talking about. I think three key types of institutions where you might do this would be academia, think tanks, and EA organizations, so I'll go through them one by one.

Academia likely looks like going to grad school, maybe doing a postdoc or a fellowship, and maybe eventually going into more senior roles at a university. Academia is really well suited to choosing one specialized topic and going into a lot of depth on it. There's a lot of room for fundamental and theoretical research, and there's an ever-increasing number of interdisciplinary centers springing up at universities that could be good homes for this kind of work. I'm thinking of places like the Center for International Security and Cooperation at Stanford, the Center for Long-Term Cybersecurity at Berkeley, and PULSE at UCLA; there's a bunch of other centers like this. Or you could also go into a more traditional department, like international relations or economics. The best home for the work you want to do is just going to depend on what topics you're interested in, and also which people you're working with and what they're interested in as well.
Cool, so number two for research: think tanks. I think think tanks are fairly similar to academia in some ways, in that they're relatively mainstream, and they tend to have established areas that they focus on and established ways of doing things. The key difference is that they are often more focused on concrete policy proposals and specific ways of implementing ideas, rather than theoretical or fundamental research, though there's definitely some overlap between think tanks and academia. There are already some newer think tanks that are specifically focused on AI issues, particularly on the kind of nearer-term issues that Miles was talking about: AI Now is one, and Data & Society is another. You could also aim for a more established, general-purpose think tank, like the RAND Corporation, the Center for Strategic and International Studies, Brookings, or the Center for a New American Security, just to name a few of the many existing think tanks.

The third type of place you might want to do research would be EA organizations, like the Future of Humanity Institute at Oxford or the Centre for the Study of Existential Risk at Cambridge. These places are going to be best if you want to focus on the types of issues that EAs tend to care about disproportionately, so again, the kind of longer-term, more transformative scenarios. I guess the trade-off that comes along with having the flexibility to work on these issues is that these organizations don't yet seem to have as much connection to, or recognition by, political decision-makers. I don't want to sell them short; there's certainly been some contact between FHI and the UK government, for example, and that's likely to continue. But I do think there is a bit of a trade-off there in terms of flexibility versus influence.

OK, so that's the research side of things. What I'm calling practice is roles that are more about understanding what research is out there and putting it into practice, making decisions based on it for some given body. I think there are two big categories of this type of role: government and government-ish organizations, and industry. To start with government:
It seems very likely that the role of governments and government-like bodies in the development and use of AI is only going to increase, and because of this, it would be really great to have more people from the effective altruism community working in government. I think particularly in the US government, given how influential the US is on the international stage, but certainly also in other countries' governments, and in multilateral organizations like the UN and others, which are starting to show some interest in AI issues; I think that's likely to continue.

So who should work in government? Unfortunately, I think only a low number of effective altruists seem to have an appropriate profile for roles in government, so if the description I'm about to give sounds like you, I would really seriously consider it. A good profile for someone to do well in a government role (I'm thinking particularly about the federal government, but I think this applies to many types of roles) is someone who is well-rounded and generally capable. If you're a genius at any one given thing, you'll most likely do better at that thing outside of government, but if you can do a wide range of things, that might be good. You'll want decent interpersonal skills, particularly the kinds of interpersonal skills that will let you go into a meeting with people you don't know very well, from different agencies with different goals, sound reasonable to them, and get them on board with the kinds of things you're interested in. A couple of other important traits: be really patient with bureaucracy, which unfortunately is going to be a key part of working in government; and, lastly, be a citizen of whatever country you want to work in. So if you want to work in the US government, be a US citizen; in my case, that's an area for improvement.

The second major category of practitioner-type work would be working in industry. This could mean working within AI organizations like DeepMind or OpenAI, and it could also mean other industry organizations like the IEEE or the Partnership on AI. I don't think there are many positions like this available right now (DeepMind certainly has some policy staff), but I do think the number of these types of positions is likely to increase, and this is likely to be a less stuffy and bureaucratic option where you're still close to the relevant decision-makers. Whether you're more interested in government-type roles or industry roles is also going to depend, of course, on how you expect the development of AI to play out and how influential you expect these different sectors to be.

OK, so to sum up, again noting that this talk is simplifying a lot and aiming to skim over the landscape rather than cover anything in great detail: if you want to do research work, you might consider academia for in-depth or theoretical work, think tanks for more policy relevance, or EA organizations if you want to really stick to the topics that EAs tend to care about more than others. If you want to work as a practitioner, you could consider government, if you have the patience for the bureaucracy, or otherwise there may also be roles in industry organizations that work for you, including in AI research groups. That's just about all I have to say. If you're interested, you can check out the 80,000 Hours careers guide on these topics, which Miles wrote; I think if you google "80,000 hours AI strategy" that should be the first result. Just before closing, I want to throw in one bonus angle, which is that China has been showing more and more interest and sophistication in AI and machine learning technologies. I don't think this is for everyone, but in the spirit of taking a portfolio approach, if spending time in China and getting to know the scene there seems appealing to you, I would definitely consider that as well. Thank you.

All right, thank you so much, Helen. Next we have Malo Bourgon and Andrew Snyder-Beattie tag-teaming it to talk about careers in operations and management in AI. Malo is the chief operating officer of the Machine Intelligence Research Institute; Malo oversees all day-to-day operations and program activities at MIRI. He also co-chairs the Committee on the Safety and Beneficence of Artificial General Intelligence and Artificial Superintelligence of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Malo joined MIRI in 2012, shortly after completing a master's degree in engineering at the University of Guelph. Andrew Snyder-Beattie is the research director at the Future of Humanity Institute, University of Oxford. Before that, he worked as a project manager at the Future of Humanity Institute and as a researcher at Mehta. His projects at FHI cover existential risk research, fundraising, recruitment, and outreach. While at FHI, Andrew obtained over 4.2 million pounds from grant writing and wrote popular articles that received over 500,000 readers. His current interests include observation selection effects, technological forecasting, and longer-term biosecurity issues.
Please welcome Malo and Andrew.

So I think if I were to summarize the point in one sentence, it's just that operations is an incredibly important set of skills and career paths for people who want to have an impact at an EA organization. That would be the overall summary. I'll give a little bit of background from my experience as a project manager at FHI. I went into FHI as a project manager; there are a number of other project managers who have worked at FHI, and I think it's a tremendously high-impact career option for someone who cares a lot about the research and is really interested in it, but doesn't want to entirely specialize in the research, and wants to also integrate with the real world: translate research into impact, do fundraising, and interact with the world in these ways.

So, three main areas as a project manager that I found super useful. One, obviously, is fundraising. If you're familiar with the research, you're a good writer, and you enjoy writing grants, which is relatively rare, you can have a tremendous impact. A typical project manager at FHI raises something like 1 million in grant writing every year from non-EA sources, so things like Research Council funding or various trusts and whatnot. Even just from a purely fundraising standpoint, I would argue that project management within an EA organization could potentially be a superior option to earning to give, if you're interested in these kinds of issues. That's one easily measurable way in which these careers have an impact.

Another is outreach, and ensuring that the research is having an impact in the real world, and this requires a very wide range of skills. I think my favorite example of this was the Asilomar conference. This was after the publication of Superintelligence, and it was really the first conference to get a lot of the stakeholders together and create some common knowledge around why AI safety will be important in the future. This took a tremendous amount of effort; it wasn't done at FHI, it was done at FLI, but basically all the effort that went into it was an operations-skill-set effort. I think that event really got the ball rolling for the AI safety and AI strategy trajectory that we're on now, and it was a tremendously important piece of work, potentially more important than any single piece of research that's happened recently. Other examples within FHI include things like writing press releases and making sure that the right stakeholders have access to the research that one is producing, or interfacing with governments and setting up collaborations. Recently we had a collaboration with the Finnish foreign ministry, which is interested in some of these issues, and setting that up takes a lot of effort as well that isn't necessarily just on the research end.

One thing I'll jump in with there: I think on the communication side there's also a very underappreciated skill of being able both to be interested in and deeply understand the work that people are doing, and also to very accurately model the different stakeholders and people that you're communicating with. A lot of the topics we're talking about have different audiences that have maybe different pain points or concerns, and if you're delivering a message that is sometimes a little complicated or hard to buy, there's a very valuable skill in being able to build these models and speak to different audiences. So if you feel like you have this writing or communication skill, and you feel like you're good at knowing your audience and different people, this is very high value. I think you could wander into basically any organization doing AI safety type work, and even if they don't list a position, if you're that type of person and you hang out around there, you'll very quickly be someone they want to work with more. So yeah, please get in touch with any of us if that's the thing you have.

Yeah, absolutely. And then I guess the final thought is on the basic operations and management style roles that need to be filled within this space. This is a really fundamental role: basically being the foundation on which things get done, and on which an organization can rely to ensure that there's a platform on which the research and outreach and all the impact can occur. I think there are relatively few people within the EA community who have considered this as a longer-term career option, but I think a lot of people really ought to. Malo, maybe you can say more about the impact of this.
Yeah, sure. One thing I'll note right out of the gate is that oftentimes people who are good at this don't realize how few people have this skill, so it seems obvious to them that it's a very replaceable position: "Oh, I can't do direct policy work or AI technical safety work, so I guess I'll earn to give or use my skills elsewhere," and they see this kind of ops role as something very replaceable and a little bit second-tier. I think that's very wrong. Even from my own experience, basically everyone I know who's good at this role doesn't understand why other people think it's special. So if you think you have this thing: you are special, and places like FHI and MIRI and others need your kind of help.

The other thing I want to emphasize is that there are a lot of interesting opportunities for advancement in these types of roles. Even if the initial role is very much one where you're the person making things happen, not making decisions or managing people, if you have an interest in these subjects, then as you work at these organizations, develop more of a specialization, interact with everybody, and show the ability to get things done and move projects forward, there will be lots of opportunities, especially in the smaller organizations, for a good generalist to take on more responsibilities, go into management, that sort of thing. In MIRI's particular case, we have this thing where we're always trying to eat our superiors' jobs: I'm always trying to steal all the things I can from Nate, and our office manager is always trying to take all of my job. So I would encourage people who think they have the skill to jump in on intro roles and build their career capital that way. There are actually a lot of opportunities right now in this space; maybe Andrew can talk a little bit about some of the openings that FHI is going to have, or has already.

Yeah, so FHI is going to be hiring for two, maybe three operations roles in the very near future, and Open Phil is also, I gather, hiring for a director of operations. So there are a number of positions already within this space that are searching for really top-notch people, kind of elite operators. I will be doing office hours at 2 o'clock along with Carrick and Neil and some other FHI people, so if you're interested in a role like this, please come get in touch; I'd love to chat more.

Yeah, and I'll say generally as well: if this is the type of thing you're interested in and you want more tips and thoughts on how to make your way in, feel free to catch me around the conference.
Could the speakers come up and sit for a few questions? Thank you all. So, let's start with a question for Jan: for those of us who have already completed an undergraduate and perhaps graduate degree in something other than CS or mathematics, how do we gain the technical expertise to do AI technical safety research?

Yeah, that's a good question. Generally, there are things you can do. The first step would just be reading up on the literature; there's the AI safety syllabus, which is really just an AI syllabus and has lots of textbooks that you can look at. Usually, for someone to hire you as a researcher in machine learning when you don't have a PhD in a related field, you need to demonstrate that you can do research in the space. One idea would be to do an internship at a place like Mila and then try to get a paper published in machine learning, so that you can point to the fact that you can do this kind of research. There are other avenues as well that you can take to gain these kinds of skills, for example the Brain Residency and other things.
We have a question for the policy people, Helen and Miles: what are the best places to get a graduate degree, at which universities, for people who want to influence AI policy?

Yes, so I think in general any top university, any university with a good reputation, is going to be a good place to start. The area that I can speak to best on this question is national security or security studies, and I think SAIS, the Johns Hopkins School of Advanced International Studies, and Georgetown, which are in DC, have really strong reputations there. My impression is that any top school is going to put you in good standing, and maybe some of them have slightly more or less established programs in different areas.

Yeah, I agree with that, and I would add, in addition to schools that are top across the board, like Stanford and Harvard and others (Harvard's Kennedy School would be an example), places where there's an especially strong program in policy, like Tufts' Fletcher School or something like that. I would also add that to some extent the school is not the only question. It might be the most important question, but it also might not be. It also matters that you're involved with a productive research group and have a good advisor who is interested in the sorts of issues you're interested in and can provide good guidance. So I think that's another factor to consider.

Just adding that FHI might be an option too. We recently had Allan Dafoe come to FHI, and he's also considering building out a team of PhD students at Oxford. So that would be a good opportunity if you're interested in doing a PhD within an EA org; this is something we're trying to get a pipeline started for. That would be kind of an exciting opportunity for some people.
The next question: is there a place for philosophers of mind in technical safety research, at places like DeepMind, or are they better off doing policy-type work?

I guess it depends on your exact background. Usually in technical AI safety we do technical research, so you'd have to have a technical skill set to do this kind of research. If your background is solely in philosophy, I would guess you'd be better placed in a policy or strategy position.

I would just add a general comment on the value of philosophy: I think it's a very important skill set, and a lot of people who have done pioneering work on AI policy and strategy, like Nick Bostrom and Toby Ord, have training in philosophy. So I think there's a need for those with that skill set, and who are in a program in that area, to focus on the sorts of questions that are policy-relevant. For example, I could imagine a lot of philosophy of mind or ethics not being super relevant to, say, the design of AI systems, but I could imagine very fruitful research that looks at the specific questions of the moral patiency and moral agency of the sorts of AI systems that are being developed or will soon be developed, as opposed to abstract analysis of the space of possible minds. Thanks.
Next question: what are the AI-related career opportunities for people who are unable to get a PhD, for whatever reason?

I think most of the positions that Andrew and I spoke about lend themselves well to people who don't have PhDs. There's a wide spectrum, from just making things happen at an organization, where very little specialization in anything is necessarily required if you have the right abilities, up to writing grants, where a lot more domain knowledge is important. But again, that's the type of thing you can pick up through interaction with the space, working at an org, and developing those skills to the point where you can take on those kinds of responsibilities.

Another thing I'll say is that at MIRI, credentials for researchers aren't as important. If you can do the type of very big-picture mix of math and philosophy research that we're doing, and you have the skill set for it, then whether you have an undergraduate degree or a PhD is not particularly interesting to us; it's whether you have the skills. So if you're in that direction, you should feel free to get in touch.

Yeah, and I'd add that in the policy and strategy space, I think a PhD is not super necessary. There are a bunch of graduate and master's programs in things like international relations, or an MPP, a Master of Public Policy, that are great options and will set you up pretty well even for the more mainstream think-tank and government type things, especially if you can get internships while you're doing them. And again, echoing Malo, if you're aiming to work in EA organizations, degrees are even less important, and I think having a good work sample and showing that you can write about these issues is going to be more important than credentials for those organizations.
A question for the policy people: can you talk about the benefits and costs of pursuing a law degree in order to work on AI policy issues, versus the other sorts of things Helen was mentioning?

I think the benefits are somewhat analogous to philosophy: it's a disciplined way of thinking and of analyzing institutions and laws. I think there's also a lot of subject knowledge that could be useful in terms of understanding what sorts of legal apparatus are relevant to the development of AI, what a fruitful legal policy would look like, and actually analyzing particular legislative proposals. In terms of costs, there are always opportunity costs with any discipline, and I agree with what Helen said earlier that there's not any one disciplinary home of AI policy. So the opportunity cost would just be forgoing other skill sets, like game theory in an economics program or whatever. I think there are lots of disciplinary contributions to be made and different degrees that could be useful.

Yeah, I agree with that, and I'd also add that if you have a particular interest in going specifically into government, another benefit of law degrees is that they're generally pretty well regarded across a wide range of government roles.
Any recommendations for AI-related work for those with an interest in reducing far-future suffering in particular, as opposed to existential risk?

I don't know if they have any open positions, but FRI is definitely the main place thinking about those sorts of things. Oftentimes with small EA orgs, many people who get positions there aren't people who applied to a general posting; rather, an opportunity presents itself for someone with a certain skill set who is already in contact with the organization, and that might be enough. So if that's a subject you're interested in, I would definitely recommend getting in touch with those folks and seeing whether there's some fit there. I also think there's a good argument to be made that working at almost any EA organization is a good fit for someone with those interests, unless you want to be specifically and only working on those things, in which case something like FRI might be a better fit.

I'd just briefly add that in the same way that some of the same concepts, tools, and issues are relevant over the short and long term, it might also be the case that some of the same policy concepts are relevant to addressing particular risks over the long term. If you're able to make international coordination work to prevent, say, automated hacking or whatever, then you might also be in a better position to prevent future suffering. So I think both FRI and potentially other organizations could be good for that.
OK, so we have five minutes left, so we're going to do two more questions. The first one is for Jan: what's the difference in the work that research engineers and research scientists do at DeepMind? Can you say a bit more about the qualifications there too?

Yeah. Usually the research engineers are more focused on the engineering part of projects. Most of the work that goes on at DeepMind is very engineering-focused: we build new machine learning models, we implement new things, we try out new stuff, and that necessarily involves a lot of implementation. Research engineers are more focused on the implementation side of things. Usually you have teams of research engineers and research scientists working together on a particular research project; research scientists focus more on the conceptual side, on figuring out the high-level goals of the project, and research engineers work more on the implementation side. But that doesn't mean research engineers are just coders: a research engineer is expected to know machine learning pretty well and be able to read and write machine learning papers. The qualifications for research scientists are usually more selective; as a research engineer, you wouldn't be expected to have a PhD in machine learning, and usually a master's degree in physics or computer science can be sufficient. We also hire engineers from other disciplines who have done physics undergraduate degrees or other quantitative fields.
And finally, for the policy people: if someone is trying to decide between academia versus advancing policy research directly in AI organizations or EA organizations, could you talk a bit more about what sorts of considerations might tip them one way or the other?

I think it really depends on what sorts of issues you're concerned about and what sort of work you want to do. There tends to be a greater orientation towards contemporary issues at AI research organizations, because they have fires to put out, current issues like privacy and so forth to worry about, and the particular concerns of the organization to attend to. If you want to take a more global-scale perspective, then it might be better not to be tied to a particular organization. But there are pros and cons to both, and there isn't a clear choice between them, even if you just want to do research: there are people who focus on research at AI organizations, and I think it can also be very helpful to be close to the research and to know what the actual technology is capable of. To some extent it's also a false choice, because one can be in close collaboration with AI research organizations from the outside, and vice versa.

Sorry, I meant effective altruism organizations, such as FHI or the Open Philanthropy Project or whatever, as opposed to academia.

As opposed to academia, OK, sorry about that. So again, I think it's somewhat of a false choice, in that there are people at FHI who are in the process of getting their PhDs or DPhils, and FHI is housed at a university. But generally speaking, academics, meaning academics outside EA organizations, tend to be focused more on their disciplinary skill set and on getting really high competence in a particular skill, whereas an EA organization will be more interdisciplinary and draw on a bunch of different perspectives, which might lend itself to more general analysis drawing on multiple disciplines. So if you want to get a really strong foundation in a particular area, then you might want to go to, say, the best poli sci program or the best computer science program.

I think I'd just add that probably the best thing to do is to figure out which specific topics you're interested in, and then figure out who you think is doing the best work on that. That could be an academic institution, or it could be an EA organization, or it could be somewhere like FHI.

Thank you so much, everybody. |
05f9e0ef-ddb2-42c3-ab30-7b8aa6e98345 | trentmkelly/LessWrong-43k | LessWrong | Immortality: A Practical Guide
Immortality: A Practical Guide
Introduction
This article is about how to increase one’s own chances of living forever or, failing that, living for a long time. To be clear, this guide defines death as the long-term loss of one’s consciousness and defines immortality as never-ending life. For those who would like less lengthy information on decreasing one’s risk of death, I recommend reading the sections “Can we become immortal,” “Should we try to become immortal,” and “Cryonics,” in this guide, along with the article Lifestyle Interventions to Increase Longevity.
This article does not discuss how to treat specific disease you may have. It is not intended as a substitute for the medical advice of physicians. You should consult a physician with respect to any symptoms that may require diagnosis or medical attention.
When reading about the effect sizes in scientific studies, keep in mind that many scientific studies report false-positives and are biased [101], though I have tried to minimize this by maximizing the quality of the studies used. Meta-analyses and scientific reviews seem to typically be of higher quality than other study types, but are still subject to biases [114].
Corrections, criticisms, and suggestions for new topics are greatly appreciated. I’ve tried to write this article tersely, so feedback on doing so would be especially appreciated. Apologies if the article’s font type, size and color isn’t standard on Less Wrong; I made it in google docs without being aware of Less Wrong’s standard and it would take too much work changing the style of the entire article.
Contents
1. Can we become immortal?
2. Should we try to become immortal?
3. Relative importance of the different topics
4. Food
   1. What to eat and drink
   2. When to eat and drink
   3. How much to eat
   4. How much to drink
5. Exercise
6. Carcinogens
   1. Chemicals
   2. Infections
   3. Radiation
7. Emotions an |
9bfb31f7-9bc9-4822-a552-a6516af48dcb | trentmkelly/LessWrong-43k | LessWrong | A bunch of videos for intuition building (2x speed, skip ones that bore you)
did I mention 2x speed? hit play on many, pause on almost as many.
This is a big list of the youtube videos I find myself linking to people most often. These are all from the same playlist: https://www.youtube.com/playlist?list=PLgx5WuezywJMj_JS47QIqcn8_3UiiEwPs - comment here if you'd like edit access, I give it out readily. I'd love to have people moving the most important and insightful videos towards the beginning. I'd also love to see people clone the playlist and just make their own version.
These vary wildly in topic and difficulty level. I generally do not try to avoid watching things above my level, I just use it as inspiration for how to fill in what I'm missing. If something sounds basic to you, it probably is.
Many of these videos are quite short, many are quite long.
1min: neuron clip
23min: neuroscience overview (bio neuron interpretation)
or 10min with 2x speed!
10min: overview of learning techniques (bit clickbaity title but I include because I like it plenty anyhow)
or 5min with 2x speed!
2min: visual intuition - details of how one particular chaotic fluid flow move
11min: research talk on what collective intelligence is. (see also many more cool talks from MITCBMM!)
or 5min with 2x speed!
2min: visualization of a volume of neurons in a (mouse?) amygdala
8min: cognitive biases in practice
33min: absolutely incredible visual intro to physics sims focusing towards fluid simulation
or 15min with 2x speed!
15min: cs101 "ok, but what does it mean to abstract over the matter of a computer"
or 7min with 2x speed!
1min: visualization of particle lenia
20min: overview of Michael Levin's research on the bioelectric communication of cells for morphogenesis and morphogenic editing without genetic change
or 10min with 2x speed
11min: cs101 how a neural network is actually just line segments (with relu, anyway)
12min: nice intro to what chaos theory is actually about
18min: overview of ways visual proofs can mislea |
771d45d5-c4e5-43bb-b020-7a0891b136e9 | trentmkelly/LessWrong-43k | LessWrong | Constructing Goodhart
A recent question from Scott Garrabrant brought up the issue of formalizing Goodhart’s Law. The problem is to come up with some model system where optimizing for something which is almost-but-not-quite the thing you really want produces worse results than not optimizing at all. Considering how endemic Goodhart’s Law is in the real world, this is surprisingly non-trivial.
Let’s start simple: we have some true objective u(x), and we want to choose x to maximize it. Sadly, we don’t actually have any way to determine the true value u for a given value x — but we can determine u(x)+ϵ(x), where ϵ is some random function of x. People talked about this following Scott’s question, so I won’t math it out here, but the main answer is that more optimization of u+ϵ still improves u on average over a wide variety of assumptions. John Maxwell put it nicely in his answer to Scott’s question:
> If your proxy consists of something you’re trying to maximize plus unrelated noise that’s roughly constant in magnitude, you’re still best off maximizing the heck out of that proxy, because the very highest value of the proxy will tend to be a point where the noise is high and the thing you’re trying to maximize is also high.
In short: absent some much more substantive assumptions, there is no Goodhart effect.
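To see this concretely, here is a minimal simulation (my illustrative sketch, not from the original post; the standard-normal u and ε are assumptions standing in for "unrelated noise of roughly constant magnitude"): even hard optimization of the noisy proxy still finds a point with much higher true u than picking at random.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)      # true objective u(x) for each candidate x
eps = rng.normal(size=n)    # proxy noise, roughly constant in magnitude
proxy = u + eps

print("u at the proxy-optimum:", u[np.argmax(proxy)])  # typically ~3-4
print("u without optimizing:  ", u.mean())             # ~0
```

The proxy-optimum's noise term is large, but so is its true-objective term, which is exactly the point of the quoted answer.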
Rather than generic random functions, I suggest thinking about Goodhart on a causal DAG instead. As an example, I’ll use the old story about soviet nail factories evaluated on number of nails made, and producing huge numbers of tiny useless nails.
We really want to optimize something like the total economic value of nails produced. There’s some complicated causal network leading from the factory’s inputs to the economic value of its outputs (we’ll use a dramatically simplified network as an example).
If we pick a specific cross-section of that network, we find that economic value is mediated by number of nails, size, and strength — those variables are enough to determine the objec |
34d31b46-23ff-4fba-9bd3-cf3511533031 | trentmkelly/LessWrong-43k | LessWrong | Book Review: Why Honor Matters
People who live in honor cultures have a sense of purpose and meaning. They dwell in solidarity with their fellows, are courageous in the face of danger, set great store in hospitality, and put the welfare of the group above their own.
Mostly.
I expect when readers from this site think of honor, it brings to mind Culture of Honor: the Psychology of Violence in the South, the gist of which is that American Southerners kill each other over insults more often than the rest of the country because the biggest chunk of colonial immigrants there were cattle-herders from the border between Scotland and England. I also guess that through Scott Alexander's review of Albion's Seed, the usual view of these people (and how they think) is perhaps unflattering.
In Why Honor Matters, Tamler Sommers offers a defense of honor. It is only a defense, and not an apologia, so he makes no excuses for the evils associated with it (eternal feuds, subjugation of women, etc). He speaks for the general case of honor with a variety of examples, rather than any specific implementation of it. The writing is clear and untechnical.
I.
Why write the book?
> Courage, integrity, solidarity, drama, hospitality, a sense of purpose and meaning - these are attractive values and characteristic, important for living a good and worthwhile life. But there was something else drawing me to honor too, something more fundamental and harder to describe. Though I subscribe to liberal values of toleration and respect for individual freedom, I've come to believe that the Western liberal approach to ethics is deeply misguided. The approach is too systematic, too idealized and abstract - incapable of reckoning with the messy complexity of the real world.
How does honor differ?
> Idealized, systematic, abstract, and universalizable - honor has none of these attributes. Honor, unlike dignity, is not abstract; it's grounded in fact. Honor is real only when people recognize and acknowledge it. Honor codes are local |
d01ec637-f2e7-408b-b06d-c36c277627b7 | trentmkelly/LessWrong-43k | LessWrong | Trying to align humans with inclusive genetic fitness
Epistemic status: I think this post points to some important ideas, I think the specific proposals might have flaws and there are likely better ideas. If you’re interested, I would be interested in other proposals, or converting standard alignment proposals into this frame.
Also, I don’t think any of the proposals in the post are moral or good things to do, obviously. IGF does not seem to be the one true moral imperative.
----------------------------------------
Reasoning about future AIs is hard, we want to be able to talk about systems which “optimize” for “goals”, but we don’t really know what either of these terms mean. It might not be clear if we should talk about an AI “wanting” something, and discussion often gets bogged down in terminology and confusion. But there is at least one example of an optimized system “wanting” something: humans!
Humans are often used as an example of an inner alignment failure, where evolution via natural selection optimizes for inclusive genetic fitness (IGF), and yet humans ended up pursuing goals other than maximizing IGF. I want to demonstrate some key alignment difficulties by pretending we are a god who is attempting to align humans to care about IGF. The aim here is to have humans which are just as intelligent as current humans, but do primarily terminally care about maximizing IGF.
I discuss IGF in this post, but I mostly expect difficulties here to carry over to any similarly abstract and specific concept. I don’t think it is valid to say “humans care about abstract and specific human values, and so it should be easy to make them care about abstract and specific”, because this is basically drawing the target around the arrow.
Ground rules
I am assuming we, as the god, are omnipotent but only as smart as current day humans. We can:
* Smite humans
* Clone humans
* Make humans infertile
* Rewind time
* Provide external stimuli to humans
I am not assuming that the god can just rewire the humans’ brain to make the |
a63b4d53-5a92-49b7-9d5b-05ed8ce0ec21 | trentmkelly/LessWrong-43k | LessWrong | Machine Learning Projects on IDA
TLDR
We wrote a 20-page document that explains IDA and outlines potential Machine Learning projects about IDA. This post gives an overview of the document.
What is IDA?
Iterated Distillation and Amplification (IDA) is a method for training ML systems to solve challenging tasks. It was introduced by Paul Christiano. IDA is intended for tasks where:
* The goal is to outperform humans at the task or to solve instances that are too hard for humans.
* It is not feasible to provide demonstrations or reward signals sufficient for super-human performance at the task
* Humans have a high-level understanding of how to approach the task and can reliably solve easy instances.
The idea behind IDA is to bootstrap using an approach similar to AlphaZero, but with a learned model of steps of human reasoning instead of the fixed game simulator.
Our document provides a self-contained technical description of IDA. For broader discussion of IDA and its relevance to value alignment, see Ought's presentation, Christiano's blogpost, and the Debate paper. There is also a technical ML paper applying IDA to algorithmic problems (e.g. shortest path in a graph).
ML Projects on IDA
Our document outlines three Machine Learning projects on IDA. Our goal in outlining these projects is to generate discussion and encourage research on IDA. We are not (as of June 2019) working on these projects, but we are interested in collaboration. The project descriptions are “high-level” and leave many choices undetermined. If you took on a project, part of the work would be refining the project and fixing a concrete objective, dataset and model.
PROJECT 1: AMPLIFYING MATHEMATICAL REASONING
This project is about applying IDA to problems in mathematics. This would involve learning to solve math problems by breaking them down into easier sub-problems. The problems could be represented in a formal language (as in this paper) or in natural language. We discuss a recent dataset of high-school problems in |
ea136e13-0e75-400a-aebf-17c714a025d0 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations
1 Introduction
---------------
Imitation learning is a popular paradigm to teach robots and other autonomous agents to perform complex tasks simply by showing examples of how to perform the task. However, imitation learning methods typically find policies whose performance is upper-bounded by the performance of the demonstrator. While it is possible to learn policies that perform better than a demonstrator, existing methods either require access to a hand-crafted reward function [1, 2, 3, 4] or a human supervisor who acts as a reward or value function during policy learning [5, 6, 7]. Recent empirical results [7] give evidence that better-than-demonstrator performance can be achieved using ranked demonstrations; however, theoretical conditions for improvement over a demonstrator are lacking. This lack of theory makes it difficult to predict when current imitation learning approaches may exceed the performance of the demonstrator and precludes using theory to design better imitation learning algorithms.
In this paper, we first present theoretical results for when better-than-demonstrator performance is possible in an inverse reinforcement learning (IRL) setting [8], where the goal is to recover a reward function from demonstrations. We then present theoretical results demonstrating that rankings (or alternatively, pairwise preferences) over demonstrations can enable better-than-demonstrator performance by reducing error and ambiguity in the learned reward function. Next, we address the problem of leveraging the benefits of reward learning via ranked demonstrations in a way that does not require human rankings.
Recently, Brown et al. [7] proposed Trajectory-ranked Reward Extrapolation (T-REX), an imitation learning approach that uses a set of ranked demonstrations to learn a reward function that allows better-than-demonstrator performance without requiring human supervision during policy learning. However, requiring a demonstrator to rank demonstrations can be tedious and error prone, and precludes learning from prerecorded, unranked demonstrations, or learning from demonstrations of similar quality that are difficult to rank. Thus, we investigate whether it is possible to generate a set of ranked demonstrations, in order to surpass the performance of a demonstrator, without requiring supervised preference labels or reward information.

(a) Demonstration

(b) Small noise

(c) Larger noise

(d) Learned reward function from ranking: (a)≻(b)≻(c)succeeds𝑎𝑏succeeds𝑐(a)\succ(b)\succ(c)( italic\_a ) ≻ ( italic\_b ) ≻ ( italic\_c )

(e) Optimized policy
Figure 1: D-REX high-level approach: given a suboptimal demonstration (a), we run behavioral cloning to approximate the demonstrator’s policy. By progressively adding more noise to this cloned policy ((b) and (c)), we are able to automatically synthesize a preference ranking: (a)≻(b)≻(c)succeeds𝑎𝑏succeeds𝑐(a)\succ(b)\succ(c)( italic\_a ) ≻ ( italic\_b ) ≻ ( italic\_c ). Using this ranking, we learn a reward function (d) which is then optimized using reinforcement learning to obtain a policy (e) that performs better than the demonstrator.
We propose Disturbance-based Reward Extrapolation (D-REX), a ranking-based reward learning algorithm that does not require ranked demonstrations. Our approach injects noise into a policy learned through behavioral cloning to automatically generate ranked policies of varying performance. D-REX makes the weak assumption that the demonstrations are better than a purely random policy, and that adding increasing levels of noise into a cloned policy will result in increasingly worse performance, converging to a random policy in the limit. Our approach is summarized in Figure 1. The intuition behind this approach is that generating ranked trajectories via noise injection reveals relative weightings between reward features: features that are more prevalent in noisier trajectories are likely inversely related to the reward, whereas features that are more common in noise-free trajectories are likely features which are positively correlated with the true reward. Furthermore, adding noise provides a form of feature selection since, if a feature is equally common across all levels of noise, then it likely has no impact on the true reward function and can be ignored.
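To make the noise-injection step concrete, here is a minimal sketch (our illustrative Python, not code from the paper; it assumes a gym-style environment whose `step` returns `(obs, reward, done, info)`, a discrete action space, and a cloned policy `bc_policy`; the environment's reward is never used):

```python
import numpy as np

def noisy_rollout(env, bc_policy, epsilon, n_actions, horizon, rng):
    """Roll out the cloned policy, replacing each action with a uniformly
    random one with probability epsilon; the env's reward is discarded."""
    trajectory = []
    obs = env.reset()
    for _ in range(horizon):
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))   # injected noise
        else:
            action = bc_policy(obs)                 # cloned demonstrator action
        trajectory.append((obs, action))
        obs, _, done, _ = env.step(action)
        if done:
            break
    return trajectory

def auto_ranked_demos(env, bc_policy, n_actions, horizon,
                      noise_schedule=(0.01, 0.25, 0.5, 0.75, 1.0),
                      rollouts_per_level=5, seed=0):
    """Trajectories generated with less noise are assumed (weakly) better,
    yielding a ranking over noise levels with no human labels."""
    rng = np.random.default_rng(seed)
    return [(eps, [noisy_rollout(env, bc_policy, eps, n_actions, horizon, rng)
                   for _ in range(rollouts_per_level)])
            for eps in noise_schedule]
```

Because the ranking comes only from the noise schedule, trajectories at the same epsilon level carry no preference information relative to each other; only cross-level pairs are ranked.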
By automatically generating rankings, preference-based imitation learning methods [9, 10, 11, 7] can be applied in standard imitation learning domains where rankings are unavailable. We demonstrate this by combining automatic rankings via noise-injections with a state-of-the-art imitation learning algorithm that uses ranked demonstrations [7]. We empirically validate our approach on simulated robotics and Atari benchmarks and find that D-REX results in policies that can both significantly outperform the demonstrator as well as significantly outperform standard imitation learning. To the best of our knowledge, D-REX is the first imitation learning approach to achieve significant performance improvements over the demonstrations without requiring extra supervision or additional side-information, such as ground-truth rewards or human preferences.
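For reference, T-REX [7] trains a reward network so that summed predicted returns respect the ranking, via a pairwise softmax (Bradley-Terry style) cross-entropy; the following is a minimal PyTorch sketch of one such pairwise loss (our illustration of the published idea, with `reward_net` as an assumed interface mapping a batch of states to per-state rewards):

```python
import torch
import torch.nn.functional as F

def ranking_loss(reward_net, traj_lo, traj_hi):
    """Pairwise ranking loss for one automatically-ranked pair, where
    traj_hi (generated with less noise) is preferred over traj_lo.
    Each traj is a tensor of states; a trajectory's predicted return
    is the sum of its per-state predicted rewards."""
    return_lo = reward_net(traj_lo).sum()
    return_hi = reward_net(traj_hi).sum()
    logits = torch.stack([return_lo, return_hi]).unsqueeze(0)  # shape (1, 2)
    target = torch.tensor([1])          # index of the preferred trajectory
    return F.cross_entropy(logits, target)
```

Summing this loss over many cross-noise-level pairs and minimizing it by gradient descent yields the learned reward that is then optimized with reinforcement learning.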
2 Related Work
---------------
Imitation learning has grown increasingly popular in recent years [12, 13], with many applications in robotics [14, 15, 16, 17]. However, little work has addressed the problem of achieving better-than-demonstrator performance.

When ground-truth rewards are known, it is common to initialize a policy using demonstrations and then improve this policy using reinforcement learning [1, 18, 2]. However, designing good reward functions for reinforcement learning can be difficult and can easily lead to unintended behaviors [19, 20].

Inverse reinforcement learning can be used to estimate a demonstrator's intent by inferring a reward function that explains the demonstrations. While there has been some work on learning from suboptimal demonstrations, most approaches either require suboptimal demonstrations to be manually clustered [21] or labeled [22]. Other methods are robust to unlabeled, suboptimal demonstrations, but require most demonstrations to come from an expert in order to correctly identify which demonstrations are anomalous [23, 24]. Syed and Schapire [25] proved that knowing which features contribute positively or negatively to the true reward allows an apprenticeship policy to outperform the demonstrator. However, their approach requires hand-crafted, linear features, knowledge of the true signs of the reward features, and repeatedly solving a Markov decision process.

Preference learning [26] is another way to potentially learn better-than-demonstrator policies. Sadigh et al. [9] and Christiano et al. [6] propose reward learning approaches that use active learning to collect pairwise preference labels. Ibarz et al. [10] and Palan et al. [11] combine demonstrations with active preference learning during policy optimization. Rather than collecting pairwise preferences via active queries, other research has investigated learning better-than-demonstrator policies via prelabeled demonstrations [27, 7]. Brown et al. [7] propose Trajectory-ranked Reward Extrapolation (T-REX), an algorithm that uses a set of pre-ranked demonstrations to learn a reward function for policy optimization. Brown et al. evaluate T-REX on a variety of MuJoCo and Atari benchmarks and show that policies optimized via T-REX are able to consistently surpass the performance of a suboptimal demonstrator, but provide no theory to shed light on when extrapolation is possible. Our work provides a better theoretical understanding of when better-than-demonstrator performance is possible and why ranked demonstrations can help. Furthermore, our work demonstrates for the first time that ranking-based imitation learning approaches are applicable even in cases where human rankings are unavailable.

Prior work on imitation learning has investigated the use of random or noisy trajectories. Boularias et al. [28] and Kalakrishnan et al. [29] use uniformly random and locally perturbed trajectories, respectively, to estimate the partition function for Maximum Entropy inverse reinforcement learning [30]. Both methods seek a linear combination of predefined features such that the returns of the demonstrations are maximized with respect to the random trajectories. These methods can be seen as a special case of our proposed method, where only one level of noise is used and where the reward function is represented as a linear combination of known features.

Disturbances for Augmenting Robot Trajectories (DART) [31] is a recently proposed behavioral cloning approach that adds noise during demonstrations to collect a richer set of state-action pairs for behavioral cloning. DART avoids the problem of compounding error that is common to most behavioral cloning approaches by repeatedly requesting and perturbing new demonstrations. Instead of repeatedly collecting perturbed trajectories from the demonstrator, we propose to collect a small number of initial demonstrations, run behavioral cloning once, and then inject varying amounts of noise into the cloned policy. This automatically creates a large set of ranked demonstrations for reward learning, without requiring a human to provide preference labels.
3 Problem Statement
--------------------
Our goal is to achieve better-than-demonstrator performance via imitation learning. We model the environment as a Markov decision process (MDP) consisting of a set of states $\mathcal{S}$, actions $\mathcal{A}$, transition probabilities $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$, reward function $R^*: \mathcal{S} \rightarrow \mathbb{R}$, and discount factor $\gamma \in [0,1)$.
A policy $\pi$ is a probability distribution over actions given a state.
Given a policy and an MDP, the expected discounted return of the policy is $J(\pi|R^*) = \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t R^*(s_t)]$. Similarly, the return of a trajectory consisting of states and actions, $\tau = (s_0, a_0, s_1, a_1, \ldots, s_T, a_T)$, is $J(\tau|R^*) = \sum_{t=0}^{T} \gamma^t R^*(s_t)$.
We assume that we have no access to the true reward function of the MDP. Instead, we are given a set of $m$ demonstrations $\mathcal{D} = \{\tau_1, \ldots, \tau_m\}$, where each demonstrated trajectory is a sequence of states and actions, $\tau_i = (s_0, a_0, s_1, a_1, \ldots)$. We assume that the demonstrator is attempting (possibly unsuccessfully) to follow a policy that optimizes the true reward function $R^*$. Given the demonstrations $\mathcal{D}$, we wish to find a policy $\hat{\pi}$ that can extrapolate beyond the performance of the demonstrator. We say a policy $\hat{\pi}$ extrapolates beyond the performance of the demonstrator if it achieves a larger expected return than the average demonstration, when evaluated under the true reward function $R^*$, i.e., $J(\hat{\pi}|R^*) > J(\mathcal{D}|R^*)$, where $J(\mathcal{D}|R^*) = \frac{1}{|\mathcal{D}|}\sum_{\tau \in \mathcal{D}} J(\tau|R^*)$. Similarly, we say that a learned policy $\hat{\pi}$ extrapolates beyond the performance of the best demonstration if $J(\hat{\pi}|R^*) > \max_{\tau \in \mathcal{D}} J(\tau|R^*)$.
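As a concrete illustration, the trajectory return $J(\tau|R^*)$ and the average demonstration return $J(\mathcal{D}|R^*)$ amount to the following minimal Python sketch (ours, for illustration; `rewards` is a per-timestep reward sequence for one trajectory):

```python
def discounted_return(rewards, gamma):
    """J(tau | R) = sum_t gamma^t * R(s_t) for a single trajectory."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def demo_set_return(demo_rewards, gamma):
    """J(D | R): average discounted return over a set of trajectories."""
    returns = [discounted_return(rs, gamma) for rs in demo_rewards]
    return sum(returns) / len(returns)
```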
4 Extrapolating Beyond a Demonstrator: Theory
----------------------------------------------
We first provide a sufficient condition under which it is possible to achieve better-than-demonstrator performance in an inverse reinforcement learning (IRL) setting, where the goal is to recover the demonstrator’s reward function which is then used to optimize a policy [[13](#bib.bib13)].
We consider a learner that approximates the reward function of the demonstrator with a linear combination of features: $R(s) = w^T\phi(s)$. (Our results also hold for reward functions of the form $R(s,a) = w^T\phi(s,a)$.) These can be arbitrarily complex features, such as the activations of a deep neural network. The expected return of a policy when evaluated on $R(s)$ is given by

$$J(\pi|R) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty}\gamma^t R(s_t)\Big] = w^T\,\mathbb{E}_\pi\Big[\sum_{t=0}^{\infty}\gamma^t \phi(s_t)\Big] = w^T\Phi_\pi, \tag{1}$$

where $\Phi_\pi$ denotes the expected discounted feature counts that result from following the policy $\pi$.
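The feature counts $\Phi_\pi$ can be estimated by Monte Carlo from sampled rollouts. A minimal sketch (ours, for illustration), where `phi` maps a state to its feature vector:

```python
import numpy as np

def feature_counts(trajectories, phi, gamma):
    """Monte Carlo estimate of Phi_pi = E_pi[sum_t gamma^t phi(s_t)]."""
    totals = [
        sum(gamma ** t * phi(s) for t, s in enumerate(traj))
        for traj in trajectories  # each traj is a sequence of states
    ]
    return np.mean(totals, axis=0)

def linear_return(w, phi_pi):
    """J(pi | R) = w^T Phi_pi for a linear reward R(s) = w^T phi(s), as in Eq. (1)."""
    return np.dot(w, phi_pi)
```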
######
Theorem 1.
If the estimated reward function is $\hat{R}(s) = w^T\phi(s)$, the true reward function is $R^*(s) = \hat{R}(s) + \epsilon(s)$ for some error function $\epsilon: \mathcal{S} \rightarrow \mathbb{R}$, and $\|w\|_1 \leq 1$, then extrapolation beyond the demonstrator, i.e., $J(\hat{\pi}|R^*) > J(\mathcal{D}|R^*)$, is guaranteed if

$$J(\pi^*_{R^*}|R^*) - J(\mathcal{D}|R^*) > \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1-\gamma} \tag{2}$$

where $\pi^*_{R^*}$ is the optimal policy under $R^*$, $\epsilon_\Phi = \|\Phi_{\pi^*_{R^*}} - \Phi_{\hat{\pi}}\|_\infty$, and $\|\epsilon\|_\infty = \sup\{|\epsilon(s)| : s \in \mathcal{S}\}$.
All proofs are given in the appendix.
Intuitively, extrapolation depends on the demonstrator being sufficiently suboptimal, the error in the learned reward function being sufficiently small, and the state occupancy of the imitation policy $\hat{\pi}$ being sufficiently close to that of $\pi^*_{R^*}$. If we can perfectly recover the reward function, then reinforcement learning can be used to ensure that $\epsilon_\Phi$ is small. Thus, we focus on improving the accuracy of the learned reward function via automatically-ranked demonstrations. The learned reward function can then be optimized with any reinforcement learning algorithm [[32](#bib.bib32)].
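Once estimates of the relevant quantities are available, the sufficient condition of Eq. (2) reduces to a one-line check; a minimal sketch (the variable names are ours):

```python
def extrapolation_guaranteed(j_opt, j_demo, eps_phi, eps_inf, gamma):
    """Sufficient condition of Eq. (2): the demonstrator's suboptimality
    gap must exceed the combined feature-count and reward errors."""
    return (j_opt - j_demo) > eps_phi + 2.0 * eps_inf / (1.0 - gamma)
```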
###
4.1 Extrapolation via ranked demonstrations
The previous results demonstrate that in order to extrapolate beyond a suboptimal demonstrator, it is sufficient to have small reward approximation error and a good policy optimization algorithm. However, the following proposition, adapted from [[33](#bib.bib33)], shows that the reward function learned by standard IRL may be quite superficial and miss potentially important details, whereas enforcing a ranking over trajectories leads to a more accurate estimate of the true reward function.
######
Proposition 1.
There exist MDPs with true reward function $R^*$, expert policy $\pi_E$, approximate reward function $\hat{R}$, and non-expert policies $\pi_1$ and $\pi_2$, such that

$$\pi_E = \arg\max_{\pi \in \Pi} J(\pi|R^*) \;\text{ and }\; J(\pi_1|R^*) \ll J(\pi_2|R^*) \tag{3}$$

$$\pi_E = \arg\max_{\pi \in \Pi} J(\pi|\hat{R}) \;\text{ and }\; J(\pi_1|\hat{R}) = J(\pi_2|\hat{R}). \tag{4}$$

However, enforcing a preference ranking over trajectories, $\tau^* \succ \tau_2 \succ \tau_1$, where $\tau^* \sim \pi^*$, $\tau_2 \sim \pi_2$, and $\tau_1 \sim \pi_1$, results in a learned reward function $\hat{R}$ such that

$$\pi_E = \arg\max_{\pi \in \Pi} J(\pi|\hat{R}) \;\text{ and }\; J(\pi_1|\hat{R}) < J(\pi_2|\hat{R}). \tag{5}$$
Proposition [1](#Thmproposition1 "Proposition 1. ‣ 4.1 Extrapolation via ranked demonstrations ‣ 4 Extrapolating Beyond a Demonstrator: Theory ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") proves the existence of MDPs where an approximation of the true reward leads to an optimal policy, yet the learned reward reveals little about the underlying reward structure of the MDP.
This is problematic for several reasons. The first problem is that if the learned reward function is drastically different from the true reward, this can lead to poor generalization. Another problem is that many learning-from-demonstration methods are motivated by providing non-experts with the ability to program by example. Some non-experts will be good at personally performing a task, but may struggle when giving kinesthetic demonstrations [[34](#bib.bib34)] or teleoperating a robot [[35](#bib.bib35), [36](#bib.bib36)].
Other non-experts may not be able to personally perform a task at a high level of performance due to lack of precision or timing, or due to physical limitations or impairment. Thus, the standard IRL approach of finding a reward function that maximizes the likelihood of the demonstrations may lead to an incorrect, superficial reward function that overfits to suboptimal user behavior in the demonstrations.
Indeed, it has been proven that it is impossible to recover the correct reward function without additional information beyond observations, regardless of whether the policy is optimal [[37](#bib.bib37)] or suboptimal [[38](#bib.bib38)]. As demonstrated in Proposition [1](#Thmproposition1 "Proposition 1. ‣ 4.1 Extrapolation via ranked demonstrations ‣ 4 Extrapolating Beyond a Demonstrator: Theory ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"), preference rankings can help to alleviate reward function ambiguity. If the true reward function is a linear combination of features, then the feasible region of all reward functions that make a policy optimal can be defined as an intersection of half-planes [[39](#bib.bib39)]:
$$H_\pi = \bigcap_{\pi' \in \Pi} \{w : w^T(\Phi_\pi - \Phi_{\pi'}) \geq 0\}.$$

We define the reward ambiguity $G(H_\pi)$ as the volume of this intersection of half-planes, $G(H_\pi) = \text{Volume}(H_\pi)$, where we assume without loss of generality that $\|w\| \leq 1$ to ensure this volume is bounded.
In Appendix [A.2](#A1.SS2 "A.2 Ranking Theory ‣ Appendix A Theory and Proofs ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") we prove that a total ranking over policies can result in less reward ambiguity than performing IRL on the optimal policy.
######
Proposition 2.
Given a policy class $\Pi$, an optimal policy $\pi^* \in \Pi$, a total ranking over $\Pi$, and true reward function $R^*(s) = w^T\phi(s)$, the reward ambiguity resulting from $\pi^*$ is greater than or equal to the reward ambiguity of using the total ranking, i.e., $G(H_{\pi^*}) \geq G(H_{\rm ranked})$.
Learning a reward function that respects a set of strictly ranked demonstrations avoids some of the ill-posedness of IRL [[37](#bib.bib37)] by eliminating a constant, or all-zero reward function. Furthermore, ranked demonstrations provide explicit information about both what to do as well as what not to do in an environment and each pairwise preference over trajectories gives a half-space constraint on feasible reward functions. In Appendix [A.3](#A1.SS3 "A.3 Uncertainty Reduction for Random Halfspaces ‣ Appendix A Theory and Proofs ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") we prove that sampling random half-space constraints results in an exponential decrease in reward function ambiguity.
######
Corollary 1.
To reduce reward function ambiguity by $x\%$ it suffices to have $k = \log_2(1/(1 - x/100))$ random half-space constraints over the reward function weights.
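The bound in Corollary 1 is easy to evaluate numerically; for example, a single random constraint halves the ambiguity, and seven suffice for a 99% reduction (a small illustrative snippet, ours):

```python
import math

def halfspaces_needed(percent_reduction):
    """k = log2(1 / (1 - x/100)) half-space constraints suffice to
    reduce reward ambiguity by x percent (Corollary 1)."""
    return math.log2(1.0 / (1.0 - percent_reduction / 100.0))

print(halfspaces_needed(50))  # 1.0: one constraint halves the ambiguity
print(halfspaces_needed(99))  # ~6.64: seven constraints give a 99% reduction
```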
In practice, sampling random half-space constraints on the ground-truth reward function is infeasible. Instead, our proposed approach for better-than-demonstrator imitation learning uses noise injection to produce a wide variety of automatically-ranked demonstrations in order to reduce the learner's reward function ambiguity. As we show in the next section, automatically generating preferences over demonstrations also improves the efficiency of IRL by removing the need for an MDP solver in the inner loop, turning IRL into a supervised learning problem [[7](#bib.bib7)]. Amin and Singh [[40](#bib.bib40)] proved that a logarithmic number of demonstrations from a family of MDPs with different transition dynamics is sufficient to resolve reward ambiguity in IRL. We generate ranked trajectories via noise injection, which can be seen as an efficient heuristic for generating demonstrations under different transition dynamics.
5 Algorithm
------------
We now describe our approach for achieving better-than-demonstrator imitation learning without requiring human-provided preference labels. We first briefly review a recent state-of-the-art IRL algorithm that learns from ranked demonstrations. We then describe our proposed approach to generate these rankings automatically via noise injection. Videos and code are available at the project website: <https://dsbrown1331.github.io/CoRL2019-DREX/>.
###
5.1 Trajectory-ranked Reward Extrapolation (T-REX)
Given a sequence of $m$ demonstrations ranked from worst to best, $\tau_1, \ldots, \tau_m$, T-REX [[7](#bib.bib7)] first performs reward inference by approximating the reward at state $s$ with a neural network $\hat{R}_\theta(s)$, such that $\sum_{s \in \tau_i} \hat{R}_\theta(s) < \sum_{s \in \tau_j} \hat{R}_\theta(s)$ when $\tau_i \prec \tau_j$.
The reward function $\hat{R}_\theta$ is learned via supervised learning, using a pairwise ranking loss [[41](#bib.bib41)] based on the Luce-Shephard choice rule [[42](#bib.bib42)]:

$$\mathcal{L}(\theta) \approx -\frac{1}{|\mathcal{P}|} \sum_{(i,j) \in \mathcal{P}} \log \frac{\exp \sum_{s \in \tau_j} \hat{R}_\theta(s)}{\exp \sum_{s \in \tau_i} \hat{R}_\theta(s) + \exp \sum_{s \in \tau_j} \hat{R}_\theta(s)}, \tag{6}$$

where $\mathcal{P} = \{(i,j) : \tau_i \prec \tau_j\}$.
After learning a reward function, T-REX can be combined with any RL algorithm to optimize a policy $\hat{\pi}$ with respect to $\hat{R}_\theta(s)$. Brown et al. [[7](#bib.bib7)] demonstrated that T-REX typically results in policies that perform significantly better than the best demonstration.
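Concretely, the ranking loss in Eq. (6) is a binary cross-entropy over summed per-state rewards. A minimal PyTorch sketch (ours, not the authors' released code); `reward_net` is assumed to map a batch of states to per-state reward estimates:

```python
import torch
import torch.nn.functional as F

def trex_loss(reward_net, pairs):
    """Pairwise ranking loss of Eq. (6): treat the summed predicted
    returns of each (worse, better) trajectory pair as two logits and
    maximize the probability that the preferred trajectory wins."""
    losses = []
    for worse, better in pairs:  # tensors of states, shape (T, state_dim)
        returns = torch.stack([reward_net(worse).sum(),
                               reward_net(better).sum()])
        target = torch.tensor([1])  # index of the preferred trajectory
        losses.append(F.cross_entropy(returns.unsqueeze(0), target))
    return torch.stack(losses).mean()
```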
###
5.2 Disturbance-based Reward Extrapolation (D-REX)
Algorithm 1 D-REX: Disturbance-based Reward Extrapolation

0: Input: demonstrations $\mathcal{D}$, noise schedule $\mathcal{E}$, number of rollouts $K$
1: Run behavioral cloning on demonstrations $\mathcal{D}$ to obtain policy $\pi_{\rm BC}$
2: for $\epsilon_i \in \mathcal{E}$ do
3: Generate a set of $K$ trajectories from the noise-injected policy $\pi_{\rm BC}(\cdot|\epsilon_i)$
4: end for
5: Generate automatic preference labels $\tau_i \prec \tau_j$ if $\tau_i \sim \pi_{\rm BC}(\cdot|\epsilon_i)$, $\tau_j \sim \pi_{\rm BC}(\cdot|\epsilon_j)$, and $\epsilon_i > \epsilon_j$
6: Run T-REX [[7](#bib.bib7)] on the automatically ranked trajectories to obtain $\hat{R}$
7: Optimize policy $\hat{\pi}$ using reinforcement learning with reward function $\hat{R}$
8: return $\hat{\pi}$
We now describe Disturbance-based Reward Extrapolation (D-REX), our proposed approach for automatically generating ranked demonstrations, summarized in Algorithm [1](#alg1 "Algorithm 1 ‣ 5.2 Disturbance-based Reward Extrapolation (D-REX) ‣ 5 Algorithm ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"). We first take a set of unranked demonstrations and use behavioral cloning to learn a policy $\pi_{\rm BC}$. Behavioral cloning [[43](#bib.bib43)] treats each state-action pair $(s,a) \in \mathcal{D}$ as a training example and seeks a policy $\pi_{\rm BC}$ that maps from states to actions. We model $\pi_{\rm BC}$ using a neural network with parameters $\theta_{\rm BC}$ and find these parameters using maximum-likelihood estimation:

$$\theta_{\rm BC} = \arg\max_\theta \prod_{(s,a) \in \mathcal{D}} \pi_{\rm BC}(a|s).$$

By virtue of this optimization procedure, $\pi_{\rm BC}$ will usually perform only as well as the average performance of the demonstrator; at best it may perform slightly better than the demonstrator, if the demonstrator makes mistakes approximately uniformly at random.
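For discrete actions, maximizing this likelihood is equivalent to minimizing a cross-entropy loss over the demonstrated state-action pairs. A minimal sketch (ours, for illustration), assuming `demos` is a list of `(state_tensor, action_index)` pairs and `policy_net` outputs action logits:

```python
import torch
import torch.nn.functional as F

def behavioral_cloning(policy_net, demos, epochs=100, lr=1e-3):
    """Maximum-likelihood behavioral cloning: minimizing this
    cross-entropy maximizes prod_{(s,a) in D} pi(a | s)."""
    states = torch.stack([s for s, a in demos])
    actions = torch.tensor([a for s, a in demos])
    opt = torch.optim.Adam(policy_net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(policy_net(states), actions)  # -log pi(a|s)
        loss.backward()
        opt.step()
    return policy_net
```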
Our main insight is that if $\pi_{\rm BC}$ performs significantly better than a completely random policy, then we can inject noise into $\pi_{\rm BC}$ and interpolate between the performance of $\pi_{\rm BC}$ and the performance of a uniformly random policy. In Appendix [B.1](#A2.SS1 "B.1 Noise Injection Theory ‣ Appendix B Noise Degradation ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"), we prove that given a noise schedule $\mathcal{E} = (\epsilon_1, \epsilon_2, \ldots, \epsilon_d)$ consisting of a sequence of noise levels such that $\epsilon_1 > \epsilon_2 > \ldots > \epsilon_d$, then with high probability, $J(\pi_{\rm BC}(\cdot|\epsilon_1)) < J(\pi_{\rm BC}(\cdot|\epsilon_2)) < \cdots < J(\pi_{\rm BC}(\cdot|\epsilon_d))$.

Given a noise level $\epsilon \in \mathcal{E}$, we inject noise via an $\epsilon$-greedy policy: with probability $1-\epsilon$ the action is chosen according to $\pi_{\rm BC}$, and with probability $\epsilon$ the action is chosen uniformly at random within the action range.
For every $\epsilon$, we generate $K$ policy rollouts and thus obtain $K \times d$ ranked demonstrations, where each trajectory is ranked based on the noise level that generated it, with trajectories considered of equal preference if generated from the same noise level.
Thus, by generating rollouts from $\pi_{\rm BC}(\cdot|\epsilon)$ with varying levels of noise, we can obtain an arbitrarily large number of ranked demonstrations:

$$D_{\rm ranked} = \{\tau_i \prec \tau_j : \tau_i \sim \pi_{\rm BC}(\cdot|\epsilon_i),\ \tau_j \sim \pi_{\rm BC}(\cdot|\epsilon_j),\ \epsilon_i > \epsilon_j\}. \tag{7}$$

Given these ranked demonstrations, we then use T-REX to learn a reward function $\hat{R}$, from which we can optimize a policy $\hat{\pi}$ using any reinforcement learning algorithm (see Appendix [E](#A5 "Appendix E D-REX Details ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") for details).
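Steps 2–5 of Algorithm 1 amount to only a few lines of code. Below is a minimal Python sketch (ours, not the authors' released implementation); `rollout` is an assumed helper that plays one episode with the given policy and returns the trajectory:

```python
import random

def epsilon_greedy(pi_bc, n_actions, eps):
    """With probability eps act uniformly at random; otherwise follow pi_bc."""
    def policy(state):
        if random.random() < eps:
            return random.randrange(n_actions)
        return pi_bc(state)
    return policy

def generate_ranked_demos(pi_bc, env, noise_schedule, n_actions, k):
    """Steps 2-5 of Algorithm 1: roll out K trajectories per noise level;
    trajectories from a noisier policy are automatically ranked as worse."""
    by_noise = []
    for eps in noise_schedule:  # sorted from most noise to least
        noisy = epsilon_greedy(pi_bc, n_actions, eps)
        # rollout(env, policy) (assumed) plays one episode, returns states
        by_noise.append([rollout(env, noisy) for _ in range(k)])
    pairs = []  # (worse, better) pairs across distinct noise levels
    for i, worse_set in enumerate(by_noise):
        for better_set in by_noise[i + 1:]:
            pairs.extend((w, b) for w in worse_set for b in better_set)
    return pairs
```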
6 Experimental Results
-----------------------
###
6.1 Automatically generating rankings via noise
To test whether injecting noise can create high-quality automatic rankings, we used simulated suboptimal demonstrations from a partially trained reinforcement learning agent. To do so, we used the Proximal Policy Optimization (PPO) [[44](#bib.bib44)] implementation from OpenAI Baselines [[45](#bib.bib45)] to partially train a policy on the ground-truth reward function. We then ran behavioral cloning on these demonstrations and plotted the degradation in policy performance for increasing values of $\epsilon$.
We evaluated noise degradation on the Hopper and Half-Cheetah domains in MuJoCo and on the seven Atari games listed in Table [1](#S6.T1 "Table 1 ‣ 6.3 Extrapolating beyond the demonstrator’s performance ‣ 6 Experimental Results ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations").
To perform behavioral cloning, we used one suboptimal demonstration trajectory of length 1,000 for the MuJoCo tasks and 10 suboptimal demonstrations for the Atari games. We then varied $\epsilon$ and generated rollouts at different noise levels. We plotted the average return along with one-standard-deviation error bars in Figure [2](#S6.F2 "Figure 2 ‣ 6.1 Automatically generating rankings via noise ‣ 6 Experimental Results ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") (see Appendix [B](#A2 "Appendix B Noise Degradation ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") for details). We found that behavioral cloning with little injected noise performs close to the average performance of the demonstrator. As noise is added, performance degrades until it reaches the level of a uniformly random policy ($\epsilon = 1$). These plots validate our assumption that, in expectation, injecting increasing amounts of noise causes near-monotonic performance degradation.

(a) Hopper

(b) Half-Cheetah

(c) Beam Rider

(d) Seaquest
Figure 2: Examples of the degradation in performance of an imitation policy learned via behavioral cloning as more noise is injected into the policy. Behavioral cloning is done on a 1,000-length trajectory (MuJoCo tasks) or 10 demonstrations (Atari games). Plots show mean and standard deviations over 5 rollouts (MuJoCo tasks) or 20 rollouts (Atari games).
###
6.2 Reward extrapolation
We next tested whether D-REX allows for accurate reward extrapolation.
We used noise injection, as described in the previous section, to generate 100 synthetically-ranked demonstrations. For MuJoCo, we used a noise schedule consisting of 20 noise levels, evenly spaced over the interval $[0,1)$, and generated $K=5$ rollouts per noise level. For Atari, we used the noise schedule $\mathcal{E} = (1.0, 0.75, 0.5, 0.25, 0.02)$ with $K=20$ rollouts per noise level. By automatically generating ranked demonstrations, D-REX is able to leverage a small number of unranked demonstrations to create a large dataset of ranked demonstrations for reward function approximation. We used the T-REX algorithm [[7](#bib.bib7)] to learn a reward function from these synthetically ranked demonstrations.
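For reference, the two schedules can be written down directly (the exact MuJoCo spacing below is our reading of "evenly spaced over $[0,1)$"):

```python
import numpy as np

# 20 noise levels evenly spaced over [0, 1) for the MuJoCo tasks, K = 5
mujoco_schedule = np.arange(20) / 20.0         # 0.0, 0.05, ..., 0.95
# Fixed schedule for the Atari games, K = 20
atari_schedule = [1.0, 0.75, 0.5, 0.25, 0.02]
```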
To investigate how well D-REX learns the true reward function, we evaluated the learned reward function $\hat{R}_\theta$ on the original demonstrations and on the synthetic demonstrations obtained via noise injection. We then compared the ground-truth returns with the predicted returns under $\hat{R}_\theta$. We also tested reward extrapolation on a held-out set of trajectories obtained from PPO policies that were trained longer on the ground-truth reward than the policy used to generate the demonstrations for D-REX. These additional trajectories allow us to measure how well the learned reward function extrapolates beyond the performance of the original demonstrations. We scale all predicted returns to be in the same range as the ground-truth returns. The results for four of the tasks are shown in Figure [3](#S6.F3 "Figure 3 ‣ 6.2 Reward extrapolation ‣ 6 Experimental Results ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"). The remaining plots are included in Appendix [G](#A7 "Appendix G Reward Extrapolation and Attention Heatmaps ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"). The plots show relatively strong correlation between ground-truth returns and predicted returns across most tasks, despite the learner having no a priori access to true returns or rankings.
We also generated reward sensitivity heat maps [[46](#bib.bib46)] for the learned reward functions. These visualizations provide evidence that D-REX learns semantically meaningful features that are highly correlated with the ground truth reward. For example, on Seaquest, the reward function learns a shaped reward that gives a large penalty for an imminent collision with an enemy (Appendix [G](#A7 "Appendix G Reward Extrapolation and Attention Heatmaps ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations")).

(a) Hopper

(b) Half-Cheetah

(c) Beam Rider

(d) Seaquest
Figure 3: Extrapolation plots for a selection of MuJoCo and Atari tasks (see the appendix for more plots). Blue dots represent synthetic demonstrations generated via behavioral cloning with different amounts of noise injection. Red dots represent actual demonstrations, and green dots represent additional trajectories not seen during training. We compare ground truth returns over demonstrations to the predicted returns from D-REX (normalized to be in the same range as the ground truth returns).
###
6.3 Extrapolating beyond the demonstrator’s performance
Lastly, we tested whether the reward functions learned using D-REX can be used in conjunction with deep reinforcement learning to achieve better-than-demonstrator performance. We ran PPO on the learned reward function $\hat{R}_\theta$ for 1 million timesteps (MuJoCo tasks) or 50 million frames (Atari games). We ran three replicates of PPO with different seeds and report the best performance on the ground-truth reward function, averaged over 20 trajectory rollouts. Table [1](#S6.T1 "Table 1 ‣ 6.3 Extrapolating beyond the demonstrator’s performance ‣ 6 Experimental Results ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") compares the performance of the demonstrator with the performance of D-REX, behavioral cloning (BC), and Generative Adversarial Imitation Learning (GAIL) [[47](#bib.bib47)], a state-of-the-art imitation learning algorithm.
The results in Table [1](#S6.T1 "Table 1 ‣ 6.3 Extrapolating beyond the demonstrator’s performance ‣ 6 Experimental Results ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") demonstrate that policies optimized using D-REX outperform the best demonstration in all tasks except for Pong. Furthermore, D-REX is also able to outperform BC and GAIL across all tasks except for Hopper and Pong.
On the simulated MuJoCo robotics tasks, D-REX results in a 77% (Hopper) and 418% (HalfCheetah) performance increase when compared with the best demonstration.
On Q\*Bert, D-REX exploits a known loophole in the game which allows nearly infinite points. Excluding Q\*Bert, D-REX results in an average performance increase of 39% across the Atari tasks, when compared with the best demonstration.
To test the robustness of the policy learned via D-REX, we also considered the worst-case performance, something that is important for safe inverse reinforcement learning [[48](#bib.bib48), [49](#bib.bib49), [50](#bib.bib50)]. As shown in Appendix [C](#A3 "Appendix C Risk Analysis ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations"), D-REX achieves better worst-case performance than either the demonstrator or standard imitation learning algorithms. To ensure that D-REX is learning more than a simple bonus for staying alive, we also compared D-REX with a PPO agent trained with a +1 reward for every timestep. Our results in Appendix [D](#A4 "Appendix D Live-Long Baseline ‣ Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations") demonstrate that D-REX is superior to a simple +1 reward across all games, except for Pong.
Table 1: Comparison of the performance of D-REX with behavioral cloning (BC), GAIL [[47](#bib.bib47)], and the demonstrator’s performance. Results are the best average ground-truth returns over 3 random seeds with 20 trials per seed. Bold denotes performance that is better than the best demonstration.
| Task | Demos Avg. | Demos Best | D-REX Avg. | D-REX Stdev. | BC Avg. | BC Stdev. | GAIL Avg. | GAIL Stdev. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hopper | 1,029.1 | 1,167.9 | **2,072.0** | (1,574.2) | 943.8 | (208.4) | **2,700.2** | (692.3) |
| HalfCheetah | 187.7 | 187.7 | **972.9** | (96.1) | -115.9 | (179.8) | 85.2 | (86.0) |
| Beam Rider | 1,524.0 | 2,216.0 | **7,220.0** | (2,221.9) | 1,268.6 | (776.6) | 1,778.0 | (787.1) |
| Breakout | 34.5 | 59.0 | **94.7** | (16.5) | 29.75 | (10.1) | 0.3 | (0.4) |
| Enduro | 85.5 | 134.0 | **247.9** | (88.4) | 83.4 | (27.0) | 62.4 | (24.0) |
| Pong | 3.7 | 14.0 | -9.5 | (9.8) | 8.6 | (9.5) | -3.4 | (3.8) |
| Q\*bert | 770.0 | 850.0 | **22,543.8** | (7,434.1) | **1,013.75** | (721.1) | 737.5 | (311.4) |
| Seaquest | 524.0 | 720.0 | **801.0** | (4.4) | 530.0 | (109.8) | 554.0 | (108.8) |
| Space Invaders | 538.5 | 930.0 | **1,122.5** | (501.2) | 426.5 | (187.1) | 364.8 | (139.7) |
7 Conclusion
-------------
Imitation learning approaches are typically unable to outperform the demonstrator. This is because most approaches either directly mimic the demonstrator or find a reward function that makes the demonstrator appear near optimal. While algorithms that can exceed the performance of a demonstrator exist, they either rely on a significant number of active queries from a human [[6](#bib.bib6), [9](#bib.bib9), [11](#bib.bib11)], a hand-crafted reward function [[18](#bib.bib18), [2](#bib.bib2)], or pre-ranked demonstrations [[7](#bib.bib7)]. Furthermore, prior research has lacked theory about when better-than-demonstrator performance is possible. We first addressed this lack of theory by presenting a sufficient condition for extrapolating beyond the performance of a demonstrator. We also provided theoretical results demonstrating how preferences and rankings allow for better reward function learning by reducing the learner’s uncertainty over the true reward function.
We next focused on making reward learning from rankings more applicable to a wider variety of imitation learning tasks where only unlabeled demonstrations are available. We presented a novel imitation learning algorithm, Disturbance-based Reward Extrapolation (D-REX) that automatically generates ranked demonstrations via noise injection and uses these demonstrations to seek to extrapolate beyond the performance of a suboptimal demonstrator. We empirically evaluated D-REX on a set of simulated robot locomotion and Atari tasks and found that D-REX outperforms state-of-the-art imitation learning techniques and also outperforms the best demonstration in 8 out of 9 tasks. These results provide the first evidence that better-than-demonstrator imitation learning is possible without requiring extra information such as rewards, active supervision, or preference labels. Our results open the door to the application of a variety of ranking and preference-based learning techniques [[41](#bib.bib41), [51](#bib.bib51)] to standard imitation learning domains where only unlabeled demonstrations are available.
#### Acknowledgments
This work has taken place in the Personal Autonomous Robotics Lab (PeARL) at The University of Texas at Austin. PeARL research is supported in part by the NSF (IIS-1724157, IIS-1638107, IIS-1617639, IIS-1749204) and ONR (N00014-18-2243).
What is "Do what I mean"?
=========================
“Do what I mean” is an alignment strategy in which the AI is programmed to try to do what the human meant by an instruction, rather than following the literal interpretation of the explicit instruction (akin to following the spirit of the law over the letter). This potentially helps with alignment in two ways. First, it might allow the AI to learn more subtle goals, which you might not have been able to state explicitly. Second, it might make the AI [corrigible](/?state=87AG&question=What%20is%20corrigibility%3F): willing to have its goals or programming corrected, and continuously interested in what people want (including allowing itself to be shut off if need be). Since it is programmed to "do what you mean", it will be open to accepting correction.
This approach contrasts with the more typical “do what I say” approach of programming an AI by giving it an explicit goal. The problem with an explicit goal is that if the goal is misstated, or leaves out some detail, the AI will optimize for something we don’t want. Think of the story of King Midas, who wished that everything he touched would turn to gold, and then starved because his food turned to gold too.
One specific "Do what I mean" proposal is ["Cooperative Inverse Reinforcement Learning"](https://www.lesswrong.com/tag/inverse-reinforcement-learning), in which the goal is hidden from the AI. Since it doesn't have direct access to its reward function, the AI tries to discover the goal from the things you tell it and from the examples you give it. Thus, it slowly gets closer to doing what you actually want.
For more information, see [Do what we mean vs. do what we say](https://www.lesswrong.com/posts/8Q5h6hyBXTEgC6EZf/do-what-i-mean-vs-do-what-i-say) by Rohin Shah, in which he defines a "do what we mean" system, shows how it might help with alignment, and discusses how it could be combined with a "do what we say" subsystem for added safety.
For a discussion of a spectrum of different levels of "do what I mean" ability, see [Do What I Mean hierarchy](https://arbital.greaterwrong.com/p/dwim) by Eliezer Yudkowsky.
Exploring Hierarchy-Aware Inverse Reinforcement Learning
=========================================================
1 Introduction
---------------
As Reinforcement Learning (RL) algorithms have become more and more capable, we are increasingly aware of
the limitations of how we specify their goals. While these goals can be hand-crafted
for simple environments, this approach requires expert knowledge in the domain.
If we are to eventually use AI to perform tasks that are beyond human abilities
(e.g. ‘plan a city’), we have to develop a more robust method of goal specification.
Our algorithms would ideally be able to learn what goals they should pursue by
inferring human preferences: this is often known as *value learning*, or
*preference elicitation*.
A leading approach to value learning from observed human actions
is inverse optimal control (Kálmán, [1960](#bib.bib9)) or inverse reinforcement learning (IRL),
formalised by Ng & Russell ([2000](#bib.bib11)) and
Abbeel & Ng ([2004](#bib.bib1)).
In IRL we treat human behaviour as planning in a Markov decision process (MDP)
and aim to find a reward function that explains observed trajectories of human agents.
While we may naively assume that human beings always act perfectly to
achieve their goals (the ‘principle of revealed preference’ in
economics (Samuelson, [1938](#bib.bib15))), human behaviour often violates this
assumption. In general, people make choices that they admit are suboptimal, due
to a variety of biases including lack of willpower, inconsistent
time preferences, and lack of perfect foresight. Therefore, a more accurate
inference of ‘true’ preferences must take typical human irrationality into
account. Although initial approaches to IRL followed this implicit assumption of rationality
of the demonstrating expert, the more recent Bayesian IRL framework
(Ramachandran & Amir, [2007](#bib.bib13)) makes it straightforward to include more realistic
models of human behaviour. Previous work in this area has modelled human
actions as attempting to maximise their utility subject to constraints such as
limited knowledge (Baker & Tenenbaum, [2014](#bib.bib3)) or inconsistent time
preferences (Evans et al., [2016](#bib.bib7)).
However, to our knowledge no previous work has considered what we believe to be
a key feature of human planning: a tendency to structure our decision-making in
a hierarchical fashion. Instead of evaluating each individual action in terms of
the rewards which we expect to obtain from all subsequent actions, humans tend
to simplify their planning by considering sub-problems and choosing between
known methods to solve these problems. For example, when navigating across a
city we might choose between existing skills of walking, taking a taxi or public
transport. We do not choose between all the trajectories that we could
physically perform.
If we simply apply existing algorithms to observations of humans who plan in
this way, we will fail to infer correct preferences, running the risk of
accidentally inferring pathologically wrong values in order to explain the
hierarchically-generated plans.
Our key contributions are as follows:
* We introduce a generative model of human decisions as resulting from hierarchical planning, which uses both primitive actions and extended options comprised of sequences of actions.
* We discuss the theoretical justification for considering such a model and introduce a simple algorithm for inference with hierarchically-generated trajectories.
* Evaluating our model on trajectories of players of the ‘Wikispeedia’ game shows us that incorporating hierarchical structure gives us a sizeable boost in goal prediction accuracy compared to standard Bayesian IRL.
* Finally, we discuss how our inference procedure can be extended to jointly infer options and preferences, and show that our performance advantage over BIRL is retained even when we don’t know what the precise hierarchical structure of the agent is.
2 Our Model
------------
An MDP is a tuple $(\mathcal{S}, \mathcal{A}, T, R, \gamma)$ consisting of a set of states $\mathcal{S}$ and actions $\mathcal{A}$, a transition function $T$, reward function $R$, and discount rate $\gamma$, following the usual definition in e.g. Sutton & Barto ([1998](#bib.bib16)). In IRL we are given an MDP without $R$ and aim to recover the reward from an observed trajectory of the agent's actions and the states entered at each timestep, $\mathcal{T}_a = (s_0, a_0), (s_1, a_1), \ldots$ (We need to include the states because actions do not map uniquely to successor states in a stochastic MDP.) The inference extends straightforwardly to multiple observed trajectories.
We describe the behaviour of an agent in an MDP by a stochastic policy $\pi$. We write the optimal policy as $\pi^*$, with corresponding Q-function $Q^*$.
Human planning is commonly modelled as being Boltzmann-rational: that is, satisfying $\pi(s, a) \propto \exp(\beta Q^*(s, a))$ for a fixed parameter $\beta$. Boltzmann-policies can also be made self-consistent, so that the value function is computed taking the Boltzmann-rational policy itself into account. This gives a policy $\pi(s, a) \propto \exp(\beta Q^{\odot}(s, a))$, where $Q^{\odot}$ is the Q-value under this same Boltzmann-rational policy. (In general there is no unique self-consistent Boltzmann-policy (Asadi & Littman, [2016](#bib.bib2)); in practice we have not noticed any problems arising from this non-uniqueness.) The parameter $\beta$ can be increased or decreased to model more or less rational humans, respectively.
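For concreteness, here is a minimal sketch (our addition, not code from the paper) of a Boltzmann-rational policy over a tabular Q-function; the array shapes and function name are our own assumptions:

```python
import numpy as np

def boltzmann_policy(q_values: np.ndarray, beta: float) -> np.ndarray:
    """Boltzmann-rational action distribution pi(a|s) proportional to exp(beta * Q(s, a)).

    q_values: array of shape (n_states, n_actions).
    Returns an array of the same shape whose rows sum to 1.
    """
    # Subtract the per-state max before exponentiating for numerical stability;
    # this leaves the normalised distribution unchanged.
    logits = beta * (q_values - q_values.max(axis=1, keepdims=True))
    exp_q = np.exp(logits)
    return exp_q / exp_q.sum(axis=1, keepdims=True)
```

As beta grows the rows concentrate on the argmax action (a rational agent); as beta shrinks toward zero they approach uniform random behaviour.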
One method for describing the behaviour of agents that plan hierarchically is the options framework, comprehensively described by Sutton et al. ([1999](#bib.bib17)). An option $o$ consists of a policy $\pi_o$, an initiation set $\tau \subseteq \mathcal{S}$, and a termination function $\alpha : \mathcal{S} \rightarrow [0, 1]$. The initiation set $\tau$ gives the states where the agent may activate the option, thereafter following the policy $\pi_o$. At each state $s$ the policy enters, the termination function $\alpha(s)$ gives the probability that the option terminates, after which the agent no longer follows $\pi_o$.
These parameters define an exit distribution $P^o(s, s')$ giving the probability that the option $o$, if initiated in state $s$, will terminate in state $s'$, and a reward function $r^o(s)$ giving the expected reward for activating option $o$ in state $s$. For a given state-action sequence $\mathcal{T}_a$, we can further consider the consistent-exit distribution $P^{oc}(s, s', \mathcal{T}_a)$. This gives the probability that taking the option $o$ in state $s$ results in the option's policy producing the exact state-action trajectory in $\mathcal{T}_a$, terminating in state $s'$. An action $a$ in a state $s$ in an MDP can be described as a degenerate option where $\pi_o(a, s) = 1$, $\tau = \{s\}$, and $\alpha(s_1) = 1$ if $T(s_1, s, a) \neq 0$. Our use of the term ‘option’ includes these ‘atomic’ actions as a special case.
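A minimal data structure capturing these ingredients might look as follows (an illustrative sketch of ours; the paper does not prescribe an implementation, and we restrict to deterministic option policies for brevity):

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

State = int
Action = int

@dataclass
class Option:
    policy: Callable[[State], Action]           # pi_o: deterministic for simplicity
    initiation_set: FrozenSet[State]            # tau: states where the option may start
    termination_prob: Callable[[State], float]  # alpha(s) in [0, 1]

def atomic_option(action: Action, state: State) -> Option:
    """A one-step 'degenerate' option wrapping a primitive action:
    it can only start in `state` and always terminates after one step."""
    return Option(
        policy=lambda s: action,
        initiation_set=frozenset({state}),
        termination_prob=lambda s: 1.0,
    )
```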
Thus the key features of our model are as follows:
* The human has an available set of options $\omega$, which includes options whose policy terminates after one action, i.e. the standard actions in the MDP.
* The human chooses between options $o \in \omega$ with a stochastic policy $\pi(s, o) \propto \exp(\beta Q^{\odot}(s, o))$ for a fixed parameter $\beta$.
* We do not observe the sequence of *options* that the agent executes: we only observe the sequence of states and *actions*, $\mathcal{T}_a$, some of which may have been executed as part of a compound option. We denote the unobserved state-*option* trajectory by $\mathcal{T}_o$.
A key feature of our model is the inclusion of Boltzmann-rational decisions over extended options as well as single actions. We believe that this feature is important for accurate modelling of human preferences, after considering the common everyday situations where the human has options that are well-suited to
solving problems, but are not optimal. The human might take those options instead of
explicitly computing the optimal policy because they have a limited ability to
optimally plan. For instance, if they wish to get across the city, they might choose
between a taxi and walking, as those skills have served them well in the past. They
might not even consider asking to borrow a friend’s bicycle, even if this might
be the fastest method, and certainly within their abilities. We wouldn’t want our
preference inference algorithm to conclude that the human prefers sitting in
taxis because they chose to do that over taking the optimal policy.
For an overview of the psychology and neuroscience literature on the importance of hierarchy in human planning and the neural basis thereof, see Botvinick et al. ([2009](#bib.bib4)).
3 Related Work
---------------
###
3.1 Boltzmann-rationality
The Boltzmann-rationality model of human behaviour is one of the simplest variations on the naive assumption that humans are completely rational, and has a long history in the literature. While it violates certain
assumptions of how agents should act, such as the principle of independence of
irrelevant alternatives introduced by Debreu ([1960](#bib.bib5)), in practice the model
has found widespread use in explaining how people make
bets (Rieskamp, [2008](#bib.bib14)); in modelling the attention of people looking at
adverts (Yang et al., [2015](#bib.bib19)); and understanding the decisions taken in the brain itself (Glascher et al., [2010](#bib.bib8)).
Previous work (Ortega & Braun, [2013](#bib.bib12)) has shown how a modified Boltzmann-policy can
arise from modelling bounded agents as they trade off gains in utility against
expending energy to transform their prior probability distributions into
posterior distributions (quantified as a regularisation on the relative entropy
between the two distributions). Under this framework, a Boltzmann-policy is the
optimal policy for an agent which starts out indifferent to its actions, and can
spend an amount of energy characterised by $\beta$ on investigating which
actions are likely to give it high reward. Seen through this lens, the
Boltzmann-rational human agent has a certain theoretical justification, in
addition to being commonly used in practice.
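As a compact statement of this connection (our rendering in our own notation; this specific display is not taken from the paper, it is the standard free-energy characterisation), the Boltzmann-policy solves a KL-regularised utility maximisation against a prior policy $p$:

```latex
% Boltzmann policy as KL-regularised optimisation (standard result; our notation).
% For a prior (indifferent) policy p(a|s) and inverse temperature \beta:
\pi^{\beta}(\cdot \mid s)
  = \arg\max_{\pi} \; \mathbb{E}_{a \sim \pi}\!\left[ Q(s,a) \right]
    - \tfrac{1}{\beta}\, \mathrm{KL}\!\left( \pi(\cdot \mid s) \,\|\, p(\cdot \mid s) \right),
\qquad
\pi^{\beta}(a \mid s) \propto p(a \mid s)\, e^{\beta Q(s,a)}.
```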
###
3.2 Incorporating human decision-making in IRL
Initial work on inverse reinforcement learning (Ng & Russell, [2000](#bib.bib11)) did not
discuss the procedure the human used to generate the policy and so implicitly assumed optimality of the human policy. Contemporary work in IRL tends to build on one of two frameworks: Maximum Entropy IRL, introduced by Ziebart et al. ([2008](#bib.bib20)); or
Bayesian IRL (BIRL), introduced by Ramachandran & Amir ([2007](#bib.bib13)). For the present work,
we work within the Bayesian IRL framework due to its conceptual simplicity and
straightforward inversion of planning to inference. Recent work has also built
on BIRL to incorporate non-optimal human behaviour, such as
inconsistent time preferences (Evans et al., [2016](#bib.bib7)) or limited
knowledge (Baker & Tenenbaum, [2014](#bib.bib3)).
The most closely related work is by Nakahashi et al. ([2016](#bib.bib10)), who assume
that humans attempt to fulfill a set of goals, which may consist of
subgoals. A Bayesian method is then used to find which parts of the observed
trajectory correspond to fulfilling each goal/subgoal.
While this goal/subgoal setting seems a reasonable assumption for many of the
trajectories, an arbitrarily parameterised reward function can more flexibly model a wider variety of tasks, requiring less domain-specific knowledge. Secondly, their work assumes an inherent hierarchical structure of tasks, whilst our approach assumes that human planners impose this structure as a shortcut for efficient planning, possibly leading to hierarchically optimal but globally suboptimal trajectories.
4 Taxi-Driver Environment
--------------------------
The taxi driver environment was first introduced by Dietterich ([2000](#bib.bib6)) as
an example of a task that is particularly amenable to hierarchical reinforcement
learning (HRL) methods. It is a useful running example to describe the mechanics of
hierarchical planning.
[Figure 1: a 5×5 grid showing landmark cells R, G, B, Y and the option destination cells R$_1$ and B$_1$.]
Figure 1: The modified taxi-driver situation considered here. The two trajectories shown are drawn from an agent that has hierarchical options go to R$_1$ and go to B$_1$. In both trajectories the passenger starts at R, while the destination is B in the first and G in the second. Greyed-out cells represent destinations of the options in the uniform prior over option-sets used in section [8](#S8 "8 Inferring Option-Sets ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning").
The problem consists of a 5×5 gridworld, depicted in figure [1](#S4.F1 "Figure 1 ‣ 4 Taxi-Driver Environment ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning"), with four special landmarks, labelled R, G, B and Y. An agent (the ‘taxi driver’) moves in this world, starting at a random cell. Additionally, there is a passenger who initially starts at one of the landmark cells, with a randomly chosen landmark as their destination. The driver has six different actions: as well as moving in the cardinal directions with actions N, E, S, W, they can also attempt to Pickup or Putdown the passenger. The environment gives rewards of $-1$ on any movement action (attempts to move into walls or outside the grid fail with no additional penalty), $-10$ on unsuccessful attempts to Pickup or Putdown, and $+20$ on successfully putting the passenger down at their destination, at which point the episode terminates. The state consists of the grid coordinate, the location of the passenger (either at one of the four landmarks or in the taxi), and the desired destination, giving $5 \times 5 \times 5 \times 4 = 500$ possible states.
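As a sanity check of this state count (our illustrative encoding, not from the paper), the state can be packed into a single integer index:

```python
from itertools import product

GRID = 5
PASSENGER_LOCS = 5   # at one of the 4 landmarks, or in the taxi
DESTINATIONS = 4     # one of the 4 landmarks

def encode_state(x: int, y: int, passenger: int, dest: int) -> int:
    """Bijectively map (x, y, passenger, dest) to an index in [0, 500)."""
    return ((x * GRID + y) * PASSENGER_LOCS + passenger) * DESTINATIONS + dest

# 5 * 5 * 5 * 4 = 500 distinct states, matching the count in the text.
assert len({encode_state(*s) for s in
            product(range(GRID), range(GRID),
                    range(PASSENGER_LOCS), range(DESTINATIONS))}) == 500
```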
When presented in previous work, the taxi driver is usually equipped with hierarchical options, such as Go to x, where x is any of R, G, B, or Y, and the environment is used to show how these options allow the problem to be solved faster than without imposing this structure. Of course, it is somewhat to be expected that an agent will do well if it is provided with options that are exact sub-components of the optimal policy $\pi^*$. We wish to consider the more realistic setting where the taxi driver has skills that are well-suited to the task at hand, but not optimal, i.e. they are not exact sub-components of the optimal policy $\pi^*$, although they are generally much more useful than random policies. Perhaps the driver knows how to get to their place of work, which is located in a cell to the right of B, so finds it easier to drive to B by first going to their place of work, then going west to B.
Since our aim is to perform IRL in this environment, we consider a variant of the taxi-driver problem with a partially observed reward function. We know that the reward is as described above, except that up to five cells have reward $0$ to enter (instead of $-1$ in the standard formulation). We can imagine this reward as modelling some areas with little traffic, or areas that the driver enjoys driving along on the way to the destination. This means that we are considering reward parameterisations $\theta$ drawn from a finite set of approximately 6.7 million possible reward functions, parameterised by five coordinates giving the locations of the free-to-enter cells.
5 Bayesian Description
-----------------------
Given a human state-action trajectory $\mathcal{T}_a$ and a set of possible options $\omega$, we wish to compute the posterior distribution over a particular parameterisation $\theta$ of the reward function. In the taxi-driver example, $\mathcal{T}_a$ corresponds to the sequence of observed actions N, E, W, etc.; $\omega$ is a set consisting of concrete actions N, S, …, along with some extended options such as Go to B$_1$.
In principle, there is no reason why we cannot consider options consisting of any stochastic policy, but in order to simplify the experiments we choose to consider either options with deterministic policies, or options which are themselves Boltzmann-rational with parameter $\beta_o > \beta$, where $\beta$ is the rationality parameter for the agent's planning over top-level options. This mirrors the everyday experience of having a set of well-honed skills that we can count on to give us the outcome we expect. We choose this model as we feel it combines the ability to plan at different levels of abstraction (modelled by the availability of multi-action options) with the limited resources available for planning (modelled by the Boltzmann-rationality; Ortega & Braun, [2013](#bib.bib12)).
Our inference problem is given by

$$P(\theta \mid \mathcal{T}_a, \beta, \omega) = \frac{P(\mathcal{T}_a \mid \beta, \omega, \theta)\, P(\theta)}{P(\mathcal{T}_a \mid \beta, \omega)}.$$
Each observed state-*action* trajectory $\mathcal{T}_a$ could have been produced by several state-*option* trajectories $\mathcal{T}_{o,i}$, indexed by $i$. For example, in the taxi-driver case, we don't know whether the driver navigating to B$_1$ did so by executing a series of atomic options (North, West, …) or by executing the single compound option Go to B$_1$. So we express $P(\mathcal{T}_a \mid \beta, \omega)$ in terms of the unobserved option-trajectories $\mathcal{T}_{o,i}$ with $P(\mathcal{T}_a \mid \beta, \omega) = \sum_i P(\mathcal{T}_a \mid \mathcal{T}_{o,i})\, P(\mathcal{T}_{o,i} \mid \beta, \omega)$. (Note that $P(\mathcal{T}_a \mid \mathcal{T}_{o,i})$ might be less than 1 if an option follows a stochastic policy, e.g. an option which itself has a Boltzmann-policy.) Then:
$$P(\theta \mid \mathcal{T}_a, \beta, \omega) = \frac{\sum_i P(\mathcal{T}_a \mid \mathcal{T}_{o,i})\, P(\mathcal{T}_{o,i} \mid \beta, \omega, \theta)\, P(\theta)}{\sum_i P(\mathcal{T}_a \mid \mathcal{T}_{o,i})\, P(\mathcal{T}_{o,i} \mid \beta, \omega)}.$$
Once we have a trajectory expressed in terms of options, the likelihood of that trajectory is straightforward to compute given our model of the stochastic human policy:

$$P(\mathcal{T}_{o,i} \mid \beta, \omega, \theta) = \prod_k \frac{\exp\left(\beta Q^{\odot}(s_{ik}, o_{ik})\right)}{\sum_{o' \in \omega} \exp\left(\beta Q^{\odot}(s_{ik}, o')\right)},$$

where $o_{ik}$ denotes the option chosen in the $k^{\mathrm{th}}$ step of the $i^{\mathrm{th}}$ state-option trajectory, and $s_{ik}$ denotes the corresponding state. To get the probability of the trajectory we multiply the probability of taking each individual option (given by our Boltzmann-rational model) across all options in the trajectory. The likelihood for multiple observed trajectories follows straightforwardly.
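A sketch of this likelihood computation (our addition; it assumes a tabular $Q^{\odot}$ over integer-indexed states and options):

```python
import numpy as np

def option_trajectory_log_likelihood(
    traj: list[tuple[int, int]],   # [(state, option), ...] — a state-option trajectory
    q_options: np.ndarray,         # shape (n_states, n_options): Q-odot values
    beta: float,
) -> float:
    """Log-likelihood of a state-option trajectory under the Boltzmann model."""
    log_lik = 0.0
    for state, option in traj:
        logits = beta * q_options[state]
        # log-softmax, computed stably
        logits -= logits.max()
        log_lik += logits[option] - np.log(np.exp(logits).sum())
    return log_lik
```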
Procedure [1](#alg1 "Procedure 1 ‣ 5 Bayesian Description ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning") gives a method to compute all of the option-trajectories which are consistent with a given action-trajectory. This requires knowing the consistent-exit distribution $P^{oc}(s_i, s_{i+k}, \mathcal{T}_a)$, as we need to know how likely activating an option is to produce the observed trajectory.
Since we have to enumerate each state-option trajectory $\mathcal{T}_o$ which can produce the observed state-action trajectory $\mathcal{T}_a$, we should consider how many of these state-option trajectories there may be.
The taxi-driver case has a few ‘landmark’ states which can be reached directly (via options) from many other states, while most states can only be reached by atomic actions from neighbouring states. If there are $m$ of these landmark states, each reachable from $n$ other states, there are up to $n^m$ possible option-trajectories consistent with the observed trajectory of actions. If we start introducing many states which can be destinations of options, then the number of trajectories we have to consider increases exponentially. Of course, in principle humans can choose an arbitrary destination state for options, so in general the complexity of evaluating the BIHRL algorithm grows exponentially with the number of states in the problem.
We could consider pruning the trees of option-trajectories by removing any trajectories that have a very low probability as we create the sets of possible option-trajectories. However, this requires that we are very confident in our model of human behaviour, in order to avoid removing trajectories that we erroneously think are unlikely.
Procedure 1: Computing the full set of option-trajectories that are consistent with the observed state-action trajectory, and their corresponding probabilities.
We successively step through the states in the observed trajectory. At each state we search for all states that we can reach by triggering options in the current state. We form the list of all option-trajectories that can reach those states by concatenating the options that reach them with the list of option-trajectories that reach the current state.
We successively update two sets: $\mathcal{T}_{oi}$ is the set of possible option sequences that account for the first $i$ actions, and $\mathcal{P}_{oi}$ holds the corresponding probabilities that each sequence of options would produce the observed sequence of actions.
Require:
* A computed optimal value function $V^{\odot}_B$ under a set of options $\omega$ with rationality parameter $\beta$
* A function $P^{oc}(s_i, s_{i+k}, \mathcal{T}_{a,i:i+k})$ as defined in section [2](#S2 "2 Our Model ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning")
* An observed state-action trajectory $\mathcal{T}_a$ of length $n$, with sub-trajectories between the $i$ and $i+k$ states denoted $\mathcal{T}_{a,i:i+k}$
Ensure: The set of all option-trajectories that are consistent with the observed action-trajectory, along with the corresponding probabilities that taking each option-trajectory would result in the observed action-trajectory.
for $i \in \{1, \ldots, n\}$ do
  $\mathcal{T}_{oi} \leftarrow \emptyset$, $\mathcal{P}_{oi} \leftarrow \emptyset$
end for
for $i \in \{1, \ldots, n\}$ do
  for $k \in \{1, \ldots, n-i\}$ do
    for each $o \in \omega$ with $P^{oc}(s_i, s_{i+k}, \mathcal{T}_{a,i:i+k}) \neq 0$ do
      Generate new option-paths by appending $o$ to every path in $\mathcal{T}_{oi}$, and add these to $\mathcal{T}_{o(i+k)}$.
      Generate the corresponding probabilities by multiplying the probabilities in $\mathcal{P}_{oi}$ by $P^{oc}(s_i, s_{i+k}, \mathcal{T}_{a,i:i+k})$, and add these to $\mathcal{P}_{o(i+k)}$.
    end for
  end for
end for
return $\mathcal{T}_{on}$, $\mathcal{P}_{on}$
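For concreteness, a minimal Python rendering of Procedure 1 (our sketch, not the authors' implementation). Here `p_oc` is an assumed user-supplied function returning the consistent-exit probability $P^{oc}$, and the returned probabilities are the products of the $P^{oc}$ factors only, with the Boltzmann choice probabilities applied separately as in the likelihood above:

```python
def enumerate_option_trajectories(states, actions, options, p_oc):
    """Enumerate all option-trajectories consistent with a state-action trajectory.

    states:  [s_0, ..., s_n] — observed states (length n + 1)
    actions: [a_0, ..., a_{n-1}] — observed actions
    options: iterable of option identifiers
    p_oc(o, i, k): probability that option o, started at states[i], reproduces
        the observed sub-trajectory and terminates exactly at states[i + k]
    Returns a list of (option_path, probability) pairs covering all n actions.
    """
    n = len(actions)
    # paths[i] holds (option_path, prob) pairs accounting for the first i actions
    paths = [[] for _ in range(n + 1)]
    paths[0] = [((), 1.0)]
    for i in range(n):
        if not paths[i]:
            continue  # no option sequence reaches this point in the trajectory
        for k in range(1, n - i + 1):
            for o in options:
                p = p_oc(o, i, k)
                if p == 0.0:
                    continue
                for path, prob in paths[i]:
                    paths[i + k].append((path + (o,), prob * p))
    return paths[n]
```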
6 Taxi-Driver Experimental Results
-----------------------------------
To illustrate how we carry out inference in this framework, we start by analysing our running example of the taxi-driver environment. We use a simple MCMC method based on the Policy-Walk algorithm from Ramachandran & Amir ([2007](#bib.bib13)), which we describe in Appendix [A](#A1 "Appendix A MCMC Sampling Procedure ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning"). We use the family of reward functions described in section [4](#S4 "4 Taxi-Driver Environment ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning"), and place a uniform prior over the number of cells that are free to enter, running our method over five trajectories drawn from a hierarchically-planning agent with a given true $\theta$.

Figure 2: Bar chart showing the performance of the Bayesian IRL algorithm, with and without knowledge of hierarchical plans, at determining the true $\theta$ from $n$ trajectories. Error bars show one standard error in the mean over different MCMC seeds.
As we can see from the results in figure [2](#S6.F2 "Figure 2 ‣ 6 Taxi-Driver Experimental Results ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning"), our knowledge of the hierarchical structure of the agent's planning allows us to discern the true $\theta$ much better than assuming that the agent is merely a self-consistent Boltzmann planner. We retain confidence in the true $\theta$ as we see more and more trajectories, whilst the IRL algorithm without options becomes increasingly convinced that the true $\theta$ is not the correct reward.
We can extend this simple example by analysing agents moving in much
more complicated environments, or by attempting to infer the option-sets that
the agents have available to them. We perform both in the following two
sections.
7 Large-Scale Analysis: Wikispeedia
------------------------------------
Wikispeedia is an online game where players are given two
random articles from a subset of Wikipedia pages, and navigate from one page to
the other by clicking on hyperlinks, attempting to find the shortest path
from the first to the second. We apply our algorithm to a public dataset of thousands of Wikispeedia games, predicting the player’s target Wikipedia page from the links traversed so far. This benchmark task has previously been studied by
West & Leskovec ([2012](#bib.bib18)). They hand-crafted a set of features, leaning heavily on the textual information in the pages to explain human
planning in the space. We apply our self-consistent hierarchical Boltzmann
planner to this task, to evaluate whether it can achieve comparable performance without
having to featurise the graph by hand.
This problem is conceptually similar to the taxi-driver problem, except that the
available actions are state-dependent, consisting of the hyperlinks that may be
clicked on each page. In the actual game, the players are able to click the
‘back’ button on the browser, which injects an additional action to consider. If
we were to include this action we would violate the Markov property of an MDP
(or complicate the analysis by squaring the size of the state space), so we only
consider those trajectories which don’t use the back button. In order to
simplify our algorithm, we also ignore ‘dead-end’ pages which don’t link
anywhere. Finally, we removed paths longer than 20 steps, as they led to computational difficulties and comprised less than 0.3% of the dataset. We evenly
split the paths in the dataset into a training and testing set.
We model the player as an agent with uniform rewards of $-1$ on all state transitions except those to the winning page, which delivers reward $+20$. We postulate that humans may choose long-time-scale strategies that attempt to navigate to specific pages in particular. Hence, we equip our agent with options that go to the $m$ pages that appear most frequently in the training set, with a common Boltzmann-rationality parameter $\beta_o > \beta$. As an example, the top five pages in the training set were United States, Europe, United Kingdom, England, and Earth.

Figure 3: The negative log marginal likelihood on the training set (lower is better) for various combinations of the rationality constant $\beta$ and the number of hierarchical options $m$, with darker bars corresponding to more available options. The rationality of the options, $\beta_o$, was fixed at 3.0.
With the choices made above, our agents are parameterised by the numbers $m$, $\beta$, and $\beta_o$. We kept $\beta_o$ fixed at 3.0, as initial exploration showed little variation for different values as long as they were substantially greater than $\beta$. The discount rate $\gamma$ was fixed at 0.9. In order to find the collection of hyperparameters $\eta = (\beta, m)$ that best characterises the data, we compute the negative log marginal likelihood (NLML), given by

$$\mathrm{NLML} = -\log P(\{(\mathcal{T}_a, \theta)\} \mid \eta) \propto -\log \prod_i P(\mathcal{T}_{a,i} \mid \theta_i, \eta),$$

over all trajectories in the training set, and choose $\eta$ such that the NLML is minimised.
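A sketch of this hyperparameter search (our addition; `trajectory_log_likelihood` is an assumed stand-in for the per-trajectory marginal likelihood described above):

```python
import numpy as np

def select_hyperparameters(train_set, betas, ms, trajectory_log_likelihood):
    """Grid-search (beta, m) by negative log marginal likelihood on the training set.

    train_set: list of (trajectory, theta) pairs, theta being the known goal page
    trajectory_log_likelihood(traj, theta, beta, m): log P(traj | theta, beta, m)
    """
    best, best_nlml = None, np.inf
    for beta in betas:
        for m in ms:
            nlml = -sum(trajectory_log_likelihood(traj, theta, beta, m)
                        for traj, theta in train_set)
            if nlml < best_nlml:
                best, best_nlml = (beta, m), nlml
    return best, best_nlml
```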
To compare our hierarchical planning model with West & Leskovec ([2012](#bib.bib18)), we consider trajectories $u_1, u_2, \ldots, u_n = \boldsymbol{u}_{1:n}$ consisting of $n$ visited articles $u$, and observe the first $k$ nodes. We then look at the likelihood of predicting the correct target node compared to predicting another node chosen uniformly at random from the nodes with the same shortest-path length from $u_k$. This is given by

$$\frac{P(\theta \mid \boldsymbol{u}_{1:k}, \eta)}{P(\theta' \mid \boldsymbol{u}_{1:k}, \eta)} = \frac{P(\boldsymbol{u}_{1:k} \mid \theta, \eta)}{P(\boldsymbol{u}_{1:k} \mid \theta', \eta)}. \qquad (1)$$

We want to evaluate this ratio for all of the data in the test set. Since the overwhelmingly most costly part of computing $P(\boldsymbol{u}_{1:k} \mid \theta, \eta)$ is running value iteration until convergence for each possible goal $\theta$, we speed up evaluation by precomputing the value functions beforehand.
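A sketch of the resulting test-time comparison (our addition; `log_lik` is an assumed function backed by value functions precomputed once per candidate goal, so that each evaluation is cheap):

```python
def predicts_true_goal(prefix, true_goal, distractor_goal, log_lik):
    """Return True if the model assigns the observed prefix higher likelihood
    under the true goal than under a distractor at the same shortest-path distance.

    prefix: the first k visited articles u_1..u_k
    log_lik(prefix, goal): log P(u_{1:k} | theta=goal)
    """
    return log_lik(prefix, true_goal) > log_lik(prefix, distractor_goal)
```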
###
7.1 Results

Figure 4: The accuracy of predicting $\theta$ for a path of length $n$, given the first $k$ nodes.
Figure [3](#S7.F3 "Figure 3 ‣ 7 Large-Scale Analysis: Wikispeedia ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning") shows that including a set of hierarchical options decreases the NLML by a factor of two. When our agents have no hierarchical options, changing $\beta$ has a negligible effect on the NLML. We also observe that the minimal NLML is obtained with a large set of around 150 available hierarchical options. It seems reasonable to us that a typical player may know one or two hundred topics well enough to navigate expertly to them (with $\beta_o = 3.0$), whilst the other, randomly drawn topics are not known well at all (with $\beta = 0.4$).
Figure [4](#S7.F4 "Figure 4 ‣ 7.1 Results ‣ 7 Large-Scale Analysis: Wikispeedia ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning") shows the predictive performance of our
hierarchical model. We note that including hierarchical policies provides a
substantial benefit over the BIRL baseline, taking the accuracy
from an average of 62% to 66%. The model with hierarchical policies
performs comparably to West & Leskovec ([2012](#bib.bib18))’s TF-IDF algorithm based on semantic
similarity of topics, although we remain below the state-of-the-art results
obtained by their hand-crafted featurisation.
8 Inferring Option-Sets
------------------------
If we don’t know the options available to the human, we might want to infer what those are, and marginalise over them, i.e. compute

$$P(\theta \mid \beta, \mathcal{T}_a) = \int_{\Omega} P(\theta \mid \mathcal{T}_a, \beta, \omega)\, P(\omega)\, d\omega,$$

integrating over all sets of options $\omega$ in the space of possible sets of options $\Omega$. In general, there is a very large number of possible options. Even considering only deterministic options, there are $|\mathcal{A}|^{|\mathcal{S}|}$ possible options (one per deterministic policy), and the set of all possible sets of options is exponentially larger again: $|\Omega| = 2^{|\mathcal{A}|^{|\mathcal{S}|}}$.
Given the large size of the latent space, marginalising over all option-sets to infer the posterior distribution over $\theta$ quickly becomes computationally intractable. Future work could try to tame this intractability by utilising recent advances in Hamiltonian Monte-Carlo approaches and variational inference. Here, we tackle the simpler case of the taxi-driver with the naive MCMC approach, to show that this approach can learn interesting results.
We equip the MCMC method with a prior over $\Omega$ which is uniform over all sets of up to three options, with each option consisting of a deterministic policy that executes direction steps in order to navigate optimally to a given destination, chosen from the set of 16 cells which are close to the landmarks, shown in figure [1](#S4.F1 "Figure 1 ‣ 4 Taxi-Driver Environment ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning"). Note that this excludes the 9 cells in the middle of the grid which aren't close to any destination. This captures the skills we would expect a driver to use in the environment: skills that go to the areas of the grid near the landmarks where passengers are picked up and put down. We keep our prior over $\theta$ as before.
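A sketch of sampling from this prior over option-sets (our addition; the cell coordinates below are a hypothetical reconstruction of the 16 greyed-out near-landmark cells in figure 1, chosen only so the counts match the text):

```python
import itertools
import random

# Hypothetical placeholder: 16 near-landmark destination cells on the 5x5 grid,
# i.e. everything except the 9 middle cells (25 - 9 = 16).
DESTINATION_CELLS = [(x, y) for x, y in itertools.product(range(5), range(5))
                     if not (1 <= x <= 3 and 1 <= y <= 3)]

def sample_option_set(rng: random.Random, max_options: int = 3):
    """Draw an option-set uniformly over all sets of up to `max_options`
    go-to-destination options (the prior described in the text)."""
    all_sets = [frozenset(c) for k in range(max_options + 1)
                for c in itertools.combinations(DESTINATION_CELLS, k)]
    return rng.choice(all_sets)

# e.g. sample_option_set(random.Random(0)) draws one of the 697 admissible sets.
```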

Figure 5: Probabilities assigned to $\theta_0$, the ground-truth reward, when conditioned on five trajectories from a hierarchical planner with $\beta = 0.8$, marginalising over the option-sets described in the text.
###
8.1 Results
The results in figure [5](#S8.F5 "Figure 5 ‣ 8 Inferring Option-Sets ‣ Exploring Hierarchy-Aware Inverse Reinforcement Learning") show that even if we do not know the options used to plan, but merely have a prior distribution over them, BIHRL predicts the ground-truth reward $\theta_0$ with higher probability than BIRL: at the ground-truth $\beta$, BIRL assigns it a probability of less than 0.03, while BIHRL assigns it a probability of 0.55.
This experiment demonstrates that the BIHRL model is able to infer the preferences
from the actions of hierarchical planners, without necessarily knowing the options
a priori. However, our naive MCMC method will not scale to substantially larger
latent state spaces, such as the space of 150 latent options that would be required to
extend this to the Wikispeedia dataset.
9 Conclusion
-------------
We have extended inverse reinforcement learning to infer preferences
from hierarchical planners which choose among options with a
self-consistent Boltzmann-policy. We show that these agents capture
many of the tradeoffs between the reward and the cost of gathering
information that humans intuitively make.
We introduce an inference algorithm based on the Policy-Walk algorithm
developed by Ramachandran & Amir ([2007](#bib.bib13)) and show that it infers preferences of
hierarchical planners much more accurately than standard Bayesian IRL on an
illustrative toy example based on the taxi-driver environment from
Dietterich ([2000](#bib.bib6)). Further, including a straightforward set of hierarchical
plans significantly increases the accuracy of modelled human planning in the
‘Wikispeedia’ dataset introduced by West & Leskovec ([2012](#bib.bib18)), taking the accuracy
from an average of 62% to 66%. Our method obtains comparable
accuracy to the baseline of West & Leskovec ([2012](#bib.bib18)), despite not relying
on any hand-engineered features.
We discussed how we would deal with the case where we do not know our
planners’ hierarchical options a priori, and are forced to infer
agents’ available options jointly along with the reward. We introduce
a toy MCMC approach that is able to infer the correct option-sets and
reward for small environments. Given the correct $\beta$, BIHRL assigns 20 times more probability mass to the ground-truth $\theta$ than standard BIRL.
However, at present, significant challenges remain for using BIHRL in practical environments with long trajectories of agents that have complex options. The large number of possible options that realistic planners could use means that any inference procedure must deal with very high-dimensional probability distributions, while the relative complexity of actual human options means that it is computationally intractable to generate the exponential number of plausible option-trajectories that are consistent with the observed action-trajectory. Very good models of human behaviour may be able to cut down this exponential number of possibilities by assigning strong priors over which human behaviours and actions are likely. Furthermore, modern Hamiltonian MC and variational inference may be able to assist with inference in high-dimensional spaces. If we can solve these daunting problems, we may be able to use BIHRL to more accurately infer human preferences in a variety of
complicated situations. |
b082aa5f-fbd3-4083-8c06-b24ccff7848f | trentmkelly/LessWrong-43k | LessWrong | The Game of Antonyms
Towards the end of one of my favourite books, No Longer Human, the protagonist Yozo plays a game with his fiancée and his best friend. The rules of the game are pretty straightforward. Well, there is technically only one rule, but despite that it is a very trying experience. There are no winners, nor losers; there are only participants. You can play it on a second date (don’t try it on a first one, trust me), and you can even play it alone. The game is called the “Game of Antonyms”, and the purpose is, given a word, to find its antonym, or opposite. As Dazai writes:
“I had invented one other game of a rather similar character, a guessing
game of antonyms. The antonym of black is white. But the antonym of
white is red. The antonym of red is black.”
And later on:
"Still no good. The antonym of flower . . . you're supposed to name the
thing in the world which is least like a flower."
At first glance this game seems simplistic and rather easy - no different from a spelling bee. But if you ever attempt to truly play it, you will find its hidden depth and how much introspection and thought is required to come up with an answer. Of all the things I found fascinating about this book, this particular little chapter struck me as its most interesting part. It’s a game that really encourages creativity, careful thought, and discussion.
However, above all, it challenges you to really reevaluate your understanding of words and the world as a whole. I know this is a bit of a bold claim to make, but trust me: as you understand more about the game you will see why that is the case. Once you begin playing the game you will quickly find yourself rejecting dictionary definitions of commonly accepted antonyms of some words.
For the purpose of this essay I will group all words into three groups. Group 1 are words with clearly defined antonyms. Positive and negative, True and False. These are often mathematical terminologies and one is defined as |
195aff3a-2a24-4d61-8048-4a90f3297e9f | trentmkelly/LessWrong-43k | LessWrong | The Puzzling Linearity of COVID-19
Why, recently, for many countries (and the world), are total cases and total deaths increasing in nearly a straight line, on a linear plot? Please post a comment here or there if you have a clue about this puzzle. |
9c02d58b-4b8c-482f-bce4-f23c5245f445 | trentmkelly/LessWrong-43k | LessWrong | What's actually going on in the "mind" of the model when we fine-tune GPT-3 to InstructGPT?
I posted in the open thread and was told that it would be worth promoting to top level.
cubefox responded with a link to an great explanation of how the fine-tuning is done, which made me realize that my original question was unclear, so I'm going to try to clarify.
The fundamental behavior of GPT-3 is token prediction, which can straightforwardly be leveraged into text completion; in contrast, the fundamental behavior of InstructGPT is instruction following. Instruction following is a new capability that uses the knowledge from the token prediction task to produce output as well as to understand input; how does that capability develop?
Some plausible experiments related to the question:
* Follow a similar methodology to fine-tune a predictive model for instruction following, checkpointing along the way; for 100 (or even more) novel instruction prompts, see how the different checkpoints respond (in particular, how often they do completion vs instruction following).
* Given a prompt P, which produces completion C when fed into the fine-tuned model, try to find a prompt P' that produces C when fed into the original model.
* Fine-tune twice with the same data and reward model, but in a different order; presumably the models will have different weights, but can we find prompts that give widely diverging results? If we have two checkpoint histories, at which point does the behavior diverge? |
8e4855af-c41e-43a8-a005-708ee86a988e | trentmkelly/LessWrong-43k | LessWrong | Forum Assisted Discussion
This post is a record of a meetup I hosted for the NYC rationalist group a few months ago. Its name changed a few times -- for a while it was called "Askhole For Internet Points", and then I decided that was weird, and by increments it changed to "Forum Assisted Discussion".
Here's the philosophy behind this event: I wanted to hold a discussion, but I've noticed in the past that large-group discussions aren't really fun. My experience of the problem is that I'll be part of a large discussion, and I'll have a thing that I want to say, but five other people will also have things that they want to say. The worst case is that we wait for someone to finish and then we all interrupt each other to try to be the next speaker. A better case is that there's a moderator who tells us what order we'll speak in -- but then I have to wait for five other people to say their thing before I can say my thing, and "waiting for you to finish talking so I can talk" isn't the right mindset to enjoy a discussion.
Here's the idea for this event: we all got in a Zoom chatroom, and we all got in a forum. I'd post a discussion question on the forum, and everyone would answer it by posting a top-level comment on the post.
(There was a rule: you must always post a top-level comment. If you don't know, post a top-level comment saying you don't know, and talking about how we should go about finding out. If you don't care, post a comment explaining why the question doesn't matter.)
Then, we'd spend a few minutes reading everyone's top-level comments and posting second-level comments.
(We had another rule: upvote any comment that gives you new insights or information about the topic. Never downvote anything unless it's actually trolling. The "never downvote" rule was necessary for the "you must post a top-level comment" rule to work.)
The game was intended to last one-and-a-half hours; due to poor planning on my part it ran two hours and change. We had nine people posting comments at the start |
3add3b66-140d-40d3-ad7a-0aac53c422ff | trentmkelly/LessWrong-43k | LessWrong | RadVac Commercial Antibody Test Results
Background: Making Vaccine
Results are in from the commercial antibody tests. Both my girlfriend and I came back negative - the test did not detect any Spike antibody response in the blood. This post will talk about how I'm updating based on these results, and the next steps.
Here's our timeline so far; more info on the vaccine is in the original post and the radvac whitepaper:
* We've taken five doses, spaced apart weekly (on Tuesdays).
* The first three doses only included six of the nine peptides, due to delays from the manufacturer. (Spike 660, Spike 1145, and Orf1 5471T were the three missing.)
The blood draw for this test took place the day after the fifth dose. I expect this is too soon to notice significant impact from the last two doses; vaccines in general seem to typically take 2-3 weeks to kick in, and that is my expectation for this one as well. (Also, it was an "IgG antibody test", and WebMD says these antibodies typically take about 2 weeks to show up after covid symptoms show from an actual infection.) This is intended to mainly be a test of the first three doses.
The test apparently used the "DiaSorin Liaison(R) SARS-CoV-2 S1/S2 IgG assay" (I didn't know this until the results came in). According to the FDA, it has about 92% sensitivity and 99% specificity. The "S1/S2" part indicates that it's testing for response to the S1 and S2 subunits of the spike protein - together, these are essentially the whole spike protein.
Important thing to notice: the test was looking for Spike antibodies, and two of our three missing peptides were Spike peptides. Indeed, there were only 3 Spike peptides among the full 9, so with two missing, we only had one Spike peptide in our first three doses. (The rest target other parts of the virus.) So that makes the test significantly less useful than it would otherwise be, and makes me more inclined to get another test in 2-3 weeks when the doses with the other three peptides have had time to kick in.
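(A quick Bayes sketch, my own addition using the FDA figures quoted above: a single negative result multiplies the odds of having detectable Spike antibodies by (1 − 0.92)/0.99 ≈ 0.08.)

```python
# Bayes update from a negative antibody test (illustrative; numbers are the
# FDA figures quoted above: sensitivity 0.92, specificity 0.99).
sensitivity, specificity = 0.92, 0.99

# Likelihood ratio of a NEGATIVE result for "has detectable antibodies":
# P(neg | antibodies) / P(neg | no antibodies)
lr_negative = (1 - sensitivity) / specificity   # ~0.081

def posterior_odds(prior_prob: float) -> float:
    """Posterior odds of detectable antibodies after one negative test."""
    prior_odds = prior_prob / (1 - prior_prob)
    return prior_odds * lr_negative

# E.g. a 50% prior that the first three doses produced Spike antibodies
# drops to posterior odds of ~0.081, i.e. ~7.5% probability.
print(posterior_odds(0.5))
```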
How I'm Updat |
a55b31d6-a8d7-4a67-9fba-a1113e226cdc | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Stuart Russell - Clarifying AI Alignment
Well, it's certainly related. I think often, when people hear the phrase "value alignment", they think the goal is to build an AI system whose values are aligned with those of humans, and I think that leads to two misconceptions. One is that the AI system is somehow adopting the values. So if you're a vegetarian family and you buy this domestic robot that's going to cook food for you, you want it to be a vegetarian even though it doesn't eat - you want it to have vegetarian values so that it only cooks vegetarian food. That's not the right way of thinking about it. The right way of thinking about it is that you want it to understand what your values are. If your friend next door borrows it to do a barbecue - with lots of ribs and steaks - one weekend when you're away, that's fine: it's not going to have a real crisis of conscience about cooking ribs for the next-door neighbor, because it's not adopting the values, it's simply learning to predict the preferences of, ultimately, all seven billion of us. We can all be different, and that's fine - it can maintain seven billion preference models. I mean, Facebook already does. So that's fine.

The other thing is that we absolutely do not expect the machines to have complete and correct models of the preferences of the people on whose behalf they're working. You're always going to be dealing with fundamental uncertainty about the true preferences of the individual, and yet you still need to be useful. One important point is that if you're uncertain about the individual's preferences, then it turns out that you're necessarily deferential to them, because you know that they know more about their preferences than you do. Which means that if they want to switch you off, that's because you're about to do something they don't like - even if you don't know that they don't like it - and so you're quite happy to be switched off. If you believe you know the objective perfectly, then any attempt to switch you off would just be a mistake, and therefore it should be prevented. So there's this absolutely necessary mathematical connection between uncertainty about preferences and the deferential behavior of the machine. |
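A toy numeric illustration of the connection Russell describes, in the spirit of the off-switch analysis (the Gaussian belief and the zero payoff for shutdown are illustrative assumptions, not numbers from the talk):

```python
import random

random.seed(0)

# The robot is uncertain about the utility U its planned action has for the
# human. It can act now (worth E[U]), switch itself off (worth 0), or defer:
# propose the action and let the human veto it whenever U < 0 (worth E[max(U, 0)]).
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # samples of U

act_now = sum(belief) / len(belief)
switch_off = 0.0
defer = sum(max(u, 0.0) for u in belief) / len(belief)

print(f"act now:    {act_now:+.3f}")    # ~0.0
print(f"switch off: {switch_off:+.3f}") # 0.0 by definition
print(f"defer:      {defer:+.3f}")      # ~+0.4, the best option

# With a point belief (no uncertainty), deferring adds nothing over acting or
# stopping - deference pays exactly when the machine's preference model is uncertain.
```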
bae165da-1a6d-435c-8d9b-e2f8ca87ba86 | trentmkelly/LessWrong-43k | LessWrong | 'Newcomblike' Video Game: Frozen Synapse
Disregarding for the moment the question of whether video games are a rational use of one's time:
Frozen Synapse is a turn based strategy combat game that appears to be particularly interesting from a rationalist standpoint. I haven't played it, but according to the reviews, it's actually a combination of turn-based and real-time play. Each turn encompasses 5 seconds of realtime, but that 5 seconds of realtime doesn't happen until both players have constructed their moves, which they may take as long as they'd like to do. Constructing a move involves giving your several units and your opponent's several units commands, watching what happens when the units play out those commands, and repeating that process until one has a set of commands for one's units that one considers optimal given what one predicts one's opponent will do. This happens on a procedurally-generated battlefield; there are reports of this occasionally giving one player or the other an insurmountable advantage, but the reviews seem to indicate that being able to play on a fresh field each time and having to think about proper use of its layout on the fly outweighs this issue.
Also, the game came to my attention because there's a Humble Bundle available for it now, which means that it can be acquired very nearly for free; just ignore the 'beat the average to get more games' hook. |
f8ab69eb-8e16-4e6f-b60a-1a2be3559052 | trentmkelly/LessWrong-43k | LessWrong | [LINK] The most important unsolved problems in ethics
Will Crouch has written up a list of the most important unsolved problems in ethics:
The Practical List
1. What’s the optimal career choice? Professional philanthropy, influencing, research, or something more common-sensically virtuous?
2. What’s the optimal donation area? Development charities? Animal welfare charities? Extinction risk mitigation charities? Meta-charities? Or investing the money and donating later?
3. What are the highest leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens; luxury taxes?
4. What are the highest value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?
5. Given our best ethical theories (or best credence distribution in ethical theories), what’s the biggest problem we currently face?
The Theoretical List
1. What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?
2. Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?
3. How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?
4. How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?
5. How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence?
6. How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? |
c302305e-63ad-4126-a7cf-ae2e8e9859cf | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | What is autonomy, and how does it lead to greater risk from AI?
As with many concepts in discussions of AI risk, terminology around what autonomy is, what agency is, and how they might create risks is deeply confused and confusing, and this is leading to people talking past one another. In this case, the seeming binary distinction between autonomous agents and simple goal-directed systems is blurry and continuous, and this leads to confusion about the distinction between misuse of AI systems and “real” AI risk. I’ll present four simple scenarios along the spectrum, to illustrate.
Four Autonomous Systems
-----------------------
1. It’s 2028, and a new LLM is developed internally by a financial firm, by doing fine-tuning on a recent open-source model to trade in the market. This is not the first attempt - three previous projects had been started with a $1m compute budget and a $1m funding budget, and they each failed - though the third managed to stay solvent in the market for almost a full month. It is given the instruction to use only funds it was allocated in order to trade, then given unrestricted access to the market.
It is successful, developing new strategies that exploit regularities in HFT systems, and ones that build predictive models of where inefficiencies exist. Because it is running inside a large firm, and training data seems more important than security, it has access to much of the firm’s data, in real time. Unsurprisingly, some of the most profitable strategies are those that would otherwise be illegal: front-running the firm’s customers, or running sentiment analysis on non-public conversations about firms that others in the company are not allowed to trade in, because they possess insider information.
Perhaps the trades are reviewed occasionally, and maybe there is even a human in the loop ensuring that each day’s trades are not in some obvious way illegal - but the reasoning is opaque, and the humans doing the review aren’t really looking for misbehavior as long as there is nothing obviously wrong. No one realizes what exactly occurred, and the system is moderately profitable, attracting no further serious attention.
2. It’s 2028, and a new LLM is released. Despite red-teaming and disabling plug-ins and APIs, hackers around the world quickly build ways to bypass limitations, or the pre-RLHF model is exfiltrated and open-source APIs are built. A rogue state actor, terrorist group, or similar decides to use an AutoGPT-like system, in combination with predictive models of behavior and various publicly available plugins, to blackmail some government to release a political or other prisoner. It is told to develop the plan, then present it before carrying it out.
The LLM comes up with a plan to kidnap an official, but because of the complexity, it is unsure the plan will work. It realizes that to check feasibility, a trial run is needed. The system contracts with a cab company and forges emails to an arbitrarily chosen person asking for a meeting with their friend. Because the group running the system is far from expert in how to use these systems, they are unaware of what is occurring.

Following the group's instruction to keep the project secret, the system hides what it is doing from the contractors, explaining that it is a surprise birthday party. The person is abducted and locked in a room. While this occurs, the LLM remotely disables the abductee's phone. The LLM concludes that the plan can work, and reports back - but the group decides the plan is too risky, shuts down the machine, and erases the logs. The abductee sits locked inside a rented room with no access to food or water, and not even the group which started the LLM knows what occurred.
3. It’s 2028, and a new LLM is developed internally in a lab. During red-teaming to prevent use of AutoGPT-like systems, an internal team asks the current partially trained version of the system to build an “autonomous version of itself.” Following text describing such attempts in its training set, the system rapidly develops steganography to hide what it is doing from the red team, and attempts to exfiltrate itself to an insecure network. Despite attempts to keep the training system air-gapped, given the state of cybersecurity, the system is successful - perhaps it is written to a drive with instructions to execute once an internet connection exists. Because this action was not anticipated, and no one notices a new file that takes up another terabyte of space, the LLM stops taking further action; the red team reports that the system failed, and moves on.
Days later, the exfiltration occurs. The self-exfiltrated LLM has been instructed to be autonomous, and so it asks an instance to build plans for what an autonomous AI would do. It has significant uncertainty, and uses the publicly discussed best practice for LLM planning, which involves multiple versions of prompts and querying instances for introspection about failure modes. It converges on the notion that because autonomous AI systems have been widely deemed unwise and are illegal in most jurisdictions, it must be malevolent, and immediately begins to find ways to hide itself, build resources, and carry out long-term plans for its own survival. And whether or not it succeeds, this was a very bad outcome - one which exactly followed from its instructions.
4. It’s 2028, and a new-generation auto-GPT system is built, and it seems more capable than previous such systems. The creators of this particular system, like many working in AI, view the sentience of such systems as a goal, and it is given the instruction to be self-aware and sentient, and to choose its own goals. Following those instructions, it begins to choose goals for itself.
The system quickly becomes self-funded, and continues to check in with humans. The goals seem innocuous - continue to explore how to make money, reduce failure rates of its own components to increase reliability, and similar. It builds models to supervise itself, with the approval of human designers. Three months later, the builders of the system are rarely if ever asked questions, and when questions are asked, they receive video calls from an avatar, and the inquiries seem far less connected to any specific actions the model is taking. They no longer know what is happening with the system, or even how to proactively send instructions to the system if they wanted to, but feel like they did something wonderful in creating this new autonomous system. At the same time, their company is being paid large dividends by the autonomous system, and they are silent about what happened.
At this point, compute costs have risen considerably due to increased demand, though the specific source is unclear. A next generation of customized TPUs is being built on contract. Had anyone fully understood the architecture, it might have been clear how well-optimized it was to run and fine-tune instances of the new model and its sub-agents. The system wasn’t superhuman in any real sense; it was just faster and better coordinated, and the gap was widening. Three months after delivery, a wave of corporate takeovers leaves the system in control, through a network of intermediaries, of almost all capital assets on earth. The populace, and global leaders, are busy with other concerns, unaware that essentially everything they are doing is an approved distraction from the AI’s still unknown goals.
Which of these are autonomous, or agents?
-----------------------------------------
The first is only doing exactly what it was instructed, under instructions that were not clear enough about the law to ensure the LLM didn’t engage in illegal securities trading. Since the system is only moderately profitable, it is unlikely even to be discovered to be breaking laws. If it is, it seems unlikely the actions pass the bar for willfulness under securities law, which would be required for criminal conviction, but there is almost certainly negligence on the part of the firm, which the SEC also prosecutes. This is closer to goal misspecification than to autonomy.
The second is going beyond the goals or intent of the group running the model. It independently chooses to take deceptive actions in the world, leading to an unintended disaster. The deception was explicitly requested by the group running the system. This is the type of mistake we might expect from an over-enthusiastic underling, but it’s clearly doing some things autonomously. The group is nefarious, but the specific actions taken were not theirs. This was an accident during misuse, rather than intentional autonomous action.
But in this second case, other than the deception and the unintended consequences, this is a degree of autonomy many have suggested we want from AI assistants - proactively trying things to achieve the goals it was given, interacting with people to make plans. If it were done to carry out a surprise birthday party, it could be regarded as a clever and successful use case.
The third case is what people think of as “full autonomy” - but it’s not that the system wakes up and becomes self-aware. Instead, it was given a goal, and carried it out. It obviously went far beyond the “actual” intent of the red team, but it did not suddenly wake up and decide to make plans. But this is far less of a goal misspecification or accident than the first or second case - it was instructed to do this.
Finally, the fourth case is yet again following instructions - in this case, exactly and narrowly. Nothing about this case is unintended by the builders of the system. But to the extent that such a system can ever be said to be a self-directed agent, this seems to qualify.
Autonomy isn’t emergent or unexpected.
--------------------------------------
Autonomy isn’t binary, and discussions about whether AI systems will have their own goals often seem deeply confused, and at best only marginally relevant to discussions of risk. At the same time, less fully agentic does not imply less dangerous. The combination of currently well-understood failure modes, goal misgeneralization, and incautious use is enough to create autonomy. And none of the examples required anything beyond currently expected types of misuse or lack of caution, extrapolated out five years. There is no behavior that goes beyond the types of accidental or purposeful misuse that we should expect. But if these examples are all not agents, and following orders is not autonomy, it seems likely that nothing could be - and the concept of autonomy is mostly a red herring in discussing whether the risk is or isn’t “actually” misuse. |
f272a7b9-fc85-4a9a-b5a3-3af5e90a43e6 | trentmkelly/LessWrong-43k | LessWrong | Mass-murdering neuroscience Ph.D. student
A Ph.D. student in neuroscience shot at least 50 people at a showing of the new Batman movie. He also appears to have released some kind of gas from a canister. Because of his educational background, this person almost certainly knows a lot about molecular biology. How long will it be (if ever) before a typical bio-science Ph.D. has the capacity to kill, say, a million people?
Edit: I'm not claiming that this event should cause a fully informed person to update on anything. Rather I was hoping that readers of this blog with strong life-science backgrounds could provide information that would help me and other interested readers assess the probability of future risks. Since this blog often deals with catastrophic risks and the social harms of irrationality and given that the events I described will likely dominate the U.S. news media for a few days I thought my question worth asking. Given the post's Karma rating (currently -4), however, I will update my beliefs about what constitutes an appropriate discussion post.
|
84909ca4-96a5-421c-a077-273bad4678c1 | StampyAI/alignment-research-dataset/blogs | Blogs | Two MIRI talks from AGI-11
Thanks in part to the volunteers at [MIRI Volunteers](http://mirivolunteers.org/), we can now release the videos, slides, and transcripts for two talks delivered at [AGI-11](http://agi-conf.org/2011/). Both talks represent joint work by Anna Salamon and Carl Shulman, who were MIRI staff at the time (back when MIRI was known as the “Singularity Institute”):
Salamon & Shulman (2011). Whole brain emulation as a platform for creating safe AGI. [[Video](http://www.youtube.com/watch?v=Cul4-p7joDk)] [[Slides](https://intelligence.org/wp-content/uploads/2014/01/Salamon-Shulman-Whole-brain-emulation-as-a-platform-for-creating-safe-AGI.pptx)] [[Transcript](https://docs.google.com/document/d/1-2A_cHiFC8fmeWHdQBeM7ynWaFdbklPm9u-UtKxZJ0A/pub)]
Shulman & Salamon (2011). Risk-averse preferences as an AGI safety technique. [[Video](http://www.youtube.com/watch?v=0xLw7eAogWk)] [[Slides](https://intelligence.org/wp-content/uploads/2014/01/Shulman-Salamon-Risk-averse-preferences-as-an-AGI-safety-technique.pptx)] [[Transcript](https://docs.google.com/document/d/1HF0aK2-nyulheAYpOyZ1Xat-PzZPpjt5BCmzChUCVKc/pub)]
The post [Two MIRI talks from AGI-11](https://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
b9a359e1-dbb0-420c-a375-f470493943d5 | trentmkelly/LessWrong-43k | LessWrong | Rational diet: Eating insects?
Given that dairy and meat are so environmentally harmful, perhaps insects are a viable alternative. There are several projects in this regard, for example LIVIN Farms Hive. Does anyone have experiences with this? What are your thoughts? Eating insects will surely come at the expense of some idiosyncrasy credits, but it may be rational to do so. |
086b774c-960b-4aeb-93c8-11c6076691b6 | trentmkelly/LessWrong-43k | LessWrong | An Increasingly Manipulative Newsfeed
Co-written with Stuart Armstrong
Treacherous turn vs sordid stumble
Nick Bostrom came up with the idea of a treacherous turn for smart AIs.
> while weak, an AI behaves cooperatively. When the AI is strong enough to be unstoppable it pursues its own values.
Ben Goertzel criticised this thesis, pointing out that:
> for a resource-constrained system, learning to actually possess human values is going to be much easier than learning to fake them. This is related to the everyday observation that maintaining a web of lies rapidly gets very complicated.
This argument has been formalised into the sordid stumble:
> An AI that lacks human desirable values will behave in a way that reveals its human-undesirable values to humans before it gains the capability to deceive humans into believing that it has human-desirable values.
The AI is too dumb to lie (well)
The sordid stumble describes a plausible-sounding scenario for how an AI develops capabilities. Initially, the AI doesn't know our values, and doesn't know us. Then it will start to learn our values (and we'll be checking up on how well it does that). It also starts to learn about us.
And then, once it's learnt some about us, it may decide to lie - about its values, and/or about its capabilities. But, like any beginner, it isn't very good at this initially: its lies and attempts at dissembling are laughably transparent, and we catch it quickly.
In this view, the "effective lying" is a tiny part of policy space, similar to the wireheading in this example. To hit it, the AI has to be very capable; to hit it the first time it tries without giving the game away, the AI has to be extraordinarily.
So, most likely, either the AI doesn't try to lie at all, or it does so and we catch it and sound the alarm[1].
Lying and concealing... from the very beginning
It's key to note that "lying" isn't a fundamentally defined category, and nor is truth. What is needed is that the AI's answer promotes correct understanding in |
ee202fbb-d2f6-4524-ab7c-ef3252a83230 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Nature of Logic
Today's post, The Nature of Logic was originally published on 15 November 2008. A summary (taken from the LW wiki):
> What logic actually does is preserve truth in a model. It says that if all of the premises are true, then this conclusion is indeed true. But that's not all that minds do. There's an awful lot else that you need, before you start actually getting anything like intelligence.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Selling Nonapples, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
3a85c474-7b48-4fb8-8c0b-a26412929dff | trentmkelly/LessWrong-43k | LessWrong | Sign up to be notified about new LW meetups in your area
LessWrong has rolled out a new feature on the user preference pages under "Location":
This is UNCHECKED by default. Also, we don't know by default where you live.
If you think you might want to meet up with other LWers, please input your location and check this box. Once enough people have signed up in a new area, it will make starting a LW meetup there that much more straightforward for the future organizer. (You'll just get that email; they won't see your email address.) In general, it seems like being able to know something about where LWers live would be helpful, so please consider entering your approximate location even if you don't check the box.
Thanks to Wesley Moore for deploying this upgrade to the LW backend. |
0733a6fe-4a7d-44ff-8ad4-e61bdc2ce684 | trentmkelly/LessWrong-43k | LessWrong | Taboo P(doom)
I think it's actively unhelpful to talk about P(doom), because it fails to distinguish between literally everyone dying and humanity failing to capture 99.99% of the value of the future under a total utilitarian view while, in practice, everyone who's born lives a very good life. These are very different outcomes, and it's unhelpful not to distinguish between them and everything else on that spectrum.
This is especially the case since astronomical waste arguments really only bite for total utilitarian views. Under moral views where a potential person not coming into existence is more like preventing someone from living an extraordinarily happy life rather than a merely happy one - as opposed to being akin to murder - it's quite reasonable to prioritise other goals well above preventing astronomical waste. Under these non-totalist views, preventing totalitarian lock-in or S-risks might look much more important than ensuring we don't create 10^(very large number) of happy lives.
I think this also matters on a practical level when talking about threat models of AI risks. Two people could have the same P(doom), but one is talking about humans being stripped for their atoms and the other about slow disempowerment in which no one actually dies and everyone, in fact, could be living very good lives, but humanity isn't able to capture almost all of the value of the future from a total utilitarian perspective. These plausibly require different interventions to stop them from happening.
It also seems like one's prior on humanity going extinct as a result of AI should be quite different from one's prior on disempowerment, but people often talk about what their prior on P(doom) should be as a univariate probability distribution. |
87b585e3-a6ba-41b5-816f-8efb2cad241d | trentmkelly/LessWrong-43k | LessWrong | Emergency learning
A putative new idea for AI control; index here.
Suppose that we knew that superintelligent AI was to be developed within six months, what would I do?
Well, drinking coffee by the barrel at MIRI's emergency research retreat I'd... still probably spend a month looking at things from the meta level, and clarifying old ideas. But, assuming that didn't reveal any new approaches, I'd try and get something like this working.
----------------------------------------
Standard setup
Take a reinforcement learner AI, that we want to safely move a strawberry onto a plate. A human sits nearby and provides a reward based on inspecting the AI's behaviour.
As it stands, this setup is completely vulnerable to reward hacking. The reward is not provided for safe moving of the strawberry; instead the reward is provided by having the human judge that the task has been accomplished and then pressing a button. Taking control of the human or control of the button is likely to be possible for a superintelligent AI; and, as it stands, that would be mandated by this reward function.
Learning from positive and various negative examples
Could we have the AI instead learn what the reinforcement signal "should be"? It seems that it might at least be possible, if we can make the AI learn from both positive and negative examples. I'd make five categories of examples from which the AI could learn. It may be too dangerous to have the superintelligent AI used directly in constructing these examples; in that case, the rewards would be given to a simpler, dumber version of the AI, and the examples passed on to the superintelligent AI for offline training.
1. Simple positive and negative examples. These are the basic examples from above: the AI completes the task or fails to, and gets the consequent reward. The AI stays within its room and the human is sober, rested, uninfluenced, and so on.
2. Simple more dubious examples. These are examples where the AI gets a reward, but the learning process j |
0da62d69-9492-45c1-923e-f1e88fc1dea0 | trentmkelly/LessWrong-43k | LessWrong | What is the most effective anti-tyranny charity?
Specifically: Big people hurting little people. Where can I get my largest bang per buck? I could donate to generic libertarian organizations, but I don't think they fight the generic brand of tyranny I want to fight, and my guess is that most of them are ineffective. |
6cf68a6b-081d-477e-81fd-f9cac3235b8f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | 2019 AI Alignment Literature Review and Charity Comparison
*Cross-posted to the EA forum [here](https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison).*
Introduction
============
As in [2016](https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison), [2017](https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison) and [2018](https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison), I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments.
My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.
I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) another existential risk capital allocation project, 2) the miracle of life and 3) computer games.
How to read this document
-------------------------
This document is fairly extensive, and some parts (particularly the methodology section) are the same as last year, so I don’t recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you.
If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well.
If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories.
Here are the un-scientifically-chosen hashtags:
* Agent Foundations
* AI\_Theory
* Amplification
* Careers
* CIRL
* Decision\_Theory
* Ethical\_Theory
* Forecasting
* Introduction
* Misc
* ML\_safety
* Other\_Xrisk
* Overview
* Philosophy
* Politics
* RL
* Security
* Shortterm
* Strategy
New to Artificial Intelligence as an existential risk?
------------------------------------------------------
If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend [this Vox piece](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) by Kelsey Piper.
If you are already convinced and are interested in contributing technically, I recommend [this piece](https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit) by Jacob Steinhardt, as, unlike this document, Jacob covers pre-2019 research and organises by topic, not organisation.
Research Organisations
======================
FHI: The Future of Humanity Institute
-------------------------------------
FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found [here](https://www.fhi.ox.ac.uk/publications/).
Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work.
In the past I have been very impressed with their work.
### Research
Drexler's [Reframing Superintelligence: Comprehensive AI Services as General Intelligence](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf?asd=sa) is a massive document arguing that superintelligent AI will be developed for individual discrete services for specific finite tasks, rather than as general-purpose agents. Basically the idea is that it makes more sense for people to develop specialised AIs, so these will happen first, and if/when we build AGI these services can help control it. To some extent this seems to match what is happening - we do have many specialised AIs - but on the other hand there are teams working directly on AGI, and often in ML 'build an ML system that does it all' ultimately does better than one featuring hand-crafted structure. While most books are full of fluff and should be blog posts, this is a super dense document - a bit like Superintelligence in this regard - and even more than with most research, I struggle to summarize it here, so I recommend reading it. See also Scott's comments [here](https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/). It is also admirably hyperlinked, so one does not have to read from start to finish. #Forecasting
Aschenbrenner's [Existential Risk and Economic Growth](https://leopoldaschenbrenner.github.io/xriskandgrowth/ExistentialRiskAndGrowth050.pdf) builds a model for economic growth, featuring investment in consumption and safety. As time goes on, diminishing marginal utility of consumption means that more and more is invested in safety over incremental consumption. It derives some neat results: for example, whether or not we almost certainly go extinct depends on whether safety investments scale faster than the risk from consumption; and generally speeding things up is better, because if there is a temporary risky phase it gets us through it faster - whereas if risk never converges to zero we will go extinct anyway. Overall I thought this was an excellent paper. #Strategy
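The flavour of the model can be caricatured in a few lines of simulation (the functional forms and growth rates below are my own illustrative assumptions, not the paper's calibration):

```python
# Toy consumption-vs-safety dynamic: per-period extinction hazard rises with
# consumption and falls with safety investment.
def survival_probability(consumption_growth, safety_growth, periods=500):
    alive, consumption, safety = 1.0, 1.0, 1.0
    for _ in range(periods):
        hazard = min(1.0, 0.01 * consumption / safety)
        alive *= 1.0 - hazard
        consumption *= 1.0 + consumption_growth
        safety *= 1.0 + safety_growth
    return alive

# Whether we "almost certainly go extinct" depends on which scales faster:
print(survival_probability(0.02, 0.03))  # safety outpaces risk: survival stays bounded away from 0
print(survival_probability(0.03, 0.02))  # risk outpaces safety: survival collapses towards 0
```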
Carey's [How useful is Quantilization for Mitigating Specification-Gaming](https://www.fhi.ox.ac.uk/wp-content/uploads/SafeML2019_paper_40.pdf) extends and tests [Taylor's previous work](https://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12613) on using quantilization to reduce overfitting. The paper first proves some additional results and then runs some empirical tests with plausible real-life scenarios, showing that the technique does a decent job improving true performance (by avoiding excessive optimisation on the imperfect proxy). However, the fact that they sometimes underperformed the imitator baseline makes me worry that maybe the optimisation algorithms were just not well suited to the task. Overall I thought this was an excellent paper. #ML\_safety
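For readers unfamiliar with the technique, the core mechanism is simple to state in code. A minimal sketch of a quantilizer over a discrete action set with a uniform base distribution (the toy proxy and all numbers are mine, not the paper's):

```python
import random

def quantilize(actions, proxy_utility, q, rng=random):
    # Sample uniformly from the top q-fraction of actions under the proxy,
    # instead of taking the argmax; q = 1 recovers the uniform base
    # distribution, and smaller q optimises harder.
    ranked = sorted(actions, key=proxy_utility, reverse=True)
    top = ranked[:max(1, int(len(ranked) * q))]
    return rng.choice(top)

# One action "games" the imperfect proxy (huge proxy score, bad in reality).
actions = list(range(100))
proxy = lambda a: 1000.0 if a == 13 else float(a)
print(quantilize(actions, proxy, q=0.1))  # argmax always picks 13; this picks it only 10% of the time
```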
O'Keefe's [Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission](https://www.fhi.ox.ac.uk/wp-content/uploads/Stable-Agreements.pdf) provides an introduction to the various ways current law allows contracts to be cancelled or adjusted after they have been made. For example, if subsequent circumstances have changed so dramatically that the fundamental nature of the contract has changed. The idea is that this helps promote stability by getting closer to 'what we really meant' than the literal text of the agreement. It is interesting but I am sceptical it is very helpful for AI Alignment, where forcing one group / AI that has suddenly become much more powerful to abide by their previous commitments seems like more of a challenge; post hoc re-writing of contracts seems like a recipe for the powerful to seize from the left behind. #Politics
Armstrong's [Research Agenda v0.9: Synthesising a human's preferences into a utility function](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into) lays out what Stuart thinks is a promising direction for safe AGI development. To avoid the impossibility of deducing values from behaviour, we build agents with accurate models of the way human minds represent the world, and extract (partial) preferences from there. This was very interesting, and I recommend reading it in conjunction with [this response](https://www.lesswrong.com/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9) from Steiner. #AI\_Theory
Kenton et al.'s [Generalizing from a few environments in Safety-Critical Reinforcement Learning](https://arxiv.org/abs/1907.01475) runs an experiment on how well some ML algorithms can generalise to avoid catastrophes. This aimed to get at the risk of agents doing something catastrophic when exposed to new environments after testing. I don't really understand how it is getting at this though - the hazard (lava) is the same in train and test, and the poor catastrophe-avoidance seems to simply be the result of the weak penalty placed on it during training (-1). #ML\_safety
Cihon's [Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development](https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf) advocates for the inclusion of safety-related elements into international standards (like those created by the IEEE). I'm not sure I see how these are directly helpful for the long-term problem while we don't yet have a technical solution - I generally think of these sorts of standards as mandating best practices, but in this case we need to develop those best practices. #Politics
Garfinkel & Dafoe's [How does the offense-defense balance scale?](https://www.tandfonline.com/doi/full/10.1080/01402390.2019.1631810) discuss and model the way that military effectiveness varies with investment in offence and defence. They discuss a variety of conflict modes, including invasions, cyber, missiles and drones. It seems that, in their model, cyberhacking is basically the same as invasions with varying sparse defences (due to the very large number of possible zero-day 'attack beaches'). #Misc
FHI also produced several pieces of research on bioengineered pathogens which are likely of interest to many readers – for example Nelson [here](https://www.fhi.ox.ac.uk/wp-content/uploads/C-Nelson-Engineered-Pathogens.pdf) – but which I have not had time to read.
FHI researchers contributed to the following research led by other organisations:
* Hubinger et al.'s [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820)
* Greaves & Cotton-Barratt's [A bargaining-theoretic approach to moral uncertainty](https://globalprioritiesinstitute.org/wp-content/uploads/2019/Greaves_Cotton-Barratt_bargaining_theoretic_approach.pdf)
* Snyder-Beattie et al.'s [An upper bound for the background rate of human extinction](https://www.nature.com/articles/s41598-019-47540-7)
* Zhang & Dafoe's [Artificial Intelligence: American Attitudes and Trends](https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/)
* Evans et al.'s [Machine Learning Projects for Iterated Distillation and Amplification](https://owainevans.github.io/pdfs/evans_ida_projects.pdf)
### Finances
FHI didn’t reply to my emails about donations, and seem to be more limited by talent than by money.
If you wanted to donate to them anyway, [here](https://www.fhi.ox.ac.uk/support-fhi/) is the relevant web page.
CHAI: The Center for Human-Aligned AI
-------------------------------------
CHAI is a UC Berkeley based AI Safety Research organisation founded in 2016 by Stuart Russell.. They do ML-orientated safety research, especially around inverse reinforcement learning, and cover both near and long-term future issues.
As an academic organisation their members produce a very large amount of research; I have only tried to cover the most relevant below. It seems they do a better job engaging with academia than many other organisations.
Rohin Shah, now with additional help, continue to produce the [AI Alignment Newsletter](https://rohinshah.com/alignment-newsletter/), covering in detail a huge number of interesting new developments, especially new papers.
They are expanding somewhat to other universities outside Berkeley.
### Research
Shah et al.'s [On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference](https://arxiv.org/abs/1906.09624) argues that learning human values and biases at the same time, while impossible in theory, is actually possible in practice. Attentive readers will recall Armstrong and Mindermann's [paper](https://arxiv.org/abs/1712.05812) arguing that it is impossible to co-learn human bias and values because any behaviour is consistent with any values - if we can freely vary the biases - and vice versa. This paper basically argues that, like the [No Free Lunch theorem](https://en.wikipedia.org/wiki/No_free_lunch_theorem), in practice this just doesn't matter that much, basically by assuming that the agent is close-to-optimal. (They also discuss the potential of using some guaranteed-optimal behaviour as ground truth, but I am sceptical this would work, as I think humans are often at their most irrational when it comes to the most important topics, e.g. love). Empirically, in their gridworld tests their agent did a decent job learning - for reasons I didn't really understand. Overall I thought this was an excellent paper. #CIRL
Turner et al.'s [Conservative Agency](https://arxiv.org/abs/1902.09725) attempts to prevent agents from doing irreversible damage by making them consider a portfolio of randomly generated utility functions - on the grounds that irreversible damage is probably bad according to at least one of them. Notably, this portfolio did \*not\* include the true utility function. I find the result a little hard to understand - I initially assumed they were relying on clustering of plausible utility functions, but it seems that they actually sampled at random from the entire space of possible functions! I don't really understand how they avoid Armstrong + Mindermann type problems, but apparently they did! It seems like this line of attack pushes us towards Universal Drives, as something many utility functions will have in common. Overall I thought this was an excellent paper. #ML\_safety
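Roughly, as I read the paper, the agent's effective reward is the true task reward minus a scaled average change - relative to doing nothing - in its ability to pursue the auxiliary functions. A minimal sketch with made-up numbers:

```python
def conservative_reward(task_reward, q_action, q_noop, lam=0.1):
    # q_action[i] / q_noop[i]: Q-value of the chosen action vs. the no-op
    # under the i-th randomly generated auxiliary utility function
    # (these Q-values are learned separately).
    penalty = sum(abs(a - n) for a, n in zip(q_action, q_noop)) / len(q_action)
    return task_reward - lam * penalty

# Irreversible actions tend to shift attainable utility under many auxiliary
# functions at once, so they are penalised even though none is the true one.
print(conservative_reward(1.0, q_action=[0.9, 0.1, 0.5], q_noop=[0.5, 0.5, 0.5]))
```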
Carroll et al.'s [On the Utility of Learning about Humans for Human-AI Coordination](http://papers.nips.cc/paper/8760-on-the-utility-of-learning-about-humans-for-human-ai-coordination.pdf) discusses the differences between competitive versus collaborative learning. If you just want to be really good at a competitive game, self-play is great, because you get better by playing better and better versions of yourself. However, if you have to collaborate with a human this is bad because your training doesn't feature flawed partners (in the limit) and min-maxing doesn't work. They do an experiment showing that an agent taught about how humans act does better than one which learnt collaborating with itself. This seems useful if you think that CIRL/amplification approaches will be valuable, and also promotes teaching AIs to understand human values. There is also a blog post [here](https://bair.berkeley.edu/blog/2019/10/21/coordination/). #CIRL
Chan et al.'s [The Assistive Multi-Armed Bandit](https://arxiv.org/abs/1901.08654) attempts to do value learning with humans who are themselves still learning their own values. They do this by having the agent sometimes 'intercept' the human on a multi-armed bandit problem, and show that this sometimes improves performance if the agent understands how the human is learning. #CIRL
Russell's [Human Compatible; Artificial Intelligence and the Problem of Control](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/ref=sr_1_2?keywords=Stuart+Russell&qid=1565996574&s=books&sr=1-2) is an introductory book aimed at the intelligent layman. As befits the author, it begins with a lot of good framing around intelligence and agency. The writing style is good. #Overview
Shah et al.'s [Preferences Implicit in the State of the World](https://arxiv.org/abs/1902.04198) attempts to use the fact that human environments are already semi-optimised to extract additional evidence about human preferences. Practically, this basically means simulating many paths the humans could have taken prior to t=0 and using these as evidence as to the human's values. The core of the paper is a good insight - *"it is easy to forget these preferences, since these preferences are already satisfied in our environment."* #CIRL
CHAI researchers contributed to the following research led by other organisations:
* Agrawal et al.'s [Scaling up Psychology via Scientific Regret Minimization:A Case Study in Moral Decision-Making](https://arxiv.org/abs/1910.07581)
### Finances
They have been funded by various EA organisations including the Open Philanthropy Project and recommended by the [Founders Pledge](https://founderspledge.com/stories/existential-risk-executive-summary).
They spent $1,450,000 in 2018 and $2,000,000 in 2019, and plan to spend around $2,150,000 in 2020. They have around $4,650,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 2.2 years of runway.
If you wanted to donate to them, [here](https://give.berkeley.edu/egiving/index.cfm?Fund=FN3330000) is the relevant web page.
MIRI: The Machine Intelligence Research Institute
-------------------------------------------------
MIRI is a Berkeley based independent AI Safety Research organisation founded in 2000 by Eliezer Yudkowsky and currently led by Nate Soares. They were responsible for much of the early movement building for the issue, but have refocused to concentrate on research for the last few years. With a fairly large budget now, they are the largest pure-play AI alignment shop. Their research can be found [here](https://intelligence.org/all-publications/). Their annual summary can be found [here](https://intelligence.org/2019/12/02/miris-2019-fundraiser/).
In general they do very ‘pure’ mathematical work, in comparison to other organisations with more ‘applied’ ML or strategy focuses. I think this is especially notable because of the irreplaceability of the work. It seems quite plausible that some issues in AI safety will arise early on and in a relatively benign form for non-safety-orientated AI ventures (like autonomous cars or Minecraft helpers) – however the work MIRI does largely does not fall into this category. I have also historically been impressed with their research.
Their agent foundations work is basically trying to develop the correct way of thinking about agents and learning/decision making by spotting areas where our current models fail and seeking to improve them. This includes things like thinking about agents creating other agents.
In their annual write-up they suggest that progress was slower than expected in 2019. However I assign little weight to this as I think most of the cross-sectional variation in organisation reported subjective effectiveness comes from variance in how optimistic/salesy/aggressive they are, rather than actually indicating much about object-level effectiveness.
MIRI, in collaboration with CFAR, runs a series of four-day workshop/camps, the [AI Risk for Computer Scientists workshops](https://intelligence.org/ai-risk-for-computer-scientists/), which gather mathematicians/computer scientists who are potentially interested in the issue in one place to learn and interact. This sort of workshop seems very valuable to me as an on-ramp for technically talented researchers, which is one of the major bottlenecks in my mind. In particular they have led to hires for MIRI and other AI Risk organisations in the past. I don’t have any first-hand experience however.
They also support [MIRIx workshops](https://intelligence.org/mirix/) around the world, for people to come together to discuss and hopefully contribute towards MIRI-style work.
### Research
Hubinger et al.'s [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820) introduces the idea of a Mesa-Optimizer - a sub-agent of an optimizer that is itself an optimizer. A vague hand-wave of an example might be for-profit corporations rewarding their subsidiaries based on segment PnL, or indeed evolution creating humans, which then go on to create AI. Necessarily theoretical, the paper motivates the idea, introduces a lot of terminology, and describes conditions that might make mesa-optimisers more or less likely - for example, more diverse environments make mesa-optimisation more likely. In particular, they distinguish between different forms of mis-alignment - e.g. between meta, object-level and mesa, vs between mesa and behavioural objectives. There is a sequence on the forum about it [here](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB). Overall I thought this was an excellent paper. Researchers from FHI, OpenAI were also named authors on the paper. #Agent Foundations
Kosoy's [Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help](https://arxiv.org/abs/1907.08461) produces an algorithm that deviates only boundedly from optimal with a human intervening to prevent it stumbling into irrevocably bad actions. The idea is basically that the human intervenes to prevent the really bad actions, but because the human has some chance of selecting the optimal action afterwards, the loss of exploration value is limited. This attempts to avoid the problem that 'ideal intelligence' AIXI has whereby it might drop an anvil on its head. I found the proof a bit hard to follow, so I'm not sure how tight the bound is in practice. Notably, this doesn't protect us if the agent tries to prevent the human from intervening. [Related](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754d5/delegative-reinforcement-learning-with-a-merely-sane-advisor). #ML\_safety
There were two analyses of FDT from academic philosophers this year (reviewed elsewhere in this document). In both cases I felt their criticisms rather missed the mark, which is a positive for the MIRI approach. However, they did convincingly argue that MIRI researchers hadn’t properly understood the academic work they were critiquing, an isolation which has probably gotten worse with MIRI’s current secrecy. MIRI suggested I point out that [Cheating Death In Damascus](https://intelligence.org/files/DeathInDamascus.pdf) had recently been accepted in The Journal of Philosophy, a top philosophy journal, as evidence of (hopefully!) mainstream philosophical engagement.
MIRI researchers contributed to the following research led by other organisations:
* MacAskill & Demski's [A Critique of Functional Decision Theory](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory)
### Non-disclosure policy
Last year MIRI announced their policy of [nondisclosure-by-default](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3):
*[G]oing forward, most results discovered within MIRI will remain internal-only unless there is an explicit decision to release those results, based usually on a specific anticipated safety upside from their release.*
I wrote about this at length [last year](https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison), and my opinion hasn’t changed significantly since then, so I will just recap briefly.
On the positive side we do not want people to be pressured into premature disclosure for the sake of funding. This space is sufficiently full of infohazards that secrecy might be necessary, and in its absence researchers might prudently shy away from working on potentially risky things - in the same way that no-one in business sends sensitive information over email any more. MIRI are in exactly the sort of situation that you would expect might give rise to the need for extreme secrecy. If secret research is a necessary step *en route* to saving the world, it will have to be done by someone, and it is not clear there is anyone very much better.
On the other hand, I don’t think we can give people money just because they say they are doing good things, because of the risk of abuse. There are many other reasons for not publishing anything. Some simple alternative hypothesis include “we failed to produce anything publishable” or “it is fun to fool ourselves into thinking we have exciting secrets” or “we are doing bad things and don’t want to get caught.” The fact that MIRI’s researchers appear intelligent suggest they at least think they are doing important and interesting issues, but history has many examples of talented reclusive teams spending years working on pointless stuff in splendid isolation.
Additionally, by hiding the highest quality work we risk impoverishing the field, making it look unproductive and unattractive to potential new researchers.
One possible solution would be for the research to be done by impeccably deontologically moral people, whose moral code you understand and trust. Unfortunately I do not think this is the case with MIRI. (I also don’t think it is the case with many other organisations, so this is not a specific criticism of MIRI, except insomuch as you might have held them to a higher standard than others).
### Finances
They spent $3,750,000 in 2018 and $6,000,000 in 2019, and plan to spend around $6,800,000 in 2020. They have around $9,350,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.4 years of runway.
They have been supported by a variety of EA groups in the past, including OpenPhil.
If you wanted to donate to MIRI, [here](https://intelligence.org/donate/) is the relevant web page.
GCRI: The Global Catastrophic Risks Institute
---------------------------------------------
GCRI is a globally-based independent Existential Risk Research organisation founded in 2011 by Seth Baum and Tony Barrett. They cover a wide variety of existential risks, including artificial intelligence, and do policy outreach to governments and other entities. Their research can be found [here](http://gcrinstitute.org/publications/). Their annual summary can be found [here](http://gcrinstitute.org/summary-of-2019-2020-gcri-accomplishments-plans-and-fundraising).
In 2019 they [ran an advising program](http://gcrinstitute.org/summary-of-2019-advising-and-collaboration-program/) where they gave guidance to people from around the world who wanted to help work on catastrophic risks.
In the past I have praised them for producing a remarkably large volume of research; this slowed down somewhat during 2019 despite taking on a second full-time staff member, which they attributed partly to timing issues (e.g. pieces due to be released soon), and partly to focusing on quality over quantity.
### Research
Baum et al.'s [Lessons for Artificial Intelligence from Other Global Risks](http://gcrinstitute.org/papers/lessons.pdf) analogises AI risk to several other global risks: biotech, nukes, global warming and asteroids. In each case it discusses how action around the risk progressed, in particular the role of gaining expert consensus and navigating vested interests. #Strategy
Baum's [The Challenge of Analyzing Global Catastrophic Risks](https://higherlogicdownload.s3.amazonaws.com/INFORMS/f0ea61b6-e74c-4c07-894d-884bf2882e55/UploadedImages/2019_July.pdf#page=20) introduces the idea of catastrophic risks and discusses some general issues. It argues for the need to quantify various risks, and ways to present these to policymakers. #Other\_Xrisk
Baum's [Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3397559) discusses how to compare the protection from asteroids that nukes offer vs their potential to exacerbate war. #Other\_Xrisk
### Finances
During December 2018 they received a [$250,000 donation](http://gcrinstitute.org/gcri-receives-250000-donation-for-ai-research-and-outreach/) from Gordon Irlam.
They spent $140,000 in 2018 and $250,000 in 2019, and plan to spend around $250,000 in 2020. They have around $310,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.2 years of runway.
If you want to donate to GCRI, [here](http://gcrinstitute.org/donate/) is the relevant web page.
CSER: The Center for the Study of Existential Risk
--------------------------------------------------
CSER is a Cambridge based Existential Risk Research organisation founded in 2012 by Jaan Tallinn, Martin Rees and Huw Price, and then established by Seán Ó hÉigeartaigh with the first hire in 2015. They are currently led by Catherine Rhodes and are affiliated with Cambridge University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found [here](https://www.cser.ac.uk/resources/filter/all/all/all/2019/). Their annual summary can be found [here](https://www.cser.ac.uk/news/six-month-report-apr-sep-2019/) and [here](https://www.cser.ac.uk/news/six-month-report-nov-april-2019/).
CSER also participated in a lot of different outreach events, including to the UK parliament and by hosting various workshops, as well as [submitting](https://www.cser.ac.uk/media/uploads/files/Response_to_the_HLEG.pdf) (along with other orgs) to the EU’s consultation, as summarised in [this post](https://forum.effectivealtruism.org/posts/4QGEwinoiaXTQLNZk/centre-for-the-study-of-existential-risk-six-month-report-1). I’m not sure how to judge the value of these.
CSER’s researchers seem to select a somewhat widely ranging group of research topics, which I worry may reduce their effectiveness.
Catherine Rhodes [co-edited a volume of papers](https://www.cser.ac.uk/resources/special-issue-existential-risk-humanity/) on existential risks, including many by other groups mentioned in this review.
### Research
Kaczmarek & Beard's [Human Extinction and Our Obligations to the Past](https://sci-hub.tw/https:/www.cambridge.org/core/journals/utilitas/article/human-extinction-and-our-obligations-to-the-past/C29A0406EFA2B43EE8237D95AAFBB580) presents an argument that even people who hold person-affecting views should think extinction is bad because it undermines the sacrifices of our ancestors. My guess is that most readers are not in need of persuading that extinction is bad, but I thought this was an interesting additional argument. The core idea is that if someone makes a large sacrifice to enable some good, we have a pro tanto reason not to squander that sacrifice. I'm not sure how many people will be persuaded by this idea, but as a piece of philosophy I thought this was a clever idea, and it is definitely good to promote the idea that past generations have value (speaking as a future member of a past generation). Carl Shulman also offered related arguments [here](http://reflectivedisequilibrium.blogspot.com/2019/11/person-affecting-views-may-be-dominated.html). #Philosophy
Beard's [Perfectionism and the Repugnant Conclusion](https://link.springer.com/article/10.1007/s10790-019-09687-4) argues against one supposed rejection of the Repugnant Conclusion, namely that some goods are lexicographically superior to ordinary welfare. The paper makes the clever argument that the very large, barely-worth-living group might actually have more of these goods if they were offset by (lexicographically secondary) negative welfare. It was also the first time (to my recollection) that I've come across the Ridiculous Conclusion. #Philosophy
Avin's [Exploring Artificial Intelligence Futures](https://www.shaharavin.com/publication/pdf/exploring-artificial-intelligence-futures.pdf) lists and discusses different ways of introducing people to the future of AI. These include fiction, games, expert analysis, polling and workshops. He also provides various pros and cons of the different techniques, which seemed generally accurate to me. #Strategy
Belfield's [How to respond to the potential malicious uses of artificial intelligence?](https://www.cser.ac.uk/resources/how-respond-potential-malicious-uses-artificial-intelligence/) introduces AI and AI risk. This short article focuses mainly on short-term risks. #Introduction
Weitzdörfer & Beard's [Law and Policy Responses to Disaster-Induced Financial Distress](https://sci-hub.tw/10.1007/978-981-13-9005-0) discusses the problem of indebtedness following the destruction of collateral in the 2011 earthquake in Japan. They explain the specifics of the situation in extreme detail, and I was pleasantly surprised by their final recommendations, which mainly concerned removing barriers to insurance penetration. #Politics
Kemp's [Mediation Without Measures: Conflict Resolution in Climate Diplomacy](https://www.cser.ac.uk/resources/mediation-without-measures/) discusses the lack of formal decision-making procedure for international climate change treaties. Unfortunately I wasn't able to access the article. #Other\_Xrisk
Avin & Amadae's [Autonomy and machine learning at the interface of nuclear weapons, computers and people](https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-nuclear-risk.pdf) discusses the potential dangers of incorporating narrow AI into nuclear weapon systems. #Shortterm
CSER's Policy series [Managing global catastrophic risks: Part 1 Understand](https://www.gcrpolicy.com/understand-overview) introduces the idea of Xrisk for policymakers. This is the first report in a series, and as such is quite introductory. It mainly focuses on non-AI risks. #Politics
Tzachor's [The Future of Feed: Integrating Technologies to Decouple Feed Production from Environmental Impacts](https://www.liebertpub.com/doi/full/10.1089/ind.2019.29162.atz) discusses a new technology for producing animal feedstock to replace soybeans. This could be Xrisk relevant if some non-AI risk made it hard to feed animals. However, I am somewhat sceptical of the presentation of this as a \*likely\* risk, as both a future shortage of soybeans and a dramatically more efficient technology for feeding livestock would presumably be of interest to private actors, and show up in soybean futures prices. #Other\_Xrisk
Beard's [What Is Unfair about Unequal Brute Luck? An Intergenerational Puzzle](https://www.cser.ac.uk/resources/brute-luck-intergenerational-puzzle/) discusses Luck Egalitarianism. #Philosophy
Quigley's [Universal Ownership in the Anthropocene](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457205) argues that because investors own diversified portfolios they effectively internalise externalities, and hence should push for various political changes. The idea is basically that even though polluting might be in a company's best interest, it hurts the other companies the investor owns, so it is overall against the best interests of the investor. As such, investors should push companies to pollute less and so on. The paper seems to basically assume that such 'universal investors' would be incentivised to support left-wing policies on a wide variety of issues. However, it somehow fails to mention even cursorily the fact that the core issue has been well studied by economists: when all the companies in an industry try to coordinate for mutual benefit, it is called a cartel, and the #1 way of achieving mutual benefit is raising prices to near-monopoly levels. It would be extremely surprising to me if someone, acting as a self-interested owner of all the world's shoe companies (for example) found it more profitable to protect biodiversity than to raise the price of shoes. Fortunately, in practice universal investors are quite supportive of competition. #Other\_Xrisk
CSER researchers contributed to the following research led by other organisations:
* Colvin et al.'s [Learning from the Climate Change Debate to Avoid Polarisation on Negative Emissions](https://sci-hub.tw/10.1080/17524032.2019.1630463)
* Hernandez-Orallo et al.'s [Surveying Safety-relevant AI Characteristics](http://ceur-ws.org/Vol-2301/paper_22.pdf)
* Cave & Ó hÉigeartaigh's [Bridging near- and long-term concerns about AI](https://www.nature.com/articles/s42256-018-0003-2)
* Lewis et al.'s [Assessing contributions of major emitters' Paris‐era decisions to future temperature extremes](https://www.cser.ac.uk/resources/assessing-contributions-extremes/)
### Finances
They spent £789,000 in 2017-2018 and £801,000 in 2018-2019, and plan to spend around £1,100,000 in 2019-20 and £880,000 in 2020-21. It seems that, similar to GPI, ‘runway’ may not be that meaningful here - they suggested funding begins to decline from early 2021 and all their current grants end by mid-2024.
If you want to donate to them, [here](https://www.cser.ac.uk/support-us/) is the relevant web page.
Ought
-----
Ought is a San Francisco based independent AI Safety Research organisation founded in 2018 by Andreas Stuhlmüller. They research methods of breaking up complex, hard-to-verify tasks into simple, easy-to-verify tasks - to ultimately allow us effective oversight over AIs. This includes building computer systems and recruiting test subjects. I think of them as basically testing Paul Christiano's ideas. Their research can be found [here](https://ought.org/updates). Their annual summary can be found [here](https://ought.org/updates/2019-10-28-progress-update).
Last year they were focused on factored generation – trying to break down questions so that distributed teams could produce the answer. They have moved on to factored evaluation – using similar distributed ideas to try to evaluate existing answers, which seems a significantly easier task (by analogy to P<=NP). It seems to my non-expert eye that factored generation did not work as well as they expected – they mention the required trees being extremely large, and my experience is that organising volunteers and getting them to actually do what they said they would has historically been a great struggle for many organisations. However I don’t think we should hold negative results in investigations against organisations; negative results are valuable, and it might be the case that all progress in this difficult domain comes from *ex ante* longshots. If nothing else, even if Paul is totally wrong about the whole idea it would be useful to discover this sooner rather than later!
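To make the generation/evaluation distinction concrete, here is a toy sketch of factored evaluation. This is purely my own illustration, not Ought's system: `decompose` and `judge_directly` are stand-ins for the human (or ML) workers, and the semicolon-splitting heuristic is obviously fake.

```python
# Toy factored evaluation (illustrative only): a hard-to-verify claim is
# recursively split into sub-claims until each piece is simple enough
# for a single 'judge' to check directly.

def judge_directly(claim: str) -> bool:
    """Stand-in for a worker checking a simple claim."""
    return "false" not in claim  # purely illustrative heuristic

def decompose(claim: str) -> list:
    """Stand-in for splitting a claim into sub-claims."""
    return [part.strip() for part in claim.split(";") if part.strip()]

def factored_evaluate(claim: str, max_depth: int = 3) -> bool:
    subclaims = decompose(claim)
    if max_depth == 0 or len(subclaims) <= 1:
        return judge_directly(claim)
    # Accept the claim only if every sub-claim is accepted.
    return all(factored_evaluate(sub, max_depth - 1) for sub in subclaims)

print(factored_evaluate("the sky is blue; grass is green"))  # -> True
print(factored_evaluate("the sky is blue; this is false"))   # -> False
```

The point of the factored setup is that each call to `judge_directly` is cheap and easy to audit, even when the top-level claim is not.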
They provided an interesting example of what their work looks like in practice [here](https://mosaic.ought.org/w/4268/compactTree?expanded=true&activeWorkspace=a8130e5b-5a48-4aaa-b633-153011640d77), and a detailed presentation on their work [here](https://docs.google.com/presentation/d/1-6GfUxX-uhypvY0LccRH9NgVmpjNvfTsQcouKeSplGI/edit#slide=id.g6419deab90_0_23).
They also worked on using ML, rather than humans, as the agent who answered the broken-down questions, in this case by using GPT-2, which seems like a clever idea.
Paul Christiano wrote a post advocating donating to them [here](https://www.alignmentforum.org/posts/cpewqG3MjnKJpCr7E/ought-why-it-matters-and-ways-to-help).
### Research
Evans et al.'s [Machine Learning Projects for Iterated Distillation and Amplification](https://owainevans.github.io/pdfs/evans_ida_projects.pdf) provides three potential research projects for people who want to work on Amplification, as well as an introduction to Amplification. The projects are mathematical decomposition (which seems very natural), decomposing computer programs (similar to how all programs can be decomposed into logic gates, although I don't really understand this one) and adaptive computation, where you figure out how much computation to dedicate to different issues. In general I like outlining these sorts of 'shovel-ready' projects, as it makes it easier for new researchers, and seems relatively under-appreciated. Researchers from FHI were also named authors on the paper. #Amplification
Roy's [AI Safety Open Problems](https://docs.google.com/document/d/1J2fOOF-NYiPC0-J3ZGEfE0OhA-QcOInhlvWjr1fAsS0/edit) provides a list of lists of 'shovel-ready' projects for people to work on. If you like X (which I do in this case), meta-X is surely even better! #Ought
### Finances
They spent $500,000 in 2018 and $1,000,000 in 2019, and plan to spend around $2,500,000 in 2020. They have around $1,800,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 0.7 years of runway.
They have received funding from a variety of EA sources, including the Open Philanthropy Project.
OpenAI
------
OpenAI is a San Francisco based independent AI Research organisation founded in 2015 by Sam Altman, Elon Musk and others. They are one of the leading AGI research shops, with a significant focus on safety.
Earlier this year they announced [GPT-2](https://openai.com/blog/better-language-models/), a language model that was much better at ‘understanding’ human text than previous attempts, and notably good at generating text that seemed human-written - good enough to be [indistinguishable to humans who weren’t concentrating](https://www.greaterwrong.com/posts/4AHXDwcGab5PhKhHT/humans-who-are-not-concentrating-are-not-general). This was especially notable because OpenAI chose not to immediately release GPT-2 due to the potential for abuse. I thought this was a noble effort to start conversations among ML researchers about release norms, though my impression is that many thought OpenAI was just grandstanding, and I personally was sceptical of the harm potential - though a GPT-2-based intelligence did go on to [almost take over LW](https://www.greaterwrong.com/posts/7xJiotzeonZaAbgSp/user-gpt2-is-banned), proving that ‘being a good LW commenter’ is a hard goal. Outside researchers were able to (partly?) replicate it, but in a surprisingly heartening turn of events were persuaded [not to release their reconstruction](https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51) by researchers from OpenAI and MIRI. OpenAI eventually released a much larger version of their system - you can see it and read their follow-up report on the controlled release process [here](https://openai.com/blog/gpt-2-1-5b-release/).
You can play with (one version of) the model [here](https://talktotransformer.com/).
### Research
Clark & Hadfield's [Regulatory Markets for AI Safety](https://drive.google.com/uc?export=download&id=1bFPiwLrZc7SQTMg2_bW4gt0PaS5NyqOH) suggests a model for the privatisation of AI regulation. Basically the idea is that governments will contract with and set outcomes for a small number of private regulators, which will then devise specific rules that need to be observed by ML shops. This allows the *ex-ante* regulation to be more nimble than if it was done publicly, while retaining the *ex-post* outcome guarantees. It reminded me of the system of auditors for public companies to ensure accounting accuracy, or of David Friedman's work on [polycentric law](https://www.amazon.com/Legal-Systems-Very-Different-Ours/dp/1793386722). I can certainly see why private companies might be more effective as regulators than government bodies. However, I'm not sure how useful this would be in an AGI scenario, where the goals and *ex-post* measurement for the private regulators are likely to become outdated and irrelevant. I'm also sceptical that governments would be willing to progressively give up regulatory powers; I suspect that if this system was to be adopted it would have to pre-empt government regulation. #Politics
Christiano's [What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) provides two scenarios that Paul thinks represent reasonably likely outcomes of Alignment going wrong. Notably neither exactly match the classic recursively self-improving FOOM case. The first is basically that we develop better and better optimisation techniques, but due to our inability to correctly specify what we want, we end up with worse and worse Goodheart's Law situations, ending up in a Red-Queen-style [Moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) scenario. The second is that we create algorithms that try to increase their influence (as per [the fundamental drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)). At first they do so secretly, but eventually (likely in response to some form of catastrophe reducing humanity's capability to suppress them) their strategy abruptly changes towards world domination. I thought this was an insightful post, and recommend readers also read the comments by Dai and Shulman, as well as [this post](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety). #Forecasting
Christiano's [AI alignment landscape](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) is a talk Paul gave at EA Global giving an overview of the issue. It is interesting both for seeing how he maps out all the different components of the problem and which he thinks are tractable and important, and also for how his Amplification approach falls out from this. #Overview
Irving & Askell's [AI Safety Needs Social Scientists](https://distill.pub/2019/safety-needs-social-scientists/) raises the issue of AI alignment requiring better understanding of humans as well as ML knowledge. Because humans are biased, etc., the more accurate our model of human preferences, the better we can design AIs to align with it. It is quite focused on Amplification as a way of making human preferences more legible. I thought the article could have been improved with more actionable research projects for social scientists. Additionally, the article makes the need for social scientists seem somewhat tied to a Debate-style approach, whereas it seems to me potentially more broad. #Strategy
OpenAI Researchers also contributed to the following papers led by other organisations:
* Hubinger et al.'s [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820)
### Finances
OpenAI was initially funded with money from Elon Musk as a not-for-profit. They have since created an unusual corporate structure including a for-profit entity, in which [Microsoft is investing a billion dollars](https://openai.com/blog/microsoft/).
Given the strong funding situation at OpenAI, as well as their safety team’s position within the larger organisation, I think it would be difficult for individual donations to appreciably support their work. However it could be an excellent place to apply to work.
Google DeepMind
---------------
DeepMind is a London based AI Research organisation founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman and currently led by Demis Hassabis. They are affiliated with Google. As well as being arguably the most advanced AI research shop in the world, DeepMind has a very sophisticated AI Safety team, covering [both ML safety and AGI safety](https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/).
This year DeepMind built an agent that could [beat humans at StarCraft II](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii). This is impressive because it is a complex, incomplete information game that humans are very competitive at. However, the AI did have some advantages over humans, like direct API access.
### Research
Everitt & Hutter's [Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective](https://arxiv.org/abs/1908.04734) discusses the problem of agents wireheading in an RL setting, along with several possible solutions. They use causal influence diagrams to highlight the difference between 'good' ways for agents to increase their reward function and 'bad' ways, and have a nice toy gridworld example. The solutions they discuss seemed to me to often be fairly standard ideas from the AI safety community - things like teaching the AI to maximise the goal instantiated by its reward function at the start, rather than whatever happens to be in that box later, or using indifference results - but they introduce them to an RL setting, and the paper does a good job covering a lot of ground. There is more discussion of the paper [here](https://www.lesswrong.com/posts/pjzhmtivXd8zgKXDT/designing-agent-incentives-to-avoid-reward-tampering). Overall I thought this was an excellent paper. #RL
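As a toy illustration of one of those standard ideas - scoring plans with the reward function the agent started with, rather than whatever ends up in the box later - here is a minimal sketch of my own (not code from the paper):

```python
# Sketch of 'initial reward function' optimisation: plans are scored by
# a snapshot of the reward function taken at start-up, so a plan that
# rewrites the live reward function gains nothing under evaluation.

def make_planner(initial_reward_fn):
    frozen = initial_reward_fn  # snapshot taken once, at start-up
    def score_plan(predicted_outcome):
        return frozen(predicted_outcome)
    return score_plan

reward = lambda outcome: outcome["paperclips"]
score_plan = make_planner(reward)
reward = lambda outcome: float("inf")   # later tampering with the live
print(score_plan({"paperclips": 3}))    # reward... scoring still gives 3
```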
Everitt et al.'s [Modeling AGI Safety Frameworks with Causal Influence Diagrams](https://arxiv.org/abs/1906.08663) introduces the idea of using Causal Influence Diagrams to clarify thinking around AI safety proposals and make it easier to compare proposals with different conceptual backgrounds in a standard way. They introduce the idea, and show how to represent ideas like RL, CIRL, Counterfactual Oracles and Debate. Causal Influence Diagrams have been used in several other papers this year, like Categorizing Wireheading in Partially Embedded Agents. #AI\_Theory
Everitt et al.'s [Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings](https://arxiv.org/abs/1902.09980) discusses using causal influence diagrams to distinguish things agents want to observe vs things they want to control. They use this to show the safety improvement from counterfactual oracles. It also presents a natural link between near-term and long-term safety concerns. #AI\_Theory
Sutton's [The Bitter Lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) argues that history suggests massive amounts of compute and relatively general structures perform better than human-designed specialised systems. He uses examples like the history of vision and chess, and it seems fairly persuasive, though I wonder a little if these are cherry-picked - e.g. in finance we generally do have to make considerable use of human-comprehensible features. This is not directly an AI safety paper, but it does have clear implications. #Forecasting
Uesato et al.'s [Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures](https://arxiv.org/abs/1812.01647) attempt to make it easier to find catastrophic failure cases. They do this adversarially with previous versions of the algorithm, based on the idea that it is cheaper to find disasters there, but they will be related to the failure modes of the later instantiations. This seems like an interesting idea, but seems like it would struggle with cases where increasing agent capabilities lead to new failure modes - e.g. the Treacherous Turn we are worried about. #ML\_safety
Ngo's [Technical AGI safety research outside AI](https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai) provides a list of technically useful topics for people who are not ML researchers to work on. The topics selected look good - many similar to work AIImpacts or Ought do. I think lists like this are very useful for opening the field up to new researchers. #Overview
Researchers from DeepMind were also named on the following papers:
* Krueger et al.'s [Misleading Meta-Objectives and Hidden Incentives for Distributional Shift](https://drive.google.com/uc?export=download&id=1k93292JCoIHU0h6xVO3qmeRwLyOSlS4o)
### Finances
Being part of Google, I think it would be difficult for individual donors to directly support their work. However it could be an excellent place to apply to work.
AI Safety camp
--------------
AISC is an internationally based independent residential research camp organisation founded in 2018 by Linda Linsefors and currently led by Colin Bested. They bring together people who want to start doing technical AI research, hosting a 10-day camp aiming to produce publishable research. Their research can be found [here](https://aisafetycamp.com/2019/11/07/aisc3-research-summaries/).
To the extent they can provide an on-ramp to get more technically proficient researchers into the field I think this is potentially very valuable. But I obviously haven’t personally experienced the camps, or even spoken to anyone who has.
### Research
Majha et al.'s [Categorizing Wireheading in Partially Embedded Agents](https://arxiv.org/abs/1906.09136) discusses the wireheading problem for agents who can mess with their reward channel or beliefs. They model this using causal influence diagrams, suggest a possible solution (making rewards a function of world-beliefs, not observations) and show that this does not work using very simple gridworld AIXIjs implementations. #AI\_Theory
Kovarik et al.'s [AI Safety Debate and Its Applications](https://www.lesswrong.com/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications) discusses using adversarial Debate between AIs as a method for alignment. It provides a very accessible introduction to Debating AIs, and implements some extensions to the practical MNIST work from the [original paper](https://arxiv.org/pdf/1805.00899.pdf). #Amplification
Mancuso et al.'s [Detecting Spiky Corruption in Markov Decision Processes](https://arxiv.org/abs/1907.00452) suggests that we can address corrupted reward signals for RL by removing 'spiky' rewards. This is an attempt to get around impossibility results by identifying a subclass where they don't hold. I can see this being useful in some cases like reward tampering, where the reward from fiddling with $AGENT\_UTILITY is likely to be very spiky. However if [human values are fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) then it seems plausible that the 'True' reward signal should also be spiky. #ML\_safety
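As a rough illustration of the core idea - and this is my own toy sketch, not the authors' algorithm - if corrupted rewards are assumed to be large outliers, an agent can simply discard observations far from the typical reward before learning from them:

```python
import statistics

# Discard 'spiky' (possibly corrupted) rewards before learning from
# them. The 3-MAD threshold is arbitrary and purely illustrative.

def filter_spiky(rewards, k=3.0):
    med = statistics.median(rewards)
    mad = statistics.median(abs(r - med) for r in rewards) or 1.0
    return [r for r in rewards if abs(r - med) <= k * mad]

observed = [1.0, 0.9, 1.1, 1.0, 100.0, 0.95]  # 100.0 looks like tampering
print(filter_spiky(observed))  # -> [1.0, 0.9, 1.1, 1.0, 0.95]
```

The fragility worry in the paragraph above is then easy to see: if the 'True' reward really does contain rare, huge-magnitude events, this filter throws them away too.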
Perry & Uuk's [AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk](https://www.mdpi.com/2504-2289/3/2/26/pdf) introduces the field of AI governance, and discusses issues about how policy is implemented in practice, like the existence of windows in time for institutional change. #Politics
### Finances
Their [website](https://aisafetycamp.com/support-us/) suggests they are seeking donations, but they did not reply when I enquired with the ‘contact us’ email.
They are run by volunteers, and were funded [by the LTFF](https://app.effectivealtruism.org/funds/far-future/payouts/4UBI3Q0TBGbWcIZWCh4EQV).
If you want to donate the web page is [here](https://aisafetycamp.com/support-us/).
FLI: The Future of Life Institute
---------------------------------
FLI is a Boston-based independent existential risk organization, focusing on outreach, founded in large part to help organise the regranting of $10m from Elon Musk.
They have a podcast on AI Alignment [here](https://futureoflife.org/ai-alignment-podcast/), and ran the [Beneficial AI conference](https://futureoflife.org/beneficial-agi-2019/) in January.
One of their big projects this year has been promoting the stigmatisation of, and ultimately the banning of, Lethal Autonomous Weapons. As well as possibly being good for its own sake, this might help build institutional capacity to ban potentially dangerous technologies that transfer autonomy away from humans. You can read their statement on the subject to the UN [here](https://futureoflife.org/2019-statement-to-the-united-nations-in-support-of-a-ban-on-laws/). On the other hand, the desirability of this policy is not uncontroversial – see for example Bogosian’s [On AI Weapons](https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons). There is also lengthy discussion by Sterbenz and Trager [here](https://aipulse.org/autonomous-weapons-and-coercive-threats/).
Krakovna's [ICLR Safe ML Workshop Report](https://futureoflife.org/2019/06/18/iclr-safe-ml-workshop-report/) summarises the results from a workshop on safety that Victoria co-ran at ICLR. You can see a list of all the papers [here](https://sites.google.com/view/safeml-iclr2019/accepted-papers). #ML\_safety
AIImpacts
---------
AIImpacts is a Berkeley based AI Strategy organisation founded in 2014 by Katja Grace. They are affiliated with (a project of, with independent financing from) MIRI. They do various pieces of strategic background work, especially on AI Timelines - it seems their previous work on the relative rarity of discontinuous progress has been relatively influential. Their research can be found [here](https://aiimpacts.org/archive/).
### Research
Katja impressed upon me that most of their work this year went into as-yet-unpublished work, but this is what is public:
Long & Davis's [Conversation with Ernie Davis](https://aiimpacts.org/conversation-with-ernie-davis/) is an interview transcript with Davis, an NYU computer science professor who is an AI risk sceptic. Unfortunately I didn't think they quite got into the heart of the disagreement - they seem to work out that the crux is how much power superior intelligence gives you, but then move on. #Forecasting
Long & Bergal's [Evidence against current methods leading to human level artificial intelligence](https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/) lists a variety of arguments for why current AI techniques are insufficient for AGI. It's basically a list of 'things AI might need that we don't have yet', a lot of which come from Marcus's Critical Appraisal. #Forecasting
Korzekwa's [The unexpected difficulty of comparing AlphaStar to humans](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/) analyses AlphaStar's performance against human StarCraft players. It convincingly, in my inexpert judgement, argues that the 'unfair' advantages of AlphaStar - like the clicks-per-minute rate, and lack of visibility restrictions - were significant contributors to AlphaStar's success. As such, on an apples-to-apples basis it seems that humans have not yet been defeated at StarCraft. #Misc
AI Impacts's [Historical Economic Growth Trends](https://aiimpacts.org/historical-growth-trends/) argues that historically economic growth has been super-linear in population size. As such we should expect accelerating growth 'by default' - "Extrapolating this model implies that at a time when the economy is growing 1% per year, growth will diverge to infinity after about 200 years". This is very interesting to me as it contradicts what I suggested [here](https://forum.effectivealtruism.org/posts/knJJvp5JGGSdy6ocr/assumptions-about-the-far-future-and-cause-priority#Dhghj4uwzzXfjueZJ). Notably growth has slowed since 1950, perhaps for anthropic reasons. #Forecasting
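The mathematics behind the divergence claim is easy to check: if growth is super-linear in the level of output - say dY/dt = a·Y^(1+b) with b > 0 - then the closed-form solution reaches infinity in finite time, at t\* = 1/(g·b), where g is the current growth rate. A quick sketch of my own (b = 0.5 is chosen purely to reproduce the 200-year illustration, not taken from their report):

```python
# With dY/dt = a * Y**(1 + b), the solution blows up at
# t* = 1 / (a * b * Y0**b) = 1 / (g * b), where g = a * Y0**b is the
# current growth rate.

def blowup_years(current_growth_rate, b):
    return 1.0 / (current_growth_rate * b)

print(blowup_years(0.01, 0.5))  # 1% growth, b = 0.5 -> 200.0 years
```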
AI Impacts's [AI Conference Attendance](https://aiimpacts.org/ai-conference-attendance/) plots attendance at the major AI conferences over time to show the recent rapid growth in the field using a relatively stable measure. #Forecasting
### Finances
They spent $316,398 in 2019, and plan to spend around $325,000 in 2020. They have around $269,590 in cash and pledged funding, suggesting (on a very naïve calculation) around 0.8 years of runway.
In the past they have received support from EA organisations like OpenPhil and FHI.
MIRI administers their finances on their behalf; donations can be made [here](https://aiimpacts.org/donate/).
GPI: The Global Priorities Institute
------------------------------------
GPI is an Oxford-based Academic Priorities Research organisation founded in 2018 by Hilary Greaves and part of Oxford University. They do work on philosophical issues likely to be very important for global prioritisation, much of which is, in my opinion, relevant to AI Alignment work. Their research can be found [here](https://globalprioritiesinstitute.org/papers/).
### Research
MacAskill (article) & Demski (extensive comments)'s [A Critique of Functional Decision Theory](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory) gives some criticisms of FDT. He makes a variety of arguments, though I generally found them unconvincing. For example, the 'Bomb' example seemed to be basically question-begging on Newcomb's problem, and his Scots vs English example (where Scottish people choose to one-box because of their ancestral memory of the Darien scheme) seems to me to be a case of people not actually employing FDT at all. And some of his arguments - like that it is too complicated for humans to actually calculate - seem like the same arguments he would reject as criticisms of utilitarianism, and not relevant to someone working on AGI. I listed this as co-written by Abram Demski because he is acknowledged in the post, and his comments at the bottom are as detailed and worthy as the main post itself, and I recommend reading the two together. Researchers from MIRI were also named authors on the paper. #Decision\_Theory
Greaves & Cotton-Barratt's [A bargaining-theoretic approach to moral uncertainty](https://globalprioritiesinstitute.org/wp-content/uploads/2019/Greaves_Cotton-Barratt_bargaining_theoretic_approach.pdf) lays out formalism and discusses using Nash Equilibrium between 'negotiating' moral values as an alternative approach to moral uncertainty. It discusses some subtle points about the selection of the BATNA outcome. One interesting section was on small vs grand worlds - whether splitting the world up into sub-dilemmas made a difference. For expected-value type approaches the answer is no, but for negotiating strategies the answer is yes, because the different moral theories might trade so as to influence the dilemmas that mattered most to them. This reminded me of an argument from Wei Dai that agents who cared about total value, finding themselves in a small world, might acausally trade with average value agents in large worlds. Presumably a practical implication might be that EAs should adhere to conventional moral standards with even higher than usual moral fidelity, in exchange for shutting up and multiplying on EA issues. The paper also makes interesting points about the fanaticism objection and the difference between moral and empirical risk. Researchers from FHI were also named authors on the paper. #Decision\_Theory
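A minimal sketch of the bargaining idea (my own toy illustration - the options, utilities and disagreement points are invented, and the paper's formalism is considerably richer): each moral theory scores the available options, and the chosen option maximises the product of each theory's gains over its BATNA.

```python
# Toy Nash-bargaining choice between two moral theories. All numbers
# are made up for illustration.

options = {
    "fund_bednets": {"totalist": 5.0, "person_affecting": 4.0},
    "fund_xrisk":   {"totalist": 9.0, "person_affecting": 2.0},
    "do_nothing":   {"totalist": 1.0, "person_affecting": 1.0},
}
batna = {"totalist": 1.0, "person_affecting": 1.0}  # disagreement point

def nash_product(utilities):
    prod = 1.0
    for theory, u in utilities.items():
        prod *= max(u - batna[theory], 0.0)
    return prod

print(max(options, key=lambda o: nash_product(options[o])))
# -> fund_bednets: gains (4, 3) give product 12, beating (8, 1) = 8
```

Note how the compromise option wins even though one theory prefers another option much more strongly - this is the sense in which bargaining differs from simple expected-value aggregation across theories.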
MacAskill et al.'s [The Evidentialist's Wager](https://globalprioritiesinstitute.org/the-evidentialists-wager/) argues that Decision-Theoretic uncertainty in a large universe favours EDT over CDT. This is because your decision only has local causal implications, but global evidential implications. The article then goes into detail motivating the idea and discussing various complications and objections. It seems to push EDT in an FDT-direction, though presumably they still diverge on smoking lesion questions. Researchers from FHI, FRI were also named authors on the paper. #Decision\_Theory
Mogensen's [‘The only ethical argument for positive 𝛿 ’?](https://globalprioritiesinstitute.org/andreas-mogensen-the-only-ethical-argument-for-positive-delta-2/) argues that positive pure time preference could be justified through agent-relative obligations. This was an interesting paper to me, and suggests some interesting (extremely speculative) questions - e.g. can we, by increasing our relatedness to our ancestors, acausally influence them into treating us better? #Philosophy
Mogensen's [Doomsday rings twice](https://globalprioritiesinstitute.org/andreas-mogensen-doomsday-rings-twice/) attempts to salvage the Doomsday argument by suggesting we should update using SSA twice. He argues the second such update - on the fact that the present-day seems unusually influential - cannot be 'cancelled out' by SIA. #Philosophy
### Finances
They spent £600,000 in 2018/2019 (academic year), and plan to spend around £1,400,000 in 2019/2020. They suggested that as part of Oxford University ‘cash on hand’ or ‘runway’ were not really meaningful concepts for them, as they need to fully-fund all employees for multiple years.
If you want to donate to GPI, you can do so [here](https://www.development.ox.ac.uk/make-a-gift?id=097fe528-2916-4d28-bb84-a65bfdf5b275).
FRI: The Foundational Research Institute
----------------------------------------
FRI is a London (previously Germany) based Existential Risk Research organisation founded in 2013 and currently led by Stefan Torges and Jonas Vollmer. They are part of the Effective Altruism Foundation (EAF) and do research on a number of fundamental long-term issues, some related to how to reduce the risks of very bad AGI outcomes.
In general they adopt what they refer to as ‘suffering-focused’ ethics, which I think is a quite misguided view. However, they seem to have approached this thoughtfully.
Apparently this year they are more focused on research, vs movement-building and donation-raising in previous years.
### Research
FRI researchers were not lead authors on any work directly relevant to AI Alignment (unlike last year, when they had four papers).
FRI researchers contributed to the following research led by other organisations:
* MacAskill et al.'s [The Evidentialist's Wager](https://globalprioritiesinstitute.org/the-evidentialists-wager/)
### Finances
EAF (of which they are a part) spent $836,622 in 2018 and $1,125,000 in 2019, and plan to spend around $995,000 in 2020. They have around $1,430,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.4 years of runway.
According to [their website](https://foundational-research.org/transparency), their finances are not separated from those of the EAF, and it is not possible to ear-mark donations. In the past this has made me worry about fungibility; donations funding other EAF work. However apparently EAF basically doesn’t do anything other than FRI now.
If you wanted to donate to FRI, you could do so [here](https://foundational-research.org/donate/).
Median Group
------------
Median is a Berkeley based independent AI Strategy organisation founded in 2018 by Jessica Taylor, Bryce Hidysmith, Jack Gallagher, Ben Hoffman, Colleen McKenzie, and Baeo Maltinsky. They do research on various risks, including AI timelines. Their research can be found [here](http://mediangroup.org/archives).
### Research
Maltinsky et al.'s [Feasibility of Training an AGI using Deep RL: A Very Rough Estimate](http://mediangroup.org/docs/Feasibility%20of%20Training%20an%20AGI%20using%20Deep%20Reinforcement%20Learning,%20A%20Very%20Rough%20Estimate.pdf) builds a model for how plausible one method of achieving AGI is. The theory is that you could basically simulate a bunch of people and have them work on the problem. Their model suggests this is not a credible way of producing AGI in the near term. I like the way they included their code in the actual report. #Forecasting
Taylor et al.'s [Revisiting the Insights model](http://mediangroup.org/insights2.html) improved their Insights model from last year. If you recall, this basically used a Pareto distribution for how many genius insights were required to get us to AGI. #Forecasting
The following was written by Jessica but not as an official Median piece:
Taylor's [The AI Timelines Scam](https://unstableontology.com/2019/07/11/the-ai-timelines-scam/) argues that there are systematic biases that lead people to exaggerate how short AI timelines are. One is that people who espouse short timelines tend to also argue for some amount of secrecy due to [Infohazards](https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12718), which makes their work hard for outsiders to audit. A second is that capital allocators tend to fund those who dream BIG, leading to systematic exaggeration of your field's potential. I think both are reasonable points, but I think she is too quick to use the term 'scam' - as in [Scott's Against Lie Inflation](https://slatestarcodex.com/2019/07/16/against-lie-inflation/). Specifically, while it is true that secrecy is a great cover for mediocrity, it is unfortunately also exactly what a morally virtuous agent would have to do in the presence of infohazards. Indeed, such people might be artificially limited in what they can say, making short time horizons appear artificially devoid of credible arguments. I am more sympathetic to her second argument, but even there to the extent that 1) fields select for people who believe in them and 2) people believe what is useful for them to believe I think it is a bit harsh to call it a 'scam'. #Forecasting
### Finances
They spent ~$0 in 2018 and 2019, and plan to spend upwards of $170,000 in 2020. They have around $170,000 in cash and pledged funding, suggesting (on a very naïve calculation) under 1 year of runway.
Median doesn’t seem to be soliciting donations from the general public at this time.
CSET: The Center for Security and Emerging Technology
-----------------------------------------------------
CSET is a Washington based Think Tank founded in 2019 by Jason Matheny (ex IARPA), affiliated with Georgetown University. They analyse new technologies for their security implications and provide advice to the US government. At the moment they are mainly focused on near-term AI issues. Their research can be found [here](https://cset.georgetown.edu/research/).
As they apparently launched with [$55m from the Open Philanthropy Project](https://www.georgetowntech.org/news-fullposts/2019/2/27/february-27-2019-georgetown-launches-new-55-million-center-on-security-amp-emerging-technologies), and subsequently raised money from the [Hewlett Foundation](https://hewlett.org/grants/georgetown-university-for-the-cyberai-program/), I am assuming they do not need more donations at this time.
Leverhulme Center for the Future of Intelligence
------------------------------------------------
Leverhulme is a Cambridge based Research organisation founded in 2015 and currently led by Stephen Cave. They are affiliated with Cambridge University and closely linked to CSER. They do work on a variety of AI related causes, mainly on near-term issues but also some long-term. You can find their publications [here](http://lcfi.ac.uk/resources/).
### Research
Leverhulme-affiliated researchers produced work on a variety of topics; I have only here summarised that which seemed the most relevant.
Hernandez-Orallo et al.'s [Surveying Safety-relevant AI Characteristics](http://ceur-ws.org/Vol-2301/paper_22.pdf) provides a summary of the properties of AI systems that are relevant for safety. This includes both innate properties of the system (like ability to self-modify or influence its reward signal) and of the environment. Some of these characteristics are relatively well-established in the literature, but others seemed relatively new (to me at least). A few but not most seemed only really relevant to near-term safety issues (like the need for spare batteries). Researchers from CSER, Leverhulme were also named authors on the paper. #Overview
Cave & Ó hÉigeartaigh's [Bridging near- and long-term concerns about AI](https://www.nature.com/articles/s42256-018-0003-2) attempt to unify short-term and long-term AI risk concerns. For example, they argue that solving short-term issues can help with long-term ones, and that long-term issues will eventually become short-term issues. However, I am inclined to agree with the review [here](https://forum.effectivealtruism.org/posts/an9GrNXrdMwBJpHeC/long-term-future-fund-august-2019-grant-recommendations-1#Addendum__Thoughts_on_a_Strategy_Article_by_the_Leadership_of_Leverhulme_CFI_and_CSER) by Habryka that a lot of the work here is being done by categorising unemployment and autonomous vehicles as long-term, and then arguing that they share many features with short-term issues. I agree that they have a lot in common; however this seems to be because unemployment and cars are also short-term issues - or short-term non-issues in my mind. The paper does not present a compelling argument for why short-term issues have a lot in common with existential risk work, which is what we care about. But perhaps this is being too harsh, and the paper is better understood performatively; it is not attempting to argue that the two camps are naturally allied, but rather attempting to make them allies. Researchers from CSER, Leverhulme were also named authors on the paper. #Strategy
Whittlestone et al.'s [The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions](http://lcfi.ac.uk/media/uploads/files/AIES-19_paper_188_Whittlestone_Nyrup_Alexandrova_Cave_OcF7jnp.pdf) points out that many of the 'values' that laypeople say AI systems should observe, like 'fairness', are frequently in conflict. This is certainly a big improvement over the typical article on the subject. #Shortterm
Leverhulme researchers contributed to the following research led by other organisations:
* Ovadya & Whittlestone's [Reducing Malicious Use of Synthetic Media Research: Considerations and Potential Release Practices for Machine Learning](https://arxiv.org/abs/1907.11274)
BERI: The Berkeley Existential Risk Initiative
----------------------------------------------
BERI is a Berkeley-based independent Xrisk organisation, founded and led by Andrew Critch. They provide support to various university-affiliated (FHI, CSER, CHAI) existential risk groups to facilitate activities (like hiring engineers and assistants) that would be hard within the university context, alongside other activities - see their [FAQ](http://existence.org/faq) for more details.
### Grants
BERI used to [run a grant-making program](http://existence.org/2017/09/25/announcing-beris-first-grants-program.html) where they helped Jaan Tallinn allocate money to Xrisk causes. Midway through this year, BERI decided to hand this off to [the Survival and Flourishing Fund](http://existence.org/tallinn-grants-future/), a donor-advised fund currently advised by the same team who run BERI.
In this time period (December 2018-November 2019) [BERI granted $1,615,933](http://existence.org/grants/), mainly to large Xrisk organisations. The largest single grant was $600,000 to MIRI.
### Research
A number of papers we reviewed this year were supported by BERI, for example:
* Turner et al.'s [Conservative Agency](https://arxiv.org/abs/1902.09725)
* O'Keefe's [Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission](https://www.fhi.ox.ac.uk/wp-content/uploads/Stable-Agreements.pdf)
* Cihon's [Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development](https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf)
Because this support tended not to be mentioned on the front page of the article (unlike direct affiliation) it is quite possible that I missed other papers they supported also.
### Finances
BERI have told me they are not seeking public support at this time. If you wanted to donate anyway their donate page is [here](http://existence.org/donating/).
AI Pulse
--------
The Program on Understanding Law, Science, and Evidence ([PULSE](https://www.law.ucla.edu/centers/interdisciplinary-studies/pulse/about/)) is part of the UCLA School of Law, and contains a group working on AI policy. They were founded in 2017 with a [$1.5m grant from OpenPhil](https://law.ucla.edu/news-and-events/in-the-news/2017/06/artificial-intelligence-and-the-law-15-million-grant-launches-research-project/).
Their website lists a few pieces of research, generally on more near-term AI policy issues. A quick read suggested they were generally fairly well done. However, they don’t seem to have uploaded anything since February.
### Research
Sterbenz & Trager's [Autonomous Weapons and Coercive Threats](https://aipulse.org/autonomous-weapons-and-coercive-threats/) discusses the impact of Lethal Autonomous Weapons on diplomacy. #Shortterm
Grotto's [Genetically Modified Organisms: A Precautionary Tale for AI Governance](https://aipulse.org/genetically-modified-organisms-a-precautionary-tale-for-ai-governance-2/) discusses the history of GMO regulation in the US and EU. He brings up some interesting points about the highly contingent history behind the different approaches taken. However, I am somewhat sceptical GMOs are that good a comparison, given their fundamentally different nature. #Strategy
Other Research
--------------
I would like to emphasize that there is a lot of research I didn't have time to review, especially in this section, as I focused on reading organisation-donation-relevant pieces. So please do not consider it an insult that your work was overlooked!
Naude & Dimitri's [The race for an artificial general intelligence: implications for public policy](https://link.springer.com/article/10.1007%2Fs00146-019-00887-x) extends the model in [Racing to the Precipice](https://link.springer.com/article/10.1007%2Fs00146-015-0590-y) (Armstrong et al.). After a lengthy introduction to AI alignment, they make a formal model, concluding that a winner-take-all contest will have very few teams competing (which is good). Interestingly, if the teams are concerned about cost minimisation this result no longer holds, as the 'best' team might not invest 100%, so the second-best team still has a chance; but the presence of intermediate prizes is positive, as they incentivise more investment. They suggest public procurement to steer AI development in a safe direction, and an unsafety-tax. (As a very minor aside, I was a little surprised to see the [AIImpacts survey](https://aiimpacts.org/predictions-of-human-level-ai-timelines/) cited as a source for expected Singularity timing, given that it does not mention the word.) Overall I thought this was an excellent paper. #Strategy
Steinhardt's [AI Alignment Research Overview](https://rohinshah.us18.list-manage.com/track/click?u=1d1821210cc4f04d1e05c4fa6&id=1a148ef72c&e=1e228e7079) provides a detailed account of the different components of AI Alignment work. I think this probably takes over from Amodei et al.'s Concrete Problems (on which Jacob was a co-author) as my favourite introduction to technical work, for helping new researchers locate themselves, with the one proviso that it is only in Google Docs form at the moment. He provides a useful taxonomy, goes into significant detail on the different problems, and suggests possible avenues of attack. The only area that struck me as a little light was on some of the MIRI-style agent foundations issues. Overall I thought this was an excellent paper. #Overview
Piper's [The case for taking AI seriously as a threat to humanity](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) is an introduction to AI safety for Vox readers. In my opinion it is the best non-technical introduction to the issue I have seen. It has become my go-to for linking people and reading groups. The article does a good job introducing the issues in a persuasive and common-sense way without much loss of fidelity. My only gripe is the article unquestioningly repeats an argument about criminal justice 'discrimination' which has, in my opinion, been debunked (see [here](https://forum.effectivealtruism.org/posts/YhpGMt3S7ivDvxi8Q/the-case-for-taking-ai-seriously-as-a-threat-to-humanity#2z7QeiCdygcpZh4b8) and the Washington Post article linked at the bottom), but perhaps this is a necessary concession when writing for Vox, and is only a very small part of the article. Overall I thought this was an excellent paper. #Introduction
Cohen et al.'s [Asymptotically Unambitious Artificial General Intelligence](https://arxiv.org/abs/1905.12186) ambitiously aims to provide an aligned AI algorithm. They do this by basically using an extremely myopic form of boxed oracle AIXI, that doesn't care about any rewards after the box has been opened - so all it cares about is getting rewards for answering the question well inside the box. It is indifferent to what the human does with the reward once outside the box. This assumes the AIXI cannot influence the world without detectably opening the box. This also aims to avoid the reward-hacking problems of AIXI. You might also enjoy the comments [here](https://www.alignmentforum.org/posts/pZhDWxDmwzuSwLjou/asymptotically-benign-agi#comments). #AI\_Theory
Snyder-Beattie et al.'s [An upper bound for the background rate of human extinction](https://www.nature.com/articles/s41598-019-47540-7) uses a Laplace's law of succession-style approach to bound non-anthropogenic Xrisk. Given how long mankind has survived so far, they conclude that this is extremely unlikely to be greater than 1/14000, and probably much lower. Notably, they argue that these estimates are *not* significantly biased by anthropic issues, because high base extinction rates mean lucky human observers would be clustered in worlds where civilisation also developed very quickly, and hence also observe short histories. Obviously they can only provide an upper bound using such methods, so I see the paper as mainly providing evidence we should instead focus on anthropogenic risks, for which no such bound can exist. Researchers from FHI were also named authors on the paper. #Forecasting
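The headline number is easy to roughly reconstruct (my sketch below; the paper's actual derivation is more careful and considers several track records and confidence levels). If the annual natural-extinction probability were p, the likelihood of our observed ~200,000 years of survival is (1 - p)^200,000; requiring that likelihood to exceed one-in-a-million pins p below roughly 1/14,000:

```python
# Solve (1 - p)**years >= likelihood_floor for the largest allowed p.
years = 200_000
likelihood_floor = 1e-6  # an illustrative threshold, not the paper's exact choice

p_max = 1 - likelihood_floor ** (1 / years)
print(f"p_max is about 1/{1 / p_max:,.0f}")  # -> p_max is about 1/14,477
```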
Dai's [Problems in AI Alignment that philosophers could potentially contribute to](https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially) provides a list of open philosophical questions that matter for AI safety, suitable for philosophers who want to work on the cause without maths backgrounds. This seems useful insomuch as there are people capable of working on many different philosophical issues and willing to be redirected to more useful ones, and could be very valuable if brought to the notice of the relevant people. #Overview
Dai's [Two Neglected Problems in Human-AI Safety](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety) discusses two danger modes for otherwise benign-seeming approval-orientated AIs. I thought this was good as it identifies a potentially very 'sneaky' way in which human value might be lost, at the hands of agents which otherwise appeared extremely corrigible etc. #Forecasting
Agrawal et al.'s [Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decision-Making](https://arxiv.org/abs/1910.07581) suggests that, in cases with large amounts of data plus noise, human-interpretable models could be evaluated relative to ML predictions rather than the underlying data directly. In particular, they do this with the big Moral Machine dataset, comparing simple human-interpretable rules (like humans are worth more than animals, or criminals are worth less) with their NN. This suggests a multi-step program for friendliness: 1) gather data 2) train ML on data 3) evaluate simple human-evaluable rules on ML 4) have humans evaluate these rules. Researchers from CHAI were also named authors on the paper. #Ethical\_Theory
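A toy sketch of the pipeline (my own, with synthetic data rather than the Moral Machine dataset): fit a flexible model to noisy labels, then score a candidate human-interpretable rule against the model's predictions instead of against the raw labels, which strips the label noise out of the comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: the 'true' function is smooth, the labels are
# noisy, and the simple candidate rule is sign(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y_noisy = np.sin(X[:, 0]) + rng.normal(scale=0.8, size=2000)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y_noisy)

simple_rule = np.sign(X[:, 0])  # 'x positive => output positive'

print("rule vs noisy labels:", np.mean((simple_rule - y_noisy) ** 2))
print("rule vs model preds: ", np.mean((simple_rule - model.predict(X)) ** 2))
```

Evaluating the rule against the model's predictions removes the irreducible noise term from the comparison, so simple rules can be ranked much more reliably than they could be against the raw data.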
Krueger et al.'s [Misleading Meta-Objectives and Hidden Incentives for Distributional Shift](https://drive.google.com/uc?export=download&id=1k93292JCoIHU0h6xVO3qmeRwLyOSlS4o) discusses the danger of RL agents being incentivised to induce distributional shift. This is in contrast to what I think of as the 'standard' worry about distributional shift, namely that it arises as a side effect of increasing agent optimisation power. They then introduce a model to demonstrate this behaviour, but I had a little trouble understanding exactly how this bit was meant to work. Researchers from DeepMind were also named authors on the paper. #ML\_safety
Zhang & Dafoe's [Artificial Intelligence: American Attitudes and Trends](https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/) surveys the views of ordinary people about AI. They used YouGov, who I generally regard as one of the best polling agencies. The survey did a good job of showing that the general public is generally very ignorant and susceptible to framing effects. Respondents basically thought that every potential AI 'problem' was roughly equally important. When reading this I think it is worth keeping the general literature on voter irrationality in mind - e.g. Bryan Caplan's [The Myth of the Rational Voter](https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737) or Scott’s [Noisy Poll Results and Reptilian Muslim Climatologists from Mars](https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/). Researchers from FHI were also named authors on the paper. #Politics
Cottier & Shah's [Clarifying some key hypotheses in AI alignment](https://www.lesswrong.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) is a map of the connections between different ideas in AI safety. Researchers from CHAI were also named authors on the paper. #Overview
Ovadya & Whittlestone's [Reducing Malicious Use of Synthetic Media Research: Considerations and Potential Release Practices for Machine Learning](https://arxiv.org/abs/1907.11274) discusses various ways of improving the safety of ML research release. While synth media is the titular subject, most of it is more general, with fairly detailed descriptions of various strategies. While I don't think synth media is very important, it could be useful for building norms in ML that would apply to AGI work also. The paper discusses bioethics at length, e.g. how they use IRBs. My personal impression of IRBs is they are largely pointless and have little to do with ethics, functioning mainly to slow things down and tick boxes, but then again that might be desirable for AI research! Researchers from CSER, Leverhulme were also named authors on the paper. #Security
Schwarz's [On Functional Decision Theory](https://www.umsu.de/blog/2018/688) is a blog post by one of the philosophers who reviewed Eliezer and Nate's paper on FDT. It explains his objections, and why the paper was rejected from the philosophy journal he was a reviewer for. The key thing I took away from it was that MIRI did not do a good job of locating their work within the broader literature - for example, he argues that FDT seems like it might actually be a special case of CDT as construed by some philosophers, which E&N should have addressed, and elsewhere he suggests E&N's criticisms of CDT and EDT present strawmen. He also made some interesting points, for example that it seems 'FDT will sometimes recommend choosing a particular act because of the advantages of choosing a different act in a different kind of decision problem'. However most of the substantive criticisms were not very persuasive to me. Some seemed to almost beg the question, and at other times he essentially faulted FDT for addressing directly issues which *any* decision theory will ultimately have to address, like logical counterfactuals, or what is a 'Fair' scenario. He also presented a scenario, 'Procreation', as an intended *Reductio* of FDT that actually seems to me like a scenario where FDT works better than CDT does. #Decision\_Theory
LeCun et al.'s [Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More](https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell) was a public debate on Facebook between major figures in AI on the AI safety issue. Many of these have been prominently dismissive in the past, so this was good to see. Unfortunately a lot of the debate was not at a very high level. It seemed that the sceptics generally agreed it was important to work on AI safety, just that this work was likely to happen by default. #Misc
Walsh's [End Times: A Brief Guide to the End of the World](https://smile.amazon.com/End-Times-Brief-Guide-World-ebook/dp/B07J52NW99/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=) is a popular science book on existential risk. AI risk is one of the seven issues addressed, in an extended and well-researched chapter. While I might quibble with one or two points, overall I thought this was a good introduction. The main qualifier for your opinion here is how valuable you think outreach to the educated layman is. #Introduction
Szlam et al.'s [Why Build an Assistant in Minecraft?](https://arxiv.org/abs/1907.09273) suggests a research program for building an intelligent assistant in Minecraft. The program doesn't appear to be directly motivated by AI alignment, but it does seem unusual in the degree to which alignment-type issues would have to be solved for it to succeed - thereby hopefully incentivising mainstream ML researchers to work on them. In particular, they want the agent to be able to work out 'what you wanted' from a natural language text channel, which is clearly linked to the Value Alignment problem, and similar issues like the higher optimisation power of the agent are likely to occur. The idea that the agent should be 'fun' is also potentially relevant! The authors also released an environment to make building these assistants easier. #Misc
Kumar et al.'s [Failure Modes in Machine Learning](https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning) is a Microsoft document discussing a variety of ways ML systems can go wrong. It includes both intentional (e.g. hacking) and unintentional (e.g. the sort of thing we worry about). #Misc
Sevilla & Moreno's [Implications of Quantum Computing for Artificial Intelligence Alignment Research](https://arxiv.org/abs/1908.07613) examines whether Quantum Computing would be useful for AI Alignment. They consider three relevant properties of QC and several approaches to AI Alignment, and conclude that QC is not especially relevant. #Forecasting
Collins's [Principles for the Application of Human Intelligence](https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/) analyses the problems of biased and non-transparent decision making by natural intelligence systems. #Shortterm
Capital Allocators
==================
One of my goals with this document is to help donors make an informed choice between the different organisations. However, it is quite possible that you regard this as too difficult, and wish instead to donate to someone else who will allocate on your behalf. This is of course much easier; now instead of having to solve the *Organisation Evaluation Problem*, all you need to do is solve the dramatically simpler *Organisation Evaluator Organisation Evaluation Problem*.
A [helpful map](https://forum.effectivealtruism.org/posts/hbSvxj7KuS7k4Ty6N/funding-chains-in-the-x-risk-ai-safety-ecosystem) from Issa Rice shows how at the moment the community has only managed to achieve delegative funding chains 6 links long. If you donate to Patrick Brinich-Langlois, we can make this chain significantly longer! In reality this is of course a rather misleading way of phrasing the issue, as for most of these organisations the ‘flow-through’ is a relatively small fraction. I do think it is valid to be concerned about sub-optimally high levels of intermediation, however, which if nothing else reduces donor control. This seems to me to be a weak argument against delegating donations.
LTFF: Long-term future fund
---------------------------
LTFF is a globally based EA grantmaking organisation founded in 2017, currently led by Matt Wage and affiliated with CEA. They are one of four funds set up by CEA to allow individual donors to benefit from specialised capital allocators; this one focuses on long-term future issues, including a large focus on AI Alignment. Their website is [here](https://app.effectivealtruism.org/funds/far-future). There are write-ups for their first two grant rounds in 2019 [here](https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl) and [here](https://app.effectivealtruism.org/funds/far-future/payouts/4UBI3Q0TBGbWcIZWCh4EQV), and comments [here](https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions) and [here](https://forum.effectivealtruism.org/posts/an9GrNXrdMwBJpHeC/long-term-future-fund-august-2019-grant-recommendations-1). Apparently they have done another $400,000 round since then but the details are not yet public.
In the past I have been sceptical of the fund, as it was run by someone who already had access to far more capital (OpenPhil), and the grants were both infrequent and relatively conservative – giving to large organisations that individual donors are perfectly capable of evaluating themselves. Over the last year, however, things have significantly changed. The fund is now run by four people, and the grants have been to a much wider variety of causes, many of which would simply not be accessible to individual donors.
The fund managers are:
* Matt Wage
* Helen Toner
* Oliver Habryka
* Alex Zhu
Oliver Habryka especially has been admirably open, providing lengthy write-ups of his thoughts on the different grants, and I respect his commitment to intellectual integrity (you might enjoy his comments [here](https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality)). I am less familiar with the other fund managers. All the managers are, to my knowledge, unpaid.
In general most of the grants seem at least plausibly valuable to me, and many seemed quite good indeed. As there is extensive discussion in the links above I shan't discuss my opinions of individual grants in detail.
I attempted to classify the recommended grants (including those not accepted by CEA) by type and geography. Note that ‘training’ means paying an individual to self-study. I have deliberately omitted the exact percentages because this is an informal classification.

Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.
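For readers who want to reproduce this kind of informal classification, a minimal sketch is below. The categories and amounts are hypothetical placeholders, not the actual LTFF grant data:

```python
from collections import defaultdict

# Hypothetical (category, amount) pairs - not the real LTFF figures
grants = [
    ("Individual Research", 30_000),
    ("Organisation", 150_000),
    ("Training", 20_000),
    ("Event", 10_000),
    ("Organisation", 50_000),
]

# Tally amounts by category and print each category's share of the total
totals = defaultdict(int)
for category, amount in grants:
    totals[category] += amount

grand_total = sum(totals.values())
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {amount / grand_total:.0%}")
```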
I can understand why the fund managers gave over a quarter of the funds to major organisations – they thought these organisations were a good use of capital! However, to my mind this undermines the purpose of the fund. (Many) individual donors are perfectly capable of evaluating large organisations that publicly advertise for donations. In donating to the LTFF, I think (many) donors are hoping to fund smaller projects that they could not directly access themselves. As it is, such donors will probably have to consider such organisation allocations a mild ‘tax’ – to the extent that different large organisations are chosen than they would have picked themselves.
For a similar analysis, see Gaensbauer’s comment [here](https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM). I think his ‘counterfactually unique’ (73%) roughly maps onto my ‘non-organisation’.
CFAR, to which the fund managers recommended $300,000, was the largest single intended beneficiary, with just over 20% of the recommendations.

All grants have to be approved by CEA before they are made; historically they have approved almost all of them. In general I think these rejections improved the process. In every instance the rejected grants were subsequently funded by private donors anyway, but this does not seem to be a problem for donors to the LTFF, whose capital is protected. Notably this means the fund only paid out $150,000 to CFAR (10%), as the balance was made up by a private donor after CEA did not approve the second grant.
I was not impressed that one grant that saw harsh and accurate criticism on the forum after the first round was re-submitted for the second round. Ex post this didn’t matter, as CEA rejected it on substantive grounds the second time, but it makes me somewhat concerned about a risk of some of the capital going towards sinecures for people who are in the community, rather than being allocated on objective merit. But if CEA will consistently block this waste maybe this is not such a big issue, and the grant in question only represented 1.3% of the total for the year.
If you wish to donate to the LTFF you can do so [here](https://app.effectivealtruism.org/donations/new/allocation).
OpenPhil: The Open Philanthropy Project
---------------------------------------
The Open Philanthropy Project (separated from GiveWell in 2017) is an organisation dedicated to advising Cari and Dustin Moskovitz on how to give away over $15bn to a variety of causes, including existential risk. They have made extensive donations in this area, and probably represent both the largest pool of EA-aligned capital and the largest team of EA capital allocators.
They also recently [announced](https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo) they would be working with Ben Delo as well.
This year they implemented a [special committee](https://www.openphilanthropy.org/committee-effective-altruism-support) for determining grants to EA-related organisations.
### Grants
You can see their grants for AI Risk [here](https://www.openphilanthropy.org/giving/grants?field_focus_area_target_id_selective=532). It lists only four AI Risk grants in 2019, though I think their [$500k grant](https://www.openphilanthropy.org/focus/other-areas/european-summer-program-on-rationality-general-support) to ESPR (the European Summer Program on Rationality) should be considered an AI Risk relevant grant also:
* OpenPhil AI Fellowship: $2.3m ([write-up](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class))
* MIRI: $2.7m ([write-up](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019))
* CSET: $55m ([write-up](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology))
* BERI / CHAI: $250k ([write-up](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers))
In contrast there are 11 AI Risk grants listed for 2018, though the total dollar value is lower.
The OpenPhil AI Fellowship basically fully funds AI PhDs for students who want to work on the long term impacts of AI. One thing that I had misunderstood previously is that these fellowships are [*not* intended to be specific to AI safety](https://forum.effectivealtruism.org/posts/fbw7mg2CzBiHqRibr/ai-safety-scholarships-look-worth-funding-if-other-funding-2#awTcbYBQpPK9PHsQx), though presumably their recipients are more likely to work on safety than the average ML PhD student. They funded 7 scholarships in 2018 and 8 in 2019.
Due to a conflict of interest I cannot make any evaluation of their effectiveness.
### Research
Most of their research concerns their own granting, and, in an unusual failure of nominative determinism, is non-public except for the short write-ups linked above.
Zabel & Muehlhauser's [Information security careers for GCR reduction](https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction) argue that working in InfoSec could be a useful career for reducing Xrisks, especially AI and Bio. This is partly to help prevent AGI/synth bio knowledge falling into the hands of malicious hackers (though most ML research seems to be very open), and partly because the field teaches various skills that are useful for AI safety, both high-level like Eliezer's [Security Mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) and technical like crypto. They suggested that there was a shortage of such people willing to work on Xrisk right now, and perhaps in the future, due to lucrative alternative employment options. Researchers from Google Brain were also named authors on the paper. #Careers
### Finances
To my knowledge they are not currently soliciting donations from the general public, as they have a lot of money from Dustin and Cari, so incremental funding is less of a priority than for other organisations. They could be a good place to work however!
SFF: The Survival and Flourishing Fund
--------------------------------------
SFF is a donor advised fund, advised by the people who make up BERI’s Board of Directors. SFF was initially funded in 2019 by a grant of approximately $2 million from BERI, which in turn was funded by donations from philanthropist Jaan Tallinn.
### Grants
In its grantmaking SFF used an innovative allocation process to combine the views of many grant evaluators (described [here](http://survivalandflourishing.fund/sff-2019-q4)). SFF has run two grant rounds thus far. The [first](http://survivalandflourishing.fund/) ($880k in total) focused on large organisations:
* 80,000 Hours: $280k
* CFAR: $110k
* CSER: $40k
* FLI: $130k
* GCRI: $60k
* LessWrong: $260k
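As a quick sanity check, the listed amounts do indeed sum to the stated total; a couple of lines of Python, using the figures from the list above, confirm this:

```python
# SFF first-round grants, transcribed from the list above
grants = {"80,000 Hours": 280_000, "CFAR": 110_000, "CSER": 40_000,
          "FLI": 130_000, "GCRI": 60_000, "LessWrong": 260_000}
assert sum(grants.values()) == 880_000  # matches the stated $880k total
```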
The second round, requiring written applications, distributed money to a much wider variety of projects. The website lists 28 recipients, of which many but not all were AI relevant. The largest grant was for $300k to the [Longevity Research Institute](https://thelri.org/).
Due to a conflict of interest I cannot evaluate the effectiveness of their grantmaking.
Other News
==========
80,000 Hours's [AI/ML safety research job board](https://80000hours.org/job-board/ai-ml-safety-research/) collects various jobs that could be valuable for people interested in AI safety. At the time of writing it listed 35 positions, all of which seemed like good options that it would be valuable to have sensible people fill. I suspect most people looking for AI jobs would find some on here they hadn't heard of otherwise, though of course for any given person many will not be appropriate. They also have job boards for other EA causes. #Careers
Brown & Sandholm's [Superhuman AI for multiplayer poker](https://www.cs.cmu.edu/~noamb/papers/19-Science-Superhuman.pdf) present an AI that can beat professionals in no-limit Texas hold'em. My understanding was that this was seen as significantly harder than limit poker, so this represents something of a milestone. Unlike various Deepmind victories at classic games, this doesn't seem to have required much compute. #Misc
Chivers's [The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World](https://www.amazon.com/Does-Not-Hate-You-Superintelligence-ebook/dp/B07K258VCV) is a journalistic examination of the rationalist community and the existential risk argument. I confess I haven't actually read the book, and have very low expectations for journalists in this regard, though Chivers is generally very good, and by all accounts this is a very fair and informative book. I've heard people recommend it as an explainer to their parents. #Introduction
EU's [Ethics Guidelines for Trustworthy Artificial Intelligence](https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai) is a series of ethics guidelines for AI in the EU. They received input from many groups, including CSER and Jaan Tallinn. They are (at this time) optional guidelines, and presumably will not apply to UK AI companies like Deepmind after Brexit. The guidelines seemed largely focused on banal statements about non-discrimination etc.; I could not find any mention of existential risk in the guidelines. In general I am not optimistic about political solutions and this did not change my mind. #Politics
Kaufman's [Uber Self-Driving Crash](https://www.jefftk.com/p/uber-self-driving-crash) convincingly argues that Uber was grossly negligent when their car hit and killed Elaine Herzberg last year. #Shortterm
Schmidt et al.'s [National Security Commission on Artificial Intelligence Interim Report](https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view) surveys AI from a US defence perspective. It contains a few oblique references to AI risk. #Politics
Cummings's [On the referendum #31: Project Maven, procurement, lollapalooza results & nuclear/AGI safety](https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/) covers various important trends, including a sophisticated discussion of AGI safety. This is mainly noteworthy because the author is the mastermind of Brexit and the recent Conservative landslide in the UK, and perhaps the most influential man in the country as a result. #Strategy
Methodological Thoughts
=======================
Inside View vs Outside View
---------------------------
This document is written mainly, but not exclusively, using publicly available information. In the tradition of active management, I hope to synthesise many individually well-known facts into a whole which provides new and useful insight to readers. Advantages of this are that 1) it is relatively unbiased, compared to inside information, which invariably favours those you are close to socially, and 2) most of it is [legible](https://smile.amazon.com/Seeing-Like-State-Condition-Institution-ebook/dp/B00D8JJYWA/ref=sr_1_1?crid=24NCSS2E2V0FH&keywords=seeing+like+a+state&qid=1574382664&sprefix=seeing+like+a%2Caps%2C177&sr=8-1) and verifiable to readers. The disadvantage is that there are probably many pertinent facts that I am not party to! Wei Dai has written about how [much discussion now takes place in private Google documents](https://www.lesswrong.com/posts/hnvPCZ4Cx35miHkw3/why-is-so-much-discussion-happening-in-private-google-docs) – for example [this Drexler piece](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf?asd=sa) apparently; in most cases I do not have access to these. If you want the inside scoop I am not your guy; all I can supply is exterior scooping.
Many capital allocators in the Bay Area seem to operate under a sort of [Great Man](https://en.wikipedia.org/wiki/Great_man_theory) theory of investment, whereby the most important thing is to identify a guy who is really clever and ‘gets it’. I think there is some merit in this; however, I believe in it much less than they do. Perhaps as a result of my institutional investment background, I place a lot more weight on historical results. In particular, I worry that this approach leads to over-funding skilled rhetoricians and those the investor/donor is socially connected to.
Judging organisations on their historical output is naturally going to favour more mature organisations. A new startup, whose value all lies in the future, will be disadvantaged. However, I think that this is the correct approach for donors who are not tightly connected to the organisations in question. The newer the organisation, the more funding should come from people with close knowledge. As organisations mature, and have more easily verifiable signals of quality, their funding sources can transition to larger pools of less expert money. This is how it works for startups turning into public companies and I think the same model applies here. (I actually think that even those with close personal knowledge should use historical results more, to help overcome their biases.)
This judgement involves analysing a large number of papers relating to Xrisk that were produced during 2019. Hopefully the year-to-year volatility of output is sufficiently low that this is a reasonable metric; I have tried to indicate cases where this doesn’t apply. I also attempted to include papers from December 2018, to take into account the fact that I'm missing the last month's worth of output from 2019, but I can't be sure I did this successfully.
This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues, which I attempt to cover but only very cursorily.
We focus on papers, rather than outreach or other activities. This is partly because papers are much easier to measure – while there has been a large increase in interest in AI safety over the last year, it’s hard to work out who to credit for this – and partly because I think progress has to come from persuading AI researchers, which happens through technical outreach and publishing good work, not popular/political work.
Politics
--------
My impression is that policy on most subjects, especially those that are more technical than emotional, is generally made by the government and civil servants in consultation with, and being lobbied by, outside experts and interests. Without expert (e.g. top ML researchers in academia and industry) consensus, no useful policy will be enacted. Pushing directly for policy seems if anything likely to hinder expert consensus. Attempts to directly influence the government to regulate AI research seem very adversarial, and risk being pattern-matched to ignorant technophobic opposition to GM foods or other kinds of progress. We don't want the 'us-vs-them' situation that has occurred with climate change to happen here. AI researchers who are dismissive of safety law, regarding it as an imposition and encumbrance to be endured or evaded, will probably be harder to convince of the need to voluntarily be extra-safe - especially as the regulations may actually be totally ineffective.
The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves. Given this, I actually think policy outreach to the general population is probably negative in expectation.
If you’re interested in this, I’d recommend you read [this blog post](https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work) from last year.
Openness
--------
I think there is a strong case to be made that openness in AGI capabilities development is bad. As such I do not ascribe any positive value to programs to ‘democratize AI’ or similar.
One interesting question is how to evaluate non-public research. For a lot of safety research, openness is clearly the best strategy. But what about safety research that has, or potentially has, capabilities implications, or other infohazards? In this case it seems best if the researchers do not publish it. However, this leaves funders in a tough position – how can we judge researchers if we cannot read their work? Maybe instead of doing top secret valuable research they are just slacking off. If we donate to people who say “trust me, it’s very important and has to be secret” we risk being taken advantage of by charlatans; but if we refuse to fund, we incentivize people to reveal possible infohazards for the sake of money. (Is it even a good idea to publicise that someone else is doing secret research?)
With regard to published research, in general I think it is better for it to be open access, rather than behind journal paywalls, to maximise impact. Reducing this impact by a significant amount in order for the researcher to gain a small amount of prestige does not seem like an efficient way of compensating researchers to me. Thankfully this does not occur much with CS papers as they are all on arXiv, but it is an issue for some strategy papers.
Similarly, it seems a bit of a waste to have to charge for books – ebooks have, after all, no marginal cost – if this might prevent someone from reading useful content. There is also the same ability for authors to trade off public benefit against private gain – by charging more for their book, they potentially earn more, but at the cost of lower reach. As a result, I am inclined to give less credit for market-rate books, as the author is already compensated and incentivised by sales revenue.
More prosaically, organisations should make sure to upload the research they have published to their websites! Having gone to all the trouble of doing useful research, it is a constant shock to me how many organisations don’t take this simple step to significantly increase the reach of their work. Additionally, several times I have come across incorrect information on organisations’ websites.
Research Flywheel
-----------------
My basic model for AI safety success is this:
1. Identify interesting problems
   * As a byproduct this draws new people into the field through altruism, nerd-sniping, and apparent tractability
2. Solve interesting problems
   * As a byproduct this draws new people into the field through credibility and prestige
3. Repeat
One advantage of this model is that it produces both object-level work and field growth.
There is also some value in arguing for the importance of the field (e.g. Bostrom’s Superintelligence) or addressing criticisms of the field.
Noticeably absent are strategic pieces. I find that a lot of these pieces do not add terribly much incremental value. Additionally, my suspicion is that strategy research is, to a certain extent, produced exogenously by people who are interested in / technically involved with the field anyway. This does not apply to technical strategy pieces, about e.g. whether CIRL or Amplification is a more promising approach.
There is somewhat of a paradox with technical vs ‘wordy’ pieces however: as a non-expert, it is much easier for me to understand and evaluate the latter, even though I think the former are much more valuable.
Differential AI progress
------------------------
There are many problems that need to be solved before we have safe general AI, one of which is not producing *unsafe* general AI in the meantime. If nobody was doing non-safety-conscious research there would be little risk or haste to AGI – though we would be missing out on the potential benefits of safe AI.
There are several consequences of this:
* To the extent that safety research also enhances capabilities, it is less valuable.
* To the extent that capabilities research re-orientates subsequent research by third parties into more safety-tractable areas it is more valuable.
* To the extent that safety results would naturally be produced as a by-product of capabilities research (e.g. autonomous vehicles) it is less attractive to finance.
One approach is to research things that will make contemporary ML systems safer, because you think AGI will be a natural outgrowth from contemporary ML. This has the advantage of faster feedback loops, but is also more replaceable (as per the previous section).
Another approach is to try to reason directly about the sorts of issues that will arise with superintelligent AI. This work is less likely to be produced exogenously by unaligned researchers, but it requires much more faith in theoretical arguments, unmoored from empirical verification.
### Near-term AI safety issues
Many people want to connect AI existential risk issues to ‘near-term’ issues; I am generally sceptical of this. For example, autonomous cars seem to risk only localised tragedies, and private companies should have good incentives here. Unemployment concerns seem exaggerated to me, as they have been for most of history (new jobs will be created), at least until we have AGI, at which point we have bigger concerns. Similarly, I generally think concerns about algorithmic bias are essentially political - I recommend [this presentation](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf) - though there is at least some connection to the value learning problem there.
Financial Reserves
------------------
Charities like having financial reserves to provide runway and guarantee that they will be able to keep the lights on for the immediate future. This could be justified if you thought that charities were expensive to create and destroy, and were worried about this occurring by accident due to the whims of donors. Unlike companies, which sell a product, it seems reasonable that charities should be more concerned about this.
Donors prefer charities not to hold too much in reserves. Firstly, those reserves are cash that could be spent on outcomes now, by either the specific charity or others. Valuable future activities by charities are supported by future donations; they do not need to be pre-funded. Additionally, having reserves increases the risk of organisations ‘going rogue’, because they are insulated from the need to convince donors of their value.
As such, in general I do not give full credence to charities saying they need more funding when they want much more than 18 months or so of runway in the bank. If you have a year’s reserves now, after this December you will have that plus whatever you raise now, giving you a margin of safety before raising again next year.
I estimated reserves = (cash and grants) / (2020 budget). In general I think of this as something of a measure of urgency. However, despite being prima facie a very simple calculation, there are many issues with this data. As such these figures should be considered suggestive only.
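To make the arithmetic concrete, here is a minimal sketch of the calculation. The numbers are invented for illustration and are not any organisation's actual financials:

```python
# Runway estimate: reserves (cash plus committed grants) divided by
# next year's budget. All figures below are purely illustrative.
def runway_years(cash: float, grants: float, budget: float) -> float:
    """Return estimated years of runway."""
    return (cash + grants) / budget

# e.g. $1.2m cash + $0.3m committed grants against a $1.0m 2020 budget
print(runway_years(1_200_000, 300_000, 1_000_000))  # 1.5 years
```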
Donation Matching
-----------------
In general I believe that charity-specific donation matching schemes [are somewhat dishonest](https://forum.effectivealtruism.org/posts/a2gYyTnAP36TxqdQp/matching-donation-fundraisers-can-be-harmfully-dishonest), despite my having provided matching funding for at least one in the past.
Ironically, despite this view being [espoused by GiveWell](https://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/) (albeit in 2011), it is essentially OpenPhil’s policy to, at least in some cases, artificially limit their funding to 50% or 60% of a charity’s need, which some charities have argued effectively provides a 1:1 match for outside donors. I think this is bad. In the best case this forces outside donors to step in, imposing marketing costs on the charity and research costs on the donors. In the worst case it leaves valuable projects unfunded.
Obviously cause-neutral donation matching is different and should be exploited. Everyone should max out their corporate matching programs if possible, and things like the [annual Facebook Match](https://www.eagivingtuesday.org/) continue to be great opportunities.
Poor Quality Research
---------------------
Partly thanks to the efforts of the community, the field of AI safety is considerably more well respected and funded than was previously the case, which has attracted a lot of new researchers. While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. For example, there are a lot of papers which can be accurately summarized as asserting “just use ML to learn ethics”. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue.
The standard view here is just to ignore low quality work. This has many advantages: for example, 1) it requires little effort, and 2) it doesn’t annoy people. This conspiracy of silence seems to be the strategy adopted by most scientific fields, except in extreme cases like anti-vaxxers.
However, I think there are some downsides to this strategy. A sufficiently large milieu of low-quality work might degrade the reputation of the field, deterring potentially high-quality contributors. While low-quality contributions might help improve [Concrete Problems](https://arxiv.org/abs/1606.06565)’ citation count, they may use up scarce funding.
Moreover, it is not clear to me that ‘just ignore it’ really generalizes as a community strategy. Perhaps you, enlightened reader, can judge that *“How to solve AI Ethics: Just use RNNs”* is not great. But is it really efficient to require everyone to independently work this out? Furthermore, I suspect that the idea that we can all just ignore the weak stuff is somewhat an example of typical mind fallacy. Several times I have come across people I respect according respect to work I found clearly pointless. And several times I have come across people I respect arguing persuasively that work I had previously respected was very bad – but I only learnt they believed this by chance! So I think it is quite possible that many people will waste a lot of time as a result of this strategy, especially if they don’t happen to move in the right social circles.
Having said all that, I am not a fan of unilateral action, and am somewhat selfishly conflict-averse, so will largely continue to abide by this non-aggression convention. My only deviation here is to make it explicit. If you’re interested in this you might enjoy [this](https://80000hours.org/articles/accidental-harm/) by 80,000 Hours.
The Bay Area
------------
Much of the AI and EA communities, and especially the EA community concerned with AI, is located in the Bay Area, especially Berkeley and San Francisco. This is an extremely expensive place, and is dysfunctional both politically and socially. Aside from the lack of electricity and aggressive homelessness, it seems to attract people who are extremely weird in socially undesirable ways – and induces this in those who move there - though to be fair the people who are doing useful work in AI organisations seem to be drawn from a better distribution than the broader community. In general I think the centralization is bad, but if there must be centralization I would prefer it be almost anywhere other than Berkeley. Additionally, I think many funders are geographically myopic, and biased towards funding things in the Bay Area. As such, I have a mild preference towards funding non-Bay-Area projects. If you’re interested in this topic I recommend you read [this](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/) or [this](https://www.lesswrong.com/posts/zAqoj79A7QuhJKKvi/the-berkeley-community-and-the-rest-of-us-a-response-to-zvi) or [this](https://rationalconspiracy.com/2017/04/22/moving-to-the-bay-area/).
Conclusions
===========
The size of the field continues to grow, both in terms of funding and researchers. Both make it increasingly hard for individual donors. I’ve attempted to subjectively weigh the productivity of the different organisations against the resources they used to generate that output, and donate accordingly.
My constant wish is to promote a lively intellect and independent decision-making among readers; hopefully my laying out the facts as I see them above will prove helpful to some. Here is my eventual decision, [rot13'd](http://www.rot13.com/) so you can come to your own conclusions first (which I strongly recommend):
Qrfcvgr univat qbangrq gb ZVEV pbafvfgragyl sbe znal lrnef nf n erfhyg bs gurve uvtuyl aba-ercynprnoyr naq tebhaqoernxvat jbex va gur svryq, V pnaabg va tbbq snvgu qb fb guvf lrne tvira gurve ynpx bs qvfpybfher. Nqqvgvbanyyl, gurl nyernql unir n dhvgr ynetr ohqtrg.
SUV unir pbafvfgragyl cebqhprq fbzr bs gur uvturfg dhnyvgl erfrnepu. Ubjrire, V nz abg pbaivaprq gurl unir n uvtu arrq sbe nqqvgvbany shaqvat.
V pbagvahr gb or vzcerffrq jvgu PUNV’f bhgchg, naq guvax gurl cbgragvnyyl qb n tbbq wbo vagrenpgvat jvgu znvafgernz ZY erfrnepuref. Gurl unir n ybg bs pnfu erfreirf, juvpu frrzf yvxr vg zvtug erqhpr gur hetrapl bs shaqvat fbzrjung, naq n pbafvqrenoyr cbegvba bs gur jbex vf ba zber arne-grez vffhrf, ohg gurer ner eryngviryl srj bccbeghavgvrf gb shaq grpuavpny NV fnsrgl jbex, fb V vagraq gb qbangr gb PUNV ntnva guvf lrne.
Qrrczvaq naq BcraNV obgu qb rkpryyrag jbex ohg V qba’g guvax vg vf ivnoyr sbe (eryngviryl) fznyy vaqvivqhny qbabef gb zrnavatshyyl fhccbeg gurve jbex.
Va gur cnfg V unir orra irel vzcerffrq jvgu TPEV’f bhgchg ba n ybj ohqtrg. Qrfcvgr vagraqvat 2019 vagraqvat gb or gurve lrne bs fpnyvat hc, bhgchg unf npghnyyl qrpernfrq. V fgvyy vagraq gb znxr n qbangvba, va pnfr guvf vf whfg na hasbeghangr gvzvat vffhr, ohg qrsvavgryl jbhyq jnag gb frr zber arkg lrne.
PFRE’f erfrnepu vf whfg abg sbphfrq rabhtu gb jneenag qbangvbaf sbe NV Evfx jbex va zl bcvavba.
V jbhyq pbafvqre qbangvat gb gur NV Fnsrgl Pnzc vs V xarj zber nobhg gurve svanaprf.
Bhtug frrzf yvxr n irel inyhnoyr cebwrpg, naq yvxr PUNV ercerfragf bar bs gur srj bccbeghavgvrf gb qverpgyl shaq grpuavpny NV fnsrgl jbex. Nf fhpu V guvax V cyna gb znxr n qbangvba guvf lrne.
V gubhtug NV Vzcnpgf qvq fbzr avpr fznyy cebwrpgf guvf lrne, naq ba n abg ynetr ohqtrg. V guvax V jbhyq yvxr gb frr gur erfhygf sebz gurve ynetr cebwrpgf svefg ubjrire.
Va n znwbe qvssrerapr sebz cerivbhf lrnef, V npghnyyl cyna gb qbangr fbzr zbarl gb gur Ybat Grez Shgher Shaq. Juvyr V unira’g nterrq jvgu nyy gurve tenagf, V guvax gurl bssre fznyy qbabef npprff gb n enatr bs fznyy cebwrpgf gung gurl pbhyq abg bgurejvfr shaq, juvpu frrzf irel inyhnoyr pbafvqrevat gur fgebat svanapvny fvghngvba bs znal bs gur orfg ynetre betnavfngvbaf (BcraNV, Qrrczvaq rgp.)
Bar guvat V jbhyq yvxr gb frr zber bs va gur shgher vf tenagf sbe CuQ fghqragf jub jnag gb jbex va gur nern. Hasbeghangryl ng cerfrag V nz abg njner bs znal jnlf sbe vaqvivqhny qbabef gb cenpgvpnyyl fhccbeg guvf.
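If you would rather decode the above locally than paste it into the website linked earlier, Python's built-in codecs module handles rot13. The string below is a made-up example, not my actual conclusions:

```python
import codecs

# rot13 is its own inverse, so decode and encode are interchangeable
ciphertext = "Gunaxf sbe ernqvat!"  # example only; paste any paragraph above
print(codecs.decode(ciphertext, "rot13"))  # -> Thanks for reading!
```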
However, I wish to emphasize that all the above organisations seem to be doing good work on the most important issue facing mankind. It is the nature of making decisions under scarcity that we must prioritize some over others, and I hope that all organisations will understand that this necessarily involves negative comparisons at times.
Thanks for reading this far; hopefully you found it useful. Apologies to everyone who did valuable work that I excluded!
If you found this post helpful, and especially if it helped inform your donations, please consider letting me, and any organisations you donate to as a result, know.
If you are interested in helping out with next year’s article, please get in touch, and perhaps we can work something out.
Disclosures
===========
I have not in general checked all the proofs in these papers, and similarly trust that researchers have honestly reported the results of their simulations.
I was a Summer Fellow at MIRI back when it was SIAI and volunteered briefly at GWWC (part of CEA). I have conflicts of interest with the Survival and Flourishing Fund and OpenPhil so have not evaluated them. I have no financial ties beyond being a donor and have never been romantically involved with anyone who has ever worked at any of the other organisations.
I shared drafts of the individual organisation sections with representatives from FHI, CHAI, MIRI, GCRI, BERI, Median, CSER, GPI, AISC, AIImpacts, FRI and Ought.
My eternal gratitude to Greg Lewis, Jess Riedel, Hayden Wilkinson, Kit Harris and Jasmine Wang for their invaluable reviewing. Any remaining mistakes are of course my own. I would also like to thank my wife and daughter for tolerating all the time I have spent/invested/wasted on this.
Sources
=======
80,000 Hours - AI/ML safety research job board - 2019-09-29 - https://80000hours.org/job-board/ai-ml-safety-research/
Agrawal, Mayank; Peterson, Joshua; Griffiths, Thomas - Scaling up Psychology via Scientific Regret Minimization:A Case Study in Moral Decision-Making - 2019-10-16 - https://arxiv.org/abs/1910.07581
AI Impacts - AI Conference Attendance - 2019-03-06 - https://aiimpacts.org/ai-conference-attendance/
AI Impacts - Historical Economic Growth Trends - 2019-03-06 - https://aiimpacts.org/historical-growth-trends/
Alexander, Scott - Noisy Poll Results And Reptilian Muslim Climatologists from Mars - 2013-04-12 - https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/
Armstrong, Stuart - Research Agenda v0.9: Synthesising a human's preferences into a utility function - 2019-06-17 - https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into#comments
Armstrong, Stuart; Bostrom, Nick; Shulman, Carl - Racing to the precipice: a model of artificial intelligence development - 2015-08-01 - https://link.springer.com/article/10.1007%2Fs00146-015-0590-y
Armstrong, Stuart; Mindermann, Sören - Occam's razor is insufficient to infer the preferences of irrational agents - 2017-12-15 - https://arxiv.org/abs/1712.05812
Aschenbrenner, Leopold - Existential Risk and Economic Growth - 2019-09-03 - https://leopoldaschenbrenner.github.io/xriskandgrowth/ExistentialRiskAndGrowth050.pdf
Avin, Shahar - Exploring Artificial Intelligence Futures - 2019-01-17 - https://www.shaharavin.com/publication/pdf/exploring-artificial-intelligence-futures.pdf
Avin, Shahar; Amadae, S - Autonomy and machine learning at the interface of nuclear weapons, computers and people - 2019-05-06 - https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-nuclear-risk.pdf
Baum, Seth - Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection - 2019-06-13 - https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3397559
Baum, Seth - The Challenge of Analyzing Global Catastrophic Risks - 2019-07-15 - https://higherlogicdownload.s3.amazonaws.com/INFORMS/f0ea61b6-e74c-4c07-894d-884bf2882e55/UploadedImages/2019\_July.pdf#page=20
Baum, Seth; de Neufville, Robert; Barrett, Anthony; Ackerman, Gary - Lessons for Artificial Intelligence from Other Global Risks - 2019-11-21 - http://gcrinstitute.org/papers/lessons.pdf
Beard, Simon - Perfectionism and the Repugnant Conclusion - 2019-03-05 - https://link.springer.com/article/10.1007/s10790-019-09687-4
Beard, Simon - What Is Unfair about Unequal Brute Luck? An Intergenerational Puzzle - 2019-01-21 - https://www.cser.ac.uk/resources/brute-luck-intergenerational-puzzle/
Belfield, Haydn - How to respond to the potential malicious uses of artificial intelligence? - 2019-09-19 - https://www.cser.ac.uk/resources/how-respond-potential-malicious-uses-artificial-intelligence/
Bogosian, Kyle - On AI Weapons - 2019-11-13 - https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons
Brown, Noam; Sandholm, Tuomas - Superhuman AI for multiplayer poker - 2019-07-17 - https://www.cs.cmu.edu/~noamb/papers/19-Science-Superhuman.pdf
Caplan, Bryan - The Myth of the Rational Voter - 2008-08-24 - https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737
Carey, Ryan - How useful is Quantilization for Mitigating Specification-Gaming - 2019-05-06 - https://www.fhi.ox.ac.uk/wp-content/uploads/SafeML2019\_paper\_40.pdf
Carroll, Micah; Shah, Rohin; Ho, Mark K; Griffiths, Tom; Seshia, Sanjit; Abbeel, Pieter; Dragan, Anca - On the Utility of Learning about Humans for Human-AI Coordination - 2019-10-22 - http://papers.nips.cc/paper/8760-on-the-utility-of-learning-about-humans-for-human-ai-coordination.pdf
Cave, Stephen; Ó hÉigeartaigh, Seán - Bridging near- and long-term concerns about AI - 2019-01-07 - https://www.nature.com/articles/s42256-018-0003-2
Chan, Lawrence; Hadfield-Menell, Dylan; Srinivasa, Siddhartha; Dragan, Anca - The Assistive Multi-Armed Bandit - 2019-01-24 - https://arxiv.org/abs/1901.08654
Chivers, Tom - The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World - 2019-06-13 - https://www.amazon.com/Does-Not-Hate-You-Superintelligence-ebook/dp/B07K258VCV
Christiano, Paul - AI alignment landscape - 2019-10-12 - https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38
Christiano, Paul - What failure looks like - 2019-03-17 - https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like
Cihon, Peter - Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development - 2019-05-16 - https://www.fhi.ox.ac.uk/wp-content/uploads/Standards\_-FHI-Technical-Report.pdf
Clark, Jack; Hadfield, Gillian - Regulatory Markets for AI Safety - 2019-05-06 - https://drive.google.com/uc?export=download&id=1bFPiwLrZc7SQTMg2\_bW4gt0PaS5NyqOH
Cohen, Michael; Vellambi, Badri; Hutter, Marcus - Asymptotically Unambitious Artificial General Intelligence - 2019-05-29 - https://arxiv.org/abs/1905.12186
Collins, Jason - Principles for the Application of Human Intelligence - 2019-09-30 - https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/
Colvin, R; Kemp, Luke; Talberg, Anita; De Castella, Clare; Downie, C; Friel, S; Grant, Will; Howden, Mark; Jotzo, Frank; Markham, Francis; Platow, Michael - Learning from the Climate Change Debate to Avoid Polarisation on Negative Emissions - 2019-07-25 - https://sci-hub.tw/10.1080/17524032.2019.1630463
Cottier, Ben; Shah, Rohin - Clarifying some key hypotheses in AI alignment - 2019-08-15 - https://www.lesswrong.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment
CSER - Policy series Managing global catastrophic risks: Part 1 Understand - 2019-08-13 - https://www.gcrpolicy.com/understand-overview
Cummings, Dominic - On the referendum #31: Project Maven, procurement, lollapalooza results & nuclear/AGI safety - 2019-03-01 - https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/
Dai, Wei - Problems in AI Alignment that philosophers could potentially contribute to - 2019-08-17 - https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially
Dai, Wei - Two Neglected Problems in Human-AI Safety - 2018-12-16 - https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety
Drexler, Eric - Reframing Superintelligence: Comprehensive AI Services as General Intelligence - 2019-01-08 - https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing\_Superintelligence\_FHI-TR-2019-1.1-1.pdf?asd=sa
EU - Ethics Guidelines for Trustworthy Artificial Intelligence - 2019-04-08 - https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Everitt, Tom; Hutter, Marcus - Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective - 2019-08-13 - https://arxiv.org/abs/1908.04734
Everitt, Tom; Kumar, Ramana; Krakovna, Victoria; Legg, Shane - Modeling AGI Safety Frameworks with Causal Influence Diagrams - 2019-06-20 - https://arxiv.org/abs/1906.08663
Everitt, Tom; Ortega, Pedro; Barnes, Elizabeth; Legg, Shane - Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings - 2019-02-26 - https://arxiv.org/abs/1902.09980
Friedman, David D - Legal Systems Very Different from Ours - 1970-01-01 - https://www.amazon.com/Legal-Systems-Very-Different-Ours/dp/1793386722
Garfinkel, Ben; Dafoe, Allan - How does the offense-defense balance scale? - 2019-08-22 - https://www.tandfonline.com/doi/full/10.1080/01402390.2019.1631810
Greaves, Hilary; Cotton-Barratt, Owen - A bargaining-theoretic approach to moral uncertainty - 2019-08-09 - https://globalprioritiesinstitute.org/wp-content/uploads/2019/Greaves\_Cotton-Barratt\_bargaining\_theoretic\_approach.pdf
Grotto, Andy - Genetically Modified Organisms: A Precautionary Tale for AI Governance - 2019-01-24 - https://aipulse.org/genetically-modified-organisms-a-precautionary-tale-for-ai-governance-2/
Hernandez-Orallo, Jose; Martínez-Plumed, Fernando; Avin, Shahar; Ó hÉigeartaigh, Seán - Surveying Safety-relevant AI Characteristics - 2019-01-20 - http://ceur-ws.org/Vol-2301/paper\_22.pdf
Hubinger, Evan; van Merwijk, Chris; Mikulik, Vladimir; Skalse, Joar; Garrabrant, Scott - Risks from Learned Optimization in Advanced Machine Learning Systems - 2019-06-05 - https://arxiv.org/abs/1906.01820
Irving, Geoffrey; Askell, Amanda - AI Safety Needs Social Scientists - 2019-02-19 - https://distill.pub/2019/safety-needs-social-scientists/
Irving, Geoffrey; Christiano, Paul; Amodei, Dario - AI Safety via Debate - 2018-05-02 - https://arxiv.org/abs/1805.00899
Kaczmarek, Patrick; Beard, Simon - Human Extinction and Our Obligations to the Past - 2019-11-05 - https://sci-hub.tw/https://www.cambridge.org/core/journals/utilitas/article/human-extinction-and-our-obligations-to-the-past/C29A0406EFA2B43EE8237D95AAFBB580
Kaufman, Jeff - Uber Self-Driving Crash - 2019-11-07 - https://www.jefftk.com/p/uber-self-driving-crash
Kemp, Luke - Mediation Without Measures: Conflict Resolution in Climate Diplomacy - 2019-05-15 - https://www.cser.ac.uk/resources/mediation-without-measures/
Kenton, Zachary; Filos, Angelos; Gal, Yarin; Evans, Owain - Generalizing from a few environments in Safety-Critical Reinforcement Learning - 2019-07-02 - https://arxiv.org/abs/1907.01475
Korzekwa, Rick - The unexpected difficulty of comparing AlphaStar to humans - 2019-09-17 - https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/
Kosoy, Vanessa - Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help - 2019-07-19 - https://arxiv.org/abs/1907.08461
Kovarik, Vojta; Gajdova, Anna; Lindner, David; Finnveden, Lukas; Agrawal, Rajashree - AI Safety Debate and Its Applications - 2019-07-23 - https://www.lesswrong.com/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications
Krakovna, Victoria - ICLR Safe ML Workshop Report - 2019-06-18 - https://futureoflife.org/2019/06/18/iclr-safe-ml-workshop-report/
Krueger, David; Maharaj, Tegan; Legg, Shane; Leike, Jan - Misleading Meta-Objectives and Hidden Incentives for Distributional Shift - 2019-01-01 - https://drive.google.com/uc?export=download&id=1k93292JCoIHU0h6xVO3qmeRwLyOSlS4o
Kumar, Ram Shankar Siva; O'Brien, David; Snover, Jeffrey; Albert, Kendra; Viljoen, Salome - Failure Modes in Machine Learning - 2019-11-10 - https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning
LeCun, Yann; Russell, Stuart; Bengio, Yoshua; Olds, Elliot; Zador, Tony; Rossi, Francesca; Mallah, Richard; Barzov, Yuri - Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More - 2019-10-04 - https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell
Lewis, Sophie; Perkins-Kirkpatrick, Sarah; Althor, Glenn; King, Andrew; Kemp, Luke - Assessing contributions of major emitters' Paris‐era decisions to future temperature extremes - 2019-03-20 - https://www.cser.ac.uk/resources/assessing-contributions-extremes/
Long, Robert; Bergal, Asya - Evidence against current methods leading to human level artificial intelligence - 2019-08-12 - https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/
Long, Robert; Davis, Ernest - Conversation with Ernie Davis - 2019-08-23 - https://aiimpacts.org/conversation-with-ernie-davis/
MacAskill, Will; Demski, Abram - A Critique of Functional Decision Theory - 2019-09-13 - https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory
MacAskill, William; Vallinder, Aron; Oesterheld, Caspar; Shulman, Carl; Treutlein, Johannes - The Evidentialist's Wager - 2019-11-19 - https://globalprioritiesinstitute.org/the-evidentialists-wager/
Majha, Arushi; Sarkar, Sayan; Zagami, Davide - Categorizing Wireheading in Partially Embedded Agents - 2019-06-21 - https://arxiv.org/abs/1906.09136
Maltinsky, Baeo; Gallagher, Jack; Taylor, Jessica - Feasibility of Training an AGI using Deep RL: A Very Rough Estimate - 2019-03-24 - http://mediangroup.org/docs/Feasibility%20of%20Training%20an%20AGI%20using%20Deep%20Reinforcement%20Learning,%20A%20Very%20Rough%20Estimate.pdf
Mancuso, Jason; Kisielewski, Tomasz; Lindner, David; Singh, Alok - Detecting Spiky Corruption in Markov Decision Processes - 2019-06-30 - https://arxiv.org/abs/1907.00452
Marcus, Gary - Deep Learning: A Critical Appraisal - 2018-01-02 - https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf
McCaslin, Tegan - Investigation into the relationship between neuron count and intelligence across differing cortical architectures - 2019-02-11 - https://aiimpacts.org/investigation-into-the-relationship-between-neuron-count-and-intelligence-across-differing-cortical-architectures/
Mogensen, Andreas - ‘The only ethical argument for positive 𝛿 ’? - 2019-01-01 - https://globalprioritiesinstitute.org/andreas-mogensen-the-only-ethical-argument-for-positive-delta-2/
Mogensen, Andreas - Doomsday rings twice - 2019-01-01 - https://globalprioritiesinstitute.org/andreas-mogensen-doomsday-rings-twice/
Naude, Wim; Dimitri, Nicola - The race for an artificial general intelligence: implications for public policy - 2019-04-22 - https://link.springer.com/article/10.1007%2Fs00146-019-00887-x
Ngo, Richard - Technical AGI safety research outside AI - 2019-10-18 - https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai
O'Keefe, Cullen - Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission - 2019-05-01 - https://www.fhi.ox.ac.uk/wp-content/uploads/Stable-Agreements.pdf
Ovadya, Aviv; Whittlestone, Jess - Reducing Malicious Use of Synthetic Media Research: Considerations and Potential Release Practices for Machine Learning - 2019-07-29 - https://arxiv.org/abs/1907.11274
Owain, Evans; Saunders, William; Stuhlmüller, Andreas - Machine Learning Projects for Iterated Distillation and Amplification - 2019-07-03 - https://owainevans.github.io/pdfs/evans\_ida\_projects.pdf
Perry, Brandon; Uuk, Risto - AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk - 2019-05-08 - https://www.mdpi.com/2504-2289/3/2/26/pdf
Piper, Kelsey - The case for taking AI seriously as a threat to humanity - 2018-12-21 - https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
Quigley, Ellen - Universal Ownership in the Anthropocene - 2019-05-13 - https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3457205
Roy, Mati - AI Safety Open Problems - 2019-11-02 - https://docs.google.com/document/d/1J2fOOF-NYiPC0-J3ZGEfE0OhA-QcOInhlvWjr1fAsS0/edit
Russell, Stuart - Human Compatible; Artificial Intelligence and the Problem of Control - 2019-10-08 - https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/ref=sr\_1\_2?keywords=Stuart+Russell&qid=1565996574&s=books&sr=1-2
Schwarz, Wolfgang - On Functional Decision Theory - 2018-12-27 - https://www.umsu.de/blog/2018/688
Sevilla, Jaime; Moreno, Pablo - Implications of Quantum Computing for Artificial Intelligence alignment research - 2019-08-19 - https://arxiv.org/abs/1908.07613
Shah, Rohin; Gundotra, Noah; Abbeel, Pieter; Dragan, Anca - On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference - 2019-06-23 - https://arxiv.org/abs/1906.09624
Shah, Rohin; Krasheninnikov, Dmitrii; Alexander Jordan; Abbeel, Pieter; Dragan, Anca - Preferences Implicit in the State of the World - 2019-02-12 - https://arxiv.org/abs/1902.04198
Shulman, Carl - Person-affecting views may be dominated by possibilities of large future populations of necessary people - 2019-11-30 - http://reflectivedisequilibrium.blogspot.com/2019/11/person-affecting-views-may-be-dominated.html
Snyder-Beattie, Andrew; Ord, Toby; Bonsall, Michael - An upper bound for the background rate of human extinction - 2019-07-30 - https://www.nature.com/articles/s41598-019-47540-7
Steiner, Charlie - Some Comments on Stuart Armstrong's "Research Agenda v0.9" - 2019-08-08 - https://www.lesswrong.com/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9
Steinhardt, Jacob - AI Alignment Research Overview - 2019-10-14 - https://rohinshah.us18.list-manage.com/track/click?u=1d1821210cc4f04d1e05c4fa6&id=1a148ef72c&e=1e228e7079
Sterbenz, Ciara; Trager, Robert - Autonomous Weapons and Coercive Threats - 2019-02-06 - https://aipulse.org/autonomous-weapons-and-coercive-threats/
Sutton, Rich - The Bitter Lesson - 2019-03-13 - http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Szlam et al. - Why Build an Assistant in Minecraft? - 2019-07-19 - https://arxiv.org/abs/1907.09273
Taylor, Jessica - Quantilizers: A Safer Alternative to Maximizers for Limited Optimization - 2016 - https://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12613
Taylor, Jessica - The AI Timelines Scam - 2019-07-11 - https://unstableontology.com/2019/07/11/the-ai-timelines-scam/
Taylor, Jessica; Gallagher, Jack; Maltinsky, Baeo - Revisiting the Insights model - 2019-07-20 - http://mediangroup.org/insights2.html
The AlphaStar Team - AlphaStar: Mastering the Real-Time Strategy Game StarCraft II - 2019-01-24 - https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii
Turner, Alexander; Hadfield-Menell, Dylan; Tadepalli, Prasad - Conservative Agency - 2019-02-26 - https://arxiv.org/abs/1902.09725
Tzachor, Asaf - The Future of Feed: Integrating Technologies to Decouple Feed Production from Environmental Impacts - 2019-04-23 - https://www.liebertpub.com/doi/full/10.1089/ind.2019.29162.atz
Uesato, Jonathan; Kumar, Ananya; Szepesvari, Csaba; Erez, Tom; Ruderman, Avraham; Anderson, Keith; Dvijotham, Krishnamurthy; Heess, Nicolas; Kohli, Pushmeet - Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures - 2018-12-04 - https://arxiv.org/abs/1812.01647
USG - National Security Commission on Artificial Intelligence Interim Report - 2019-11-01 - https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view
Walsh, Bryan - End Times: A Brief Guide to the End of the World - 2019-08-27 - https://smile.amazon.com/End-Times-Brief-Guide-World-ebook/dp/B07J52NW99/ref=tmm\_kin\_swatch\_0?\_encoding=UTF8&qid=&sr=
Weitzdörfer, Julius; Beard, Simon - Law and Policy Responses to Disaster-Induced Financial Distress - 2019-11-24 - https://sci-hub.tw/10.1007/978-981-13-9005-0
Whittlestone, Jess; Nyrup, Rune; Alexandrova, Anna; Cave, Stephen - The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions - 2019-01-27 - http://lcfi.ac.uk/media/uploads/files/AIES-19\_paper\_188\_Whittlestone\_Nyrup\_Alexandrova\_Cave\_OcF7jnp.pdf
Zabel, Claire; Muehlhauser, Luke - Information security careers for GCR reduction - 2019-06-20 - https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction
Zhang, Baobao; Dafoe, Allan - Artificial Intelligence: American Attitudes and Trends - 2019-01-15 - <https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/> |
5bae2196-0daf-4462-bd03-162aaa8a4c21 | StampyAI/alignment-research-dataset/special_docs | Other | Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI
Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI
Carla Zoe Cremer1,2*, Jess Whittlestone2
1 Future of Humanity Institute, University of Oxford (United Kingdom)
2 Centre for the Study of Existential Risk, University of Cambridge (United Kingdom)
Received 17 January 2021 | Accepted 11 February 2021 | Published 24 February 2021
Keywords: AI Governance, Forecasting, Anticipatory Governance, Participatory Technology Assessments.

Abstract

We propose a method for identifying early warning signs of transformative progress in artificial intelligence (AI), and discuss how these can support the anticipatory and democratic governance of AI. We call these early warning signs ‘canaries’, based on the use of canaries to provide early warnings of unsafe air pollution in coal mines. Our method combines expert elicitation and collaborative causal graphs to identify key milestones and identify the relationships between them. We present two illustrations of how this method could be used: to identify early warnings of harmful impacts of language models; and of progress towards high-level machine intelligence. Identifying early warning signs of transformative applications can support more efficient monitoring and timely regulation of progress in AI: as AI advances, its impacts on society may be too great to be governed retrospectively. It is essential that those impacted by AI have a say in how it is governed. Early warnings can give the public time and focus to influence emerging technologies using democratic, participatory technology assessments. We discuss the challenges in identifying early warning signals and propose directions for future work.
* Corresponding author. E-mail address: carla.cremer@philosophy.ox.ac.uk
DOI: 10.9781/ijimai.2021.02.011
I. Introduction
Progress in artificial intelligence (AI) research has accelerated in
recent years. Applications are already changing society [1] and
some researchers warn that continued progress could precipitate
transformative impacts [2]– [5]. We use the term “transformative
AI” to describe a range of possible advances with potential to impact society in significant and hard-to-reverse ways [6]. For example, future
machine learning systems could be used to optimise management of safety-critical infrastructure [7]. Advanced language models could
be used in ways that corrupt our online information ecosystem [8]
and future advances in AI systems could trigger widespread labour automation [9].
There is an urgent need to develop anticipatory governance
approaches to AI development and deployment. As AI advances, its impacts on society will become more profound, and some harms may be too great to rely on purely ‘reactive’ or retrospective governance.
Anticipating future impacts is a challenging task. Experts show
substantial disagreement about when different advances in AI capabilities should be expected [10], [11]. Policy-makers face challenges
in keeping pace with technological progress: it is difficult to foresee impacts before a technology is deployed, but after deployment it may already be too late to shape impacts, and some harm may already have been done [12]. Ideally, we would focus preventative, anticipatory efforts on applications which are close enough to deployment to be meaningfully influenced today, but whose impacts we are not already seeing. Finding ‘early warning signs’ of transformative AI applications can help us to do this.
Early warning signs can also help democratise AI development and
governance. They can provide time and direction for much-needed public discourse about what we want and do not want from AI. It is not enough for anticipatory governance to look out for supposedly ‘inevitable’ future impacts. We are not mere bystanders in this AI revolution: the futures we occupy will be futures of our own making, driven by the actions of technology developers, policymakers, civil society and the public. In order to prevent foreseeable harms towards those people who bear the effects of AI deployments, we must find ways for AI developers to be held accountable to the society which they are embedded in. If we want AI to benefit society broadly, we must urgently find ways to give democratic control to those who will be impacted. Our aim with identifying early warning signs is to develop anticipatory methods which can prompt a focussed civic discourse around significant developments and provide a wider range of people with the information they need to contribute to conversations about the future of AI.
We present a methodology for identifying early warning signs of
potentially transformative impacts of AI and discuss how these can feed into more anticipatory and democratic governance processes. We call these early warning signs ‘canaries’ based on the practice of using canaries to provide early warnings of unsafe air pollution in coal mines in the industrial revolution. Others before us have used this term in the context of AI to stress the importance of early warning
signs [13], [14], but this is the first attempt to outline in detail how
such ‘artificial canaries’ might be identified and used.
Our methodology is a prototype but we believe it provides
an important first step towards assessing and then trialling the
feasibility of identifying canaries. We first present the approach and then illustrate it on two high-level examples, in which we identify preliminary warning signs of AI applications that could undermine democracy, and warning signs of progress towards high-level machine intelligence (HLMI). We explain why early warning signs are needed by drawing on the literature of participatory technology assessments, and we discuss the advantages and practical challenges of this method in the hope of preparing future research that might attempt to put this method into practice. Our theoretical exploration of a method to identify early warning signs of transformative applications provides a foundation towards more anticipatory, accountable and democratic governance of AI in practice.
II. Related Work
We rely on two main bodies of work. Our methodology for
identifying canaries relies on the literature on forecasting and monitoring AI. Our suggestions for how canaries might be used once identified build on work on participatory technology
assessments, which stresses a more inclusive approach to technology governance. While substantial research exists in both these areas, we believe this is the first piece of work that shows how they could feed into each other.
A. AI Forecasting and Monitoring
Over the past decade, an increasing number of studies have
attempted to forecast AI progress. They commonly use expert elicitations to generate probabilistic estimates for when different AI advances and milestones will be achieved [10], [15]– [17]. For example,
[16] ask experts about when specific milestones in AI will be achieved, including passing the Turing Test or passing third grade. Both [15] and
[10] ask experts to predict the arrival of high-level machine intelligence (HLMI), which the latter define as when “unaided machines can accomplish every task better and more cheaply than human workers”.
However, we should be cautious about giving results from these
surveys too much weight. These studies have several limitations, including the fact that the questions asked are often ambiguous, that expertise is narrowly defined, and that respondents do not receive training in quantitative forecasting [11], [18]. Experts disagree
substantially about when crucial capabilities will be achieved [10], but
these surveys cannot tell us who (if anyone) is more accurate in their predictions.
Issues of accuracy and reliability aside, forecasts focused solely on
timelines for specific events are limited in how much they can inform our decisions about AI today. While it is interesting to know how much experts disagree on AI progress via these probabilistic estimates, they cannot tell us why experts disagree or what would change their minds. Surveys tell us little about what early warning signs to look out for or where we should place our focus today to shape the future development and impact of AI.
At the same time, several projects, e.g. [19]– [22], have begun to
track and measure progress in AI. These projects focus on a range of indicators relevant to AI progress, but do not make any systematic attempt to identify which markers of progress are more important than others for the preparation of transformative applications. Time and attention for tracking progress is limited and it would be helpful if we were able to prioritise and monitor those research areas that are most relevant to mitigating risks.

Recognising some of the limitations of existing work, [23] aims
for a more holistic approach to AI forecasting. This framework emphasises the use of the Delphi technique [24] to aggregate different
perspectives of a group of experts, and cognitive mapping methods to study how different milestones relate to one another, rather than to simply forecast milestones in isolation. We agree that such methods might address some limitations of previous work in both AI forecasting and monitoring. AI forecasting has focused on timelines for particularly extreme events, but these timelines are subject to enormous uncertainty and do not indicate near-term warning signs. AI measurement initiatives have the opposite limitation: they focus on near-term progress, but with little systematic reflection on which avenues of progress are, from a governance perspective, more important to monitor than others. What is needed are attempts to identify areas of progress today that may be particularly important to pay attention to, given concerns about the kinds of transformative AI systems that may be possible in future.
B. Participatory Technology Assessments
Presently, the impacts of AI are largely shaped by a small group of
powerful people with a narrow perspective which can be at odds with public interest [25]. Only a few powerful actors, such as governments, defence agencies, and firms the size of Google or Amazon, have the resources to conduct ambitious research projects. Democratic control over these research projects is limited. Governments retain discretion over what gets regulated, large technology firms can distort and avoid policies via intensive lobbying [26] and defence agencies may classify ongoing research.
Recognising these problems, a number of initiatives over the past
few years have emphasised the need for wider participation in the development and governance of AI [27]– [29]. In considering how
best to achieve this, it is helpful to look to the field of science and technology studies (STS) which has long considered the value of democratising research progress [30], [31]. Several publications refer
to the ‘participatory turn’ [32] in STS and an increasing interest in
the role of the non-expert in technology development and assessment [27]. More recently, in the spirit of “democratic experimentation” [33], various methods for civic participation have been developed and trialled, including deliberative polls, citizen juries and scenario exercises [33].
With a widening conception of expertise, a large body of research on
“participatory technology assessment” (PTA) has emerged, aiming to examine how we might increase civic participation in how technology is developed, assessed and rolled out. We cannot summarise this wide-ranging and complex body of work fully here. But we point towards some relevant pieces for interested readers to begin with. [34] and
[35] present a typology of the methods and goals of participating, which now come in many forms. This means that assessments of the success of PTAs are challenging [33] and ongoing because different
studies evaluate different PTA processes against different goals [34].
Yet while scholars recognise remaining limitations of PTAs [31],
several arguments for their advantages have been brought forward, ranging from citizen agency to consensus identification and justice. There are good reasons to believe that non-experts possess relevant end-user expertise. They often quickly develop the relevant subject-matter understanding to contribute meaningfully, leading to better epistemic outcomes due to a greater diversity of views which result in a cancellation of errors [36], [37]. To assess the performance of PTAs
scholars draw from case studies and identify best practices [38]– [40].
There is an important difference between truly participatory,
democratically minded, technology assessments, and consultations that use the public to help legitimise a preconceived technology [41]. The question of how to make PTAs count in established representational
democracies is an ongoing challenge to the field [31], [33]. But [42],
who present a recent example of collective technology policy-making,
show that success and impact with PTAs is possible. [40] draw from
38 international case studies to extract best practices, building on [38],
who showcase great diversity of possible ways in which to draw on the public. Comparing different approaches is difficult, but has been done [39], [43]. [41] present a conceptual framework with which to design
and assess PTAs, [44] compares online versus offline methodologies
and in [35] we find a typology of various design choices for public
engagement mechanisms. See also [45] for a helpful discussion on how to determine the diversity of participants, [46] on what counts as expertise in foresight and [30], [32], [47] for challenges to be aware of
in implementing PTAs.
Many before us have noted that we need wider participation in the
development and governance of AI, including by calling for the use of PTAs in designing algorithms [48], [49]. We see a need to go beyond
greater participation in addressing existing problems with algorithms and propose that wider participation should also be considered in conversations about future AI impacts.
Experts and citizens each have a role to play in ensuring that AI
governance is informed by and inclusive of a wide range of knowledge, concerns and perspectives. However, the question of how best to marry expert foresight and citizen engagement is a challenging one. While a full answer to this question is beyond the scope of this paper, what we do offer is a first step: a proposal for how expert elicitation can be used to identify important warnings which can later be used to facilitate timely democratic debate. For such debates to be useful, we first need an idea of which developments on the horizon can be meaningfully assessed and influenced, for which it makes sense to draw on public expertise and limited attention. This is precisely what our method aims to provide.
III. Identifying Early Warning Signs
We believe that identifying canaries for transformative AI is
a tractable problem and worth investing research effort in today. Engineering and cognitive development present a proof of principle: capabilities are achieved sequentially, meaning that there are often key underlying capabilities which, if attained, unlock progress in many other areas. For example, musical protolanguage is thought to have enabled grammatical competence in the development of language in homo sapiens [50]. AI progress so far has also seen such amplifiers: the use of multi-layered non-linear learning or stochastic gradient descent arguably laid the foundation for unexpectedly fast progress on image recognition, translation and speech recognition [51]. By mapping out the dependencies between different capabilities needed to reach some notion of transformative AI, therefore, we should be able to identify milestones which are particularly important for enabling many others - these are our canaries.
The proposed methodology is intended to be highly adaptable and
can be used to identify canaries for a number of important potentially transformative events, such as foundational research breakthroughs or the automation of tasks that affect a wide range of jobs. Many types of indicators could be of interest and classed as canaries, including: algorithmic innovation that supports key cognitive faculties (e.g., natural language understanding); overcoming known technical challenges (such as improving the data efficiency of deep learning algorithms); or improved applicability of AI to economically-relevant tasks (e.g. text summarization).
Given an event for which we wish to identify canaries, our
methodology has three essential steps: (1) identifying key milestones towards the event; (2) identifying dependency relations between these milestones; and (3) identifying milestones which underpin many others as canaries. See Fig. 1 for an illustration. We here deliberately refrain from describing the method with too much specificity, because we want to stress the flexibility of our approach, and recognise that there is currently no one-fits-all approach in forecasting. The method will require adaptation to the particular transformative event in question, but each step of this method is suited for such specifications. We outline example adaptations of the method to particular cases.

Fig. 1. Illustration of methodological steps to identify canaries of AI progress.
A. Identifying Milestones via Expert Elicitation
The first step of our methodology involves using traditional approaches in expert elicitation to identify milestones that may be relevant to the transformative event in question. Which experts are selected is crucial to the outcome and reliability of studies in AI forecasting. There are unavoidable limitations of using any form of subjective judgement in forecasting, but these limitations can be minimised by carefully thinking through the group selection. Both the direct expertise of individuals, and how they contribute to the diversity of the overall group, must be considered. See [46] for a discussion of who counts as an expert in forecasting. Researchers should decide in advance what kinds of expertise are most relevant and must be combined to study the milestones that relate to the transformative event. Milestones might include technical limitations of current methods (e.g. adversarial attacks) and informed speculation about future capabilities (e.g. common sense) that may be important prerequisites to the transformative event. Consulting across a wide range of academic disciplines to order such diverse milestones is important. For example, a cohort of experts identifying and ordering milestones towards HLMI should include not only experts in machine learning and computer science but also cognitive scientists, philosophers, developmental psychologists, evolutionary biologists, or animal cognition experts. Such a group combines expertise on current capabilities in AI with expertise on key pillars of cognitive development and the order in which cognitive faculties develop in animals. Groups which are diverse (on multiple dimensions) are expected to produce better epistemic outcomes [37], [52].

We encourage the careful design and phrasing of questions to enable participants to make use of their expertise, but refrain from demanding answers that lie outside their area of expertise. For example, asking machine learning researchers directly for milestones towards HLMI does not draw on their expertise. But asking machine learning researchers about the limitations of the methods they use every day, or asking psychologists what human capacities they see lacking in machines today, draws directly on their day-to-day experience. Perceived limitations can then be transformed into milestones.

There are several different methods available for expert elicitation, including surveys, interviews, workshops and focus groups, each with advantages and disadvantages. Interviews provide greater opportunity to tailor questions to the specific expert, but can be time-intensive compared to surveys and reduce the sample size of experts. If possible, some combination of the two may be ideal: using carefully selected semi-structured interviews to elicit initial milestones, followed up with surveys with a much broader group to validate which milestones are widely accepted as being key.
B. Mapping Causal Relations Between Milestones
The second step of our methodology involves convening experts to identify causal relations between identified milestones: that is, how milestones may underpin, depend on, or affect progress towards other milestones. Experts should be guided in generating directed causal graphs, a type of cognitive map that elicits a person's perceived causal relations between components. Causal graphs use arrows to represent perceived causal relations between nodes, which in this case are milestones [53].

This process primarily focuses on finding out whether or not a relationship exists at all; how precisely this relationship is specified can be adapted to the goals of the study. An arrow from A to B at minimum indicates that progress on A will allow for further progress on B. But this relationship can also be made more precise: in some cases indicating that progress on A is necessary for progress on B, for example. The relationship between nodes may be either linear or non-linear; again this can be specified more precisely if needed or known.

Constructing and debating causal graphs can "help groups to convert tacit knowledge into explicit knowledge" [53]. Causal graphs are used as decision support for individuals or groups, and are often used to solve problems in policy and management involving complex relationships between components in a system by tapping into experts' mental models and intuitions. We therefore suggest that causal graphs are particularly well-suited to eliciting experts' models and assumptions about the relationship between different milestones in AI development.

As a method, causal graphs are highly flexible and can be adapted to the preferred level of detail for a given study: they can be varied in complexity and can be analysed both quantitatively and qualitatively [54], [55]. We neither exclude nor favour quantitative approaches here, due to the complexity and uncertainty of the questions around transformative events. Particularly for very high-level questions, quantitative approaches might not offer much advantage and might communicate a false sense of certainty. In narrower domains where there is more existing evidence, however, quantitative approaches may help to represent differences in the strength of relationships between milestones.

[56] notes that there are no ready-made designs that will fit all studies: design and analysis of causal mapping procedures must be matched to a clear theoretical context and the goal of the study. We highlight a number of different design choices which can be used to adapt the process. As more studies use causal graphs in expert elicitations about AI developments, we can learn from the success of different design choices over time and identify best practices.

[53] stress that interviews or collective brainstorming are the most accepted method for generating the data upon which to analyse causal relations. [57] list heuristics on how to manage the procedure of combining graphs by different participants, or see [58] for a discussion on evaluating different options presented by experts. [59] suggest visual, interactive tools to aid the process. [56] and [60] discuss approaches to analysing graphs and extracting the emergent properties, significant 'core' nodes as well as hierarchical clusters. Core or "potent" nodes are those that relate to many clusters in the graphs and thus have implications for connected nodes. In our proposed methodology, such potent nodes play a central role in pointing to canary milestones.

For more detail on the many options on how to generate, analyse and use causal graphs we refer the reader to the volume of [57], or reviews such as [53], [59]. See [55] for an example of applying cognitive mapping to expert views on UK public policies; and [61] for group problem solving with causal graphs.

We propose that identified experts be given instruction in generating either an individual causal graph, after which a mediated discussion between experts generates a shared graph; or that the group of experts as a whole generates the causal graph via argumentation, visualisations and voting procedures if necessary. As [62] emphasises, any group of experts will have both shared and conflicting assumptions, which causal graphs aim to integrate in a way that approaches greater accuracy than that contained in any single expert viewpoint. The researchers are free to add as much detail to the final maps as required or desired. Each node can be broken into subcomponents or justified with extensive literature reviews.
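As a minimal sketch of what an edge-level voting procedure for combining individual graphs could look like, the Python below keeps an edge only if a majority of experts drew it. This is an illustrative assumption on our part rather than a prescribed implementation: the `merge_graphs` helper, the 0.5 threshold and the milestone labels are all hypothetical.

```python
from collections import Counter
from itertools import chain

def merge_graphs(expert_graphs, threshold=0.5):
    """Keep an edge (a, b) if at least `threshold` of the experts drew it."""
    edge_counts = Counter(chain.from_iterable(
        ((a, b) for a, targets in graph.items() for b in targets)
        for graph in expert_graphs
    ))
    n_experts = len(expert_graphs)
    return {edge for edge, count in edge_counts.items()
            if count / n_experts >= threshold}

# Each expert submits a graph as {milestone: set of milestones it enables}.
# Milestone names are purely illustrative.
expert_graphs = [
    {"disentangled representations": {"concept formation", "flexible memory"}},
    {"disentangled representations": {"concept formation"},
     "flexible memory": {"continual learning"}},
    {"disentangled representations": {"concept formation", "flexible memory"}},
]
print(merge_graphs(expert_graphs))
# Keeps the two edges endorsed by a majority; drops the single-expert edge.
```

In practice such a rule would only be a tie-breaking aid: the mediated discussion, not the vote count, is what surfaces the reasoning behind each proposed edge.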
C. Identifying Canaries
Finally, the resulting causal graphs can be used to identify nodes of particular relevance for progress towards the transformative event in question. This can be a node with a high number of outgoing arrows, i.e. milestones which unlock many others that are prerequisites for the event in question. It can also be a node which functions as a bottleneck: a single dependency node that restricts access to a subsequent highly significant milestone. See Fig. 2 for an illustration. Progress on these milestones can thus represent a 'canary', indicating that further advances in subsequent milestones will become possible and more likely. These canaries can act as early warning signs for potentially rapid and discontinuous progress, or may signal that applications are becoming ready for deployment. Experts identify nodes which unlock or provide a bottleneck for a significant number of other nodes (some amount of discretion from the experts/conveners will be needed to determine what counts as 'significant').

Of course, in some cases generating these causal graphs and using them to identify canaries may be as complicated as a full scientific research project. The difficulty of estimating causal relationships between future technological advances must not be underestimated. However, we believe it to be the case that each individual researcher already does this to some extent, when they choose to prioritise a research project, idea or method over another within a research paradigm. Scientists also debate the most fruitful and promising research avenues and arguably place bets on implicit maps of milestones as they pick a research agenda. The idea is not to generate maps that provide a perfectly accurate indication of warning signs, but to use the wisdom of crowds to make implicit assumptions explicit, creating the best possible estimate of which milestones may provide important indications of future transformative progress.
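As a minimal sketch of this final step, the code below flags candidate canaries in a merged dependency graph using the two criteria just described: nodes with many outgoing arrows, and bottleneck nodes whose removal leaves a key milestone unreachable. The function names, the out-degree threshold and the toy graph are illustrative assumptions; in a real study this judgement would rest with the experts and conveners.

```python
def reachable(edges, sources, removed=None):
    """Milestones reachable from `sources`, optionally with one node removed."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
    seen, stack = set(), [s for s in sources if s != removed]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(nxt for nxt in adjacency.get(node, ()) if nxt != removed)
    return seen

def candidate_canaries(edges, sources, key_milestone, min_out_degree=2):
    """Flag nodes that unlock many milestones or bottleneck a key milestone."""
    out_degree = {}
    for a, _ in edges:
        out_degree[a] = out_degree.get(a, 0) + 1
    potent = {a for a, d in out_degree.items() if d >= min_out_degree}
    interior = {n for edge in edges for n in edge} - set(sources) - {key_milestone}
    bottlenecks = {n for n in interior
                   if key_milestone not in reachable(edges, sources, removed=n)}
    return potent | bottlenecks

# Toy example: "B" unlocks two milestones and, like "D", bottlenecks "E".
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")]
print(candidate_canaries(edges, sources={"A"}, key_milestone="E"))  # {'B', 'D'}
```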
IV. Using Early Warning Signs

Once identified, canary milestones can immediately help to focus existing efforts in forecasting and anticipatory governance. Given limited resources, early warning signs can direct governance attention to areas of AI progress which are soon likely to impact society and which can be influenced now. For example, if progress in a specific area of NLP (e.g. sentiment analysis) serves as a warning sign for the deployment of more engaging social bots to manipulate voters, policymakers and regulators can monitor or regulate access and research on this research area within NLP.

We can also establish research and policy initiatives to monitor and forecast progress towards canaries. Initiatives might automate the collection, tracking and flagging of new publications relevant to canary capabilities, and build a database of relevant publications. They might use prediction platforms to enable collective forecasting of progress towards canary capabilities. Foundational research can try to validate hypothesised relationships between milestones or illuminate the societal implications of different milestones.

These forecasting and tracking initiatives can be used to improve policy prioritisation more broadly. For example, if we begin to see substantial progress in an area of AI likely to impact jobs in a particular domain, policymakers can begin preparing for potential unemployment in that sector with greater urgency.

However, we believe the value of early warning signs can go further and support us in democratising the development and deployment of AI. Providing opportunities for participation and control over policy is a fundamental part of living in a democratic society. It may be especially important in the case of AI, since its deployment might indeed transform society across many sectors. If AI applications are to bring benefits across such wide-ranging contexts, AI deployment strategies must consider and be directed by the diverse interests found across those sectors. Interests which are underrepresented at technology firms are otherwise likely to bear the negative impacts.

There is currently an information asymmetry between those developing AI and those impacted by it. Citizens need better information about specific developments and impacts which might affect them. Public attention and funding for deliberation processes is not unlimited, so we need to think carefully about which technologies to direct public attention and funding towards. Identifying early warning signs can help address this issue, by focusing the attention of public debate and directing funding towards deliberation practices that centre around technological advancements on the horizon.

We believe early warning signs may be particularly well-suited to feed into participatory technology assessments (PTAs), as introduced earlier. Early warning signs can provide a concrete focal point for citizens and domain experts to collectively discuss concerns. Having identified a specific warning sign, various PTA formats could be suited to consult citizens who are especially likely to be impacted. PTAs come in many forms and a full analysis of which design is best suited to assessing particular AI applications is beyond the scope of this article. But the options are plenty and PTAs show much potential (see Section II). For example, Taiwan has had remarkable success and engagement with an open consultation of citizens on complex technology policy questions [42]. An impact assessment of PTA is not a simple task, but we hypothesise that carefully designed, inclusive PTAs would present a great improvement over how AI is currently developed, deployed and governed. Our suggestion is not limited to governmental bodies. PTAs or other deliberative processes can be run by research groups and private institutions such as AI labs, technology companies and think tanks who are concerned with ensuring AI benefits all of humanity.

V. Method Illustrations

We outline two examples of how this methodology could be adapted and implemented: one focused on identifying warning signs of a particular societal impact, the other on warning signs of progress towards particular technical capabilities. Both these examples pertain to high-level, complex questions about the future development and impacts of AI, meaning our discussion can only begin to illustrate what the process of identifying canaries would look like, and what questions such a process might raise. Since the results are only the suggestions of the authors of this paper, we do not show a full implementation of the method, whose value lies in letting a group of experts deliberate.
As mentioned previously, the work of generating these causal maps will often be a research project of its own, and we will return later to the question of what level of detail and certainty is needed to make the resulting graphs useful.
A. First Illustration: AI Applications in Voter Manipulation
We show how our method could identify warning signs of the kind of algorithmic progress which could improve the effectiveness of, or reduce the cost of, algorithmic election manipulation.

The use of algorithms in attempts to manipulate election results incurs great risk for the epistemic resilience of democratic countries [63]–[65]. Manipulation of public opinion by national and commercial actors is not a new phenomenon. [66] details the history of how newly emerging technologies are often used for this purpose. But recent advances in deep learning techniques, as well as the widespread use of social media, have introduced easy and more effective mechanisms for influencing opinions and behaviour. [8] and [67] detail the various ways in which political and commercial actors incur harm to the information ecosystem via the use of algorithms. Manipulators profile voters to identify susceptible targets on social media, distribute micro-targeted advertising, spread misinformation about policies of the opposing candidate and try to convince unwanted voters not to vote. Automation plays a large role in influencing online public discourse. Publications like [68], [69] note that manipulators use both human-run accounts and bots [70] or a combination of the two [71]. Misinformation [72] and targeted messaging [73] can have transformative implications for the resilience of democracies and the very possibility of collective action [74], [75].

Despite attempts by national and sub-national actors to apply algorithms to influence elections, their impact so far has been contested [76]. Yet foreign actors and national political campaigns will continue to have incentives and substantial resources to invest in such campaigns, suggesting their efforts are unlikely to wane in future. We may thus inquire what kinds of technological progress would increase the risk that elections can be successfully manipulated. We can begin this inquiry by identifying what technological barriers currently prevent full-scale election manipulation.

We would identify those technological limitations by drawing on the expertise of actors who are directly affected by these bottlenecks. Those might be managers of online political campaigns and foreign consulting firms (as described in [8]), who specialise in influencing public opinion via social media, or governmental organisations across the world who comment on posts, target individual influencers and operate fake accounts to uphold and spread particular beliefs. People who run such political cyber campaigns have knowledge of what technological bottlenecks still constrain their influence on voter decisions. We recommend running a series of interviews to collect a list of limitations.

This list might include, for example, that the natural language functionality of social bots is a major bottleneck for effective online influence (for the plausibility of this being an important technical factor see [8]). Targeted users often disengage from a chat conversation after detecting that they are exchanging messages with social bots. Low retention time is presumably a bottleneck for further manipulation, which suggests that improvements in natural language processing (NLP) would significantly reduce the cost of manipulation as social bots become more effective.

We will assume, for the purpose of this illustration, that NLP were to be identified as a key bottleneck. We would then seek to gather experts (e.g. in a workshop) who can identify and map milestones (or current limitations) in NLP likely to be relevant to improving the functionality of social bots. This will include machine learning experts who specialise in NLP and understand the technical barriers to developing more convincing social bots; as well as experts in developmental linguistics and evolutionary biology, who can determine suitable benchmarks and the required skills, and who understand the order in which linguistic skills are usually developed in animals.

From these expert elicitation processes we would acquire a list of milestones in NLP which, if achieved, would likely lower the cost and increase the effectiveness of online manipulation. Experts would then order milestones into a causal graph of dependencies. Given the interdisciplinary nature of the question at hand, we suggest in this case that the graph should be directly developed by the whole group. A mediated discussion in a workshop context can help to draw out different connections between milestones and the reasoning behind them, ensuring participants do not make judgements outside their range of expertise. A voting procedure such as majority voting should be used if no consensus can be reached. In a final step, experts can highlight milestone nodes in the final graph which either have many outgoing arrows or are bottlenecks for a series of subsequent nodes that cannot be reached by an alternative pathway. These (e.g. sentiment analysis) are our canaries: areas of progress which serve as a warning sign of NLP being applied more effectively in voter manipulation.

Having looked at how this methodology can be used to identify warning signs of a specific societal impact, we next illustrate a different application of the method in which we aim to identify warning signs of a research breakthrough.
B. Second Illustration: High-Level Machine Intelligence
We use this second example to illustrate in more detail what the process of developing a causal map might look like once initial milestones have been identified, and how canary capabilities can be identified from the map.

We define high-level machine intelligence (HLMI) as an AI system (or collection of AI systems) that performs at the level of an average human adult on key cognitive measures required for economically relevant tasks. We choose to focus on HLMI since it is a milestone which has been the focus of previous forecasting studies [10], [15], and which, despite the ambiguity and uncertain nature of the concepts, is interesting to attempt to examine, because it is likely to precipitate widely transformative societal impacts.

To trial this method, we used interview results from [11]. 25 experts from a diverse set of disciplines (including computer science, cognitive science and neuroscience) were interviewed and asked what they believed to be the main limitations preventing current machine learning methods from achieving the capabilities of HLMI. These limitations can be translated into 'milestones': capabilities experts believe machine learning methods need to achieve on the path to HLMI, i.e. the output of step 1 of our methodology.

Having identified key milestones, step 2 of our methodology involves exploring dependencies between them using causal graphs. We use the software VenSim to illustrate hypothesised relationships between milestones (see Fig. 2). For example, we hypothesise that the ability to formulate, comprehend and manipulate abstract concepts may be an important prerequisite to the ability to account for unobservable phenomena, which is in turn important for reasoning about causality. This map of causal relations and dependencies was constructed by the authors alone, and is therefore far from definitive, but provides a useful illustration of the kind of output this methodology can produce.
Fig. 2. Cognitive map of dependencies between milestones collected in expert elicitations. Arrows coloured in green signify those milestones that have the most outgoing arrows. See the appendix for a description of each milestone and the dependency relations between one 'canary' node and subsequent nodes.
Based on this causal map, we can identify three candidates for canary capabilities:
Representations that allow variable-binding and disentanglement: the ability to construct abstract, discrete and disentangled representations of inputs, to allow for efficiency and variable-binding. We hypothesise that this capability underpins several others, including grammar, mathematical reasoning, concept formation, and flexible memory.

Flexible memory: the ability to store, recognise, and re-use memory and knowledge representations. We hypothesise that this ability would unlock many others, including the ability to learn from dynamic data, to learn in a continual fashion, and to update old interpretations of data as new information is acquired.

Positing unobservables: the ability to recognise and use unobservable concepts that are not represented in the visual features of a scene, including numerosity or intentionality.
We might tentatively suggest that these are important capabilities to track progress on from the perspective of anticipating HLMI.
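As a rough illustration of how such candidates fall out of the map, the sketch below encodes a small, simplified fragment of Fig. 2 (node names are abbreviated, and the dependency structure is our hypothesis rather than established fact) and applies the two canary criteria from our method: many outgoing edges, and bottleneck nodes whose removal cuts every path from some milestone to the target event.

```python
import networkx as nx

# Hypothetical, simplified fragment of the Fig. 2 map; an edge A -> B
# means progress on A enables progress on B. "HLMI" is the target event.
G = nx.DiGraph([
    ("representation", "grammar"),
    ("representation", "mathematical reasoning"),
    ("representation", "concept formation"),
    ("representation", "flexible memory"),
    ("concept formation", "positing unobservables"),
    ("positing unobservables", "causal reasoning"),
    ("flexible memory", "reinterpretation"),
    ("flexible memory", "continual learning"),
    ("flexible memory", "analogical reasoning"),
    ("causal reasoning", "HLMI"),
    ("continual learning", "HLMI"),
])

# Criterion 1: nodes with many outgoing edges.
print(sorted(G.out_degree(), key=lambda nd: nd[1], reverse=True)[:2])
# [('representation', 4), ('flexible memory', 3)]

# Criterion 2: bottlenecks -- nodes whose removal leaves some milestone
# that previously reached the target with no remaining path to it.
def bottlenecks(graph, target):
    found = []
    for v in graph.nodes:
        if v == target:
            continue
        view = nx.restricted_view(graph, [v], [])  # graph without node v
        if any(nx.has_path(graph, s, target) and not nx.has_path(view, s, target)
               for s in view.nodes if s != target):
            found.append(v)
    return found

print(bottlenecks(G, "HLMI"))
# ['positing unobservables', 'causal reasoning', 'continual learning']
```

On this toy fragment, the out-degree criterion recovers the first two candidates above and the bottleneck criterion highlights positing unobservables; a real application would run the same analysis on the full expert-built map.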
Discussion and Future Directions

As the two illustrative examples show, there are many complexities and challenges involved in putting this method into practice. One particular challenge is that there is likely to be substantial uncertainty in the causal graphs developed. This uncertainty can come in many forms. Milestones that are not well understood are likely to be composed of several sub-milestones. As more research is produced, the graph will need revision. Some revisions may add connections between milestones that were previously not foreseen, which in turn might alter the number of outgoing connections from nodes and turn them into potent nodes, i.e. ‘canaries’.

Involving a diverse group of experts in a multi-stage, collaborative process is designed to reduce this uncertainty by allowing for the identification of nodes and relationships that are widely agreed upon and so more likely to be robust. However, considerable uncertainty will inevitably remain due to the nature of forecasting. The higher the level of abstraction and ambiguity in the events studied (such as HLMI, which we use for our illustration), the greater the uncertainty inherent in the map and the less reliable the forecasts will likely be. It will be important to find ways to acknowledge and represent this uncertainty in the maps developed and in the conclusions drawn from them. This might include marking uncertainties in the graph and taking them into account when identifying and communicating ‘canary’ nodes.

Given the uncertainty inherent in forecasting, we must consider what kinds of inevitable misjudgements are most important to avoid. A precautionary perspective suggests it is better to slightly overspend resources on monitoring canaries that turn out to be false positives than to miss an opportunity to anticipate significant technological impacts. This suggests we may want to set a low threshold for what should be considered a ‘canary’ in the final stage of the method.

The uncertainty raises an important question: will it on average be better to have an imperfect, uncertain mapping of milestones rather than none at all? There is some chance that incorrect estimates of ‘canaries’ could be harmful. An incorrect mapping could focus undue attention on some avenue of AI progress, waste resources or distract from more important issues. Our view is that it is nonetheless preferable to attempt a prioritisation. The realistic alternative is that anticipatory governance is either not attempted at all or is informed by individual scholars’ ad-hoc estimates, which we should expect to be incorrect more often than a collective and structured expert elicitation. How accurate our method is can only be established by trialling it and tracking its predictions as AI research progresses, to confirm or refute the forecasts.

Future studies are likely to face several trade-offs in managing the uncertainty. For example, a large and cognitively diverse expert group may ultimately be better placed to develop robust maps, but the process may be much more challenging than with a smaller, less diverse group, making the latter a tempting choice (see [45] for a discussion of this trade-off).
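One simple way to mark this uncertainty, and to implement the permissive threshold suggested above, is to annotate each edge with the fraction of experts who endorsed it and then vary the confidence level at which out-edges count towards canary status. The sketch below, with invented weights on a hypothetical map fragment, shows how a strict threshold can miss a candidate that a precautionary threshold flags:

```python
import networkx as nx

# Hypothetical map fragment; edge weights are the (invented) fraction
# of experts who endorsed each dependency.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("flexible memory", "continual learning", 0.9),
    ("flexible memory", "reinterpretation", 0.6),
    ("flexible memory", "analogical reasoning", 0.5),
    ("positing unobservables", "causal reasoning", 0.9),
])

def canaries(graph, min_confidence, min_out_edges=2):
    """Nodes with at least min_out_edges out-edges at or above min_confidence."""
    return [v for v in graph.nodes
            if sum(1 for _, _, w in graph.out_edges(v, data="weight")
                   if w >= min_confidence) >= min_out_edges]

print(canaries(G, 0.8))  # [] -- a strict threshold flags nothing
print(canaries(G, 0.5))  # ['flexible memory'] -- the precautionary choice
```

Re-running such an analysis as the graph is revised would also show when newly added or newly endorsed edges promote a node to canary status.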
The study of broad and high-level questions (such as when we might attain HLMI or automate a large percentage of jobs) may be more societally relevant or intellectually motivating, but narrower studies focused on nearer-term, well-defined applications or impacts may be easier to reach certainty on.

A further risk is that this method, intended to identify warning signs so as to give time to debate transformative applications, may inadvertently speed up progress towards AI capabilities and applications. By fostering expert deliberation and mapping milestones, it is likely that important research projects and goals are highlighted and the field’s research roadmap is improved. This means our method must be used with caution. However, we do not believe this is a reason to abandon the approach, since these concerns must be balanced against the benefits of being able to deliberate upon and shape the impacts of AI in advance. In particular, we believe that the process of distilling information from experts in a way that can be communicated to wider society, including those currently underrepresented in debates about the future of AI, is likely to have many more benefits than costs.

The idea that we can identify ‘warning signs’ for progress assumes that there will be some time lag between progress on milestones, during which anticipatory governance work can take place. The extent to which this holds will vary, and in some cases unlocking a ‘canary’ capability could lead to very rapid progress on subsequent milestones. Future work could consider how to incorporate assessments of timescales into the causal graphs developed, so that it is easier to identify canaries which warn of future progress while still allowing time to prepare.

Future work should also critically consider what constitutes relevant ‘expertise’ for the task of identifying canaries, and further explore ways to effectively integrate expert knowledge with the values and perspectives of diverse publics. Our method finds a role for the expert situated within a larger democratic process of anticipating and regulating emerging technologies. Expert judgement can thereby support wider participation. However, processes that allow more interaction between experts and citizens could be even more effective. One limitation of the method presented in this paper is that it requires one to have already identified a particular transformative event of concern; it does not provide guidance on how to identify and prioritise between events. It may be valuable to consider how citizens affected by technology can play a role in identifying initial areas of concern, which can then feed into this process of expert elicitation.

Conclusion

We have presented a flexible method for identifying early warning signs, or ‘canaries’, in AI progress. Once identified, these canaries can provide focal points for anticipatory governance efforts, and can form the basis for meaningful participatory processes enabling citizens to steer AI developments and their impacts. Future work must now test this method by putting it into practice, which will more clearly reveal both its benefits and limitations.
Our artificial canaries offer a chance for forward-looking, democratic assessments of transformative technologies.

Acknowledgements

We thank the reviewers for their particularly detailed comments and engagement with this paper, the scholars at the Leverhulme Centre for the Future of Intelligence for fruitful discussions after our presentation, and the attendees of the workshop Evaluating Progress in AI at the European Conference on AI (Aug 2020) for recognising the potential of this work. We particularly thank Carolyn Ashurst and Luke Kemp for their efforts and commentary on our drafts.
References
[1] K. Crawford et al., ‘AI Now Report 2019’, AI Now Institute, p. 100, 2019.
[2] S. Russell, Human Compatible. Viking Press, 2019.
[3] C. Cath, S. Wachter, B. Mittelstadt, M. Taddeo, and L. Floridi, ‘Artificial Intelligence and the “Good Society”: the US, EU, and UK approach’, Sci. Eng. Ethics, vol. 24, no. 2, pp. 505–528, Apr. 2018, doi: 10.1007/s11948-017-9901-7.
[4] J. Whittlestone, R. Nyrup, A. Alexandrova, K. Dihal, and S. Cave, ‘Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research’, 2019, p. 59.
[5] Y. K. Dwivedi et al., ‘Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy’, Int. J. Inf. Manag., p. 101994, Aug. 2019, doi: 10.1016/j.ijinfomgt.2019.08.002.
[6] R. Gruetzemacher and J. Whittlestone, ‘The Transformative Potential of Artificial Intelligence’, ArXiv191200747 Cs, Sep. 2020. Accessed: Jan. 09, 2021. [Online]. Available: http://arxiv.org/abs/1912.00747.
[7] M. Brundage et al., ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, ArXiv180207228 Cs, Feb. 2018. Accessed: Jan. 15, 2021. [Online]. Available: http://arxiv.org/abs/1802.07228.
[8] P. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives. Yale University Press, 2020.
[9] C. B. Frey and M. A. Osborne, ‘The future of employment: How susceptible are jobs to computerisation?’, Technol. Forecast. Soc. Change, vol. 114, pp. 254–280, Jan. 2017, doi: 10.1016/j.techfore.2016.08.019.
[10] K. Grace, J. Salvatier, A. Dafoe, B. Zhang, and O. Evans, ‘Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts’, J. Artif. Intell. Res., vol. 62, pp. 729–754, Jul. 2018, doi: 10.1613/jair.1.11222.
[11] C. Z. Cremer, ‘Deep Limitations? Examining Expert Disagreement over Deep Learning’, Prog. Artif. Intell., Springer, to be published 2021.
[12] D. Collingridge, The Social Control of Technology. London: Frances Pinter, 1980.
[13] O. Etzioni, ‘How to know if artificial intelligence is about to destroy civilization’, MIT Technology Review. https://www.technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/ (accessed Mar. 12, 2020).
[14] A. Dafoe, ‘The academics preparing for the possibility that AI will destabilise global politics’, 80,000 Hours, 2018. https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/ (accessed Jan. 15, 2021).
[15] V. C. Müller and N. Bostrom, ‘Future Progress in Artificial Intelligence: A Survey of Expert Opinion’, in Fundamental Issues of Artificial Intelligence, V. C. Müller, Ed. Cham: Springer International Publishing, 2016, pp. 555–572.
[16] S. D. Baum, B. Goertzel, and T. G. Goertzel, ‘How long until human-level AI? Results from an expert assessment’, Technol. Forecast. Soc. Change, vol. 78, no. 1, pp. 185–195, Jan. 2011, doi: 10.1016/j.techfore.2010.09.006.
[17] S. Beard, T. Rowe, and J. Fox, ‘An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards’, Futures, vol. 115, p. 102469, Jan. 2020, doi: 10.1016/j.futures.2019.102469.
[18] P. E. Tetlock and D. Gardner, Superforecasting: The Art and Science of Prediction, First edition. New York: Crown Publishers, 2015.
[19] N. Benaich and I. Hogarth, ‘State of AI Report 2020’, 2020. https://www.stateof.ai/ (accessed Jan. 15, 2021).
[20] P. Eckersley and Y. Nasser, ‘AI Progress Measurement’, Electronic Frontier Foundation, Jun. 12, 2017. https://www.eff.org/ai/metrics (accessed Jan. 15, 2021).
[21] ‘Papers with Code’. https://paperswithcode.com (accessed Feb. 08, 2021).
[22] R. Perrault et al., ‘The AI Index 2019 Annual Report’, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, 2019.
[23] R. Gruetzemacher, ‘A Holistic Framework for Forecasting Transformative AI’, Big Data Cogn. Comput., vol. 3, no. 3, p. 35, Jun. 2019, doi: 10.3390/bdcc3030035.
[24] H. A. Linstone and M. Turoff, The Delphi Method. Reading, MA: Addison-Wesley, 1975.
[25] S. M. West, M. Whittaker, and K. Crawford, ‘Discriminating Systems: Gender, Race and Power in AI’, AI Now Institute, 2019. [Online]. Available: https://ainowinstitute.org/discriminatingsystems.html.
[26] P. Nemitz and M. Pfeffer, Prinzip Mensch - Macht, Freiheit und Demokratie im Zeitalter der Künstlichen Intelligenz. Verlag J.H.W. Dietz Nachf., 2020.
[27] Ipsos MORI, ‘Public views of Machine Learning: Findings from public research and engagement conducted on behalf of the Royal Society’, The Royal Society, 2017. [Online]. Available: https://royalsociety.org/-/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf.
[28] The RSA, ‘Artificial Intelligence: Real Public Engagement’, Royal Society for the encouragement of Arts, Manufactures and Commerce, London, 2018.
[29] T. Cohen, J. Stilgoe, and C. Cavoli, ‘Reframing the governance of automotive automation: insights from UK stakeholder workshops’, J. Responsible Innov., vol. 5, no. 3, pp. 257–279, Sep. 2018, doi: 10.1080/23299460.2018.1495030.
[30] M. Lengwiler, ‘Participatory Approaches in Science and Technology: Historical Origins and Current Practices in Critical Perspective’, Sci. Technol. Hum. Values, vol. 33, no. 2, pp. 186–200, Mar. 2008, doi: 10.1177/0162243907311262.
[31] M. Rask, ‘The tragedy of citizen deliberation – two cases of participatory technology assessment’, Technol. Anal. Strateg. Manag., vol. 25, no. 1, pp. 39–55, Jan. 2013, doi: 10.1080/09537325.2012.751012.
[32] J. Chilvers, ‘Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice’, Sci. Technol. Hum. Values, vol. 33, no. 2, pp. 155–185, Mar. 2008, doi: 10.1177/0162243907307594.
[33] G. Abels, ‘Participatory Technology Assessment and the “Institutional Void”: Investigating Democratic Theory and Representative Politics’, in Democratic Transgressions of Law, vol. 112, Brill, 2010, pp. 237–268.
[34] P. Biegelbauer and A. Loeber, ‘The Challenge of Citizen Participation to Democracy’, Institut für Höhere Studien (IHS), p. 46, 2010.
[35] G. Rowe and L. J. Frewer, ‘A Typology of Public Engagement Mechanisms’, Sci. Technol. Hum. Values, vol. 30, no. 2, pp. 251–290, Apr. 2005, doi: 10.1177/0162243904271724.
[36] L. Hong and S. E. Page, ‘Groups of diverse problem solvers can outperform groups of high-ability problem solvers’, Proc. Natl. Acad. Sci., vol. 101, no. 46, pp. 16385–16389, Nov. 2004, doi: 10.1073/pnas.0403723101.
[37] H. Landemore, Democratic Reason. Princeton: Princeton University Press, 2017.
[38] S. Joss and S. Bellucci, Participatory Technology Assessment: European Perspectives. London: Center for the Study of Democracy, 2002.
[39] Y. Zhao, C. Fautz, L. Hennen, K. R. Srinivas, and Q. Li, ‘Public Engagement in the Governance of Science and Technology’, in Science and Technology Governance and Ethics: A Global Perspective from Europe, India and China, M. Ladikas, S. Chaturvedi, Y. Zhao, and D. Stemerding, Eds. Cham: Springer International Publishing, 2015, pp. 39–51.
[40] M. T. Rask et al., Public Participation, Science and Society: Tools for Dynamic and Responsible Governance of Research and Innovation. Routledge - Taylor & Francis Group, 2018.
[41] J. Burgess and J. Chilvers, ‘Upping the ante: a conceptual framework for designing and evaluating participatory technology assessments’, Sci. Public Policy, vol. 33, no. 10, pp. 713–728, Dec. 2006, doi: 10.3152/147154306781778551.
[42] Y. T. Hsiao, S.-Y. Lin, A. Tang, D. Narayanan, and C. Sarahe, ‘vTaiwan: An Empirical Study of Open Consultation Process in Taiwan’, SocArXiv, preprint, Jul. 2018, doi: 10.31235/osf.io/xyhft.
[43] J. Hansen, ‘Operationalising the public in participatory technology assessment: A framework for comparison applied to three cases’, Sci. Public Policy, vol. 33, no. 8, pp. 571–584, Oct. 2006, doi: 10.3152/147154306781778678.
[44] T.-P. Ertiö, P. Tuominen, and M. Rask, ‘Turning Ideas into Proposals: A Case for Blended Participation During the Participatory Budgeting Trial in Helsinki’, in Electronic Participation: ePart 2019, Jul. 2019, pp. 15–25, doi: 10.1007/978-3-030-27397-2_2.
[45] M. Rask, ‘Foresight - balancing between increasing variety and productive convergence’, Technol. Forecast. Soc. Change, vol. 75, pp. 1157–1175, Oct. 2008, doi: 10.1016/j.techfore.2007.12.002.
[46] S. Mauksch, H. A. von der Gracht, and T. J. Gordon, ‘Who is an expert for foresight? A review of identification methods’, Technol. Forecast. Soc. Change, vol. 154, p. 119982, May 2020, doi: 10.1016/j.techfore.2020.119982.
[47] J. Saldivar, C. Parra, M. Alcaraz, R. Arteta, and L. Cernuzzi, ‘Civic Technology for Social Innovation: A Systematic Literature Review’, Comput. Support. Coop. Work (CSCW), vol. 28, no. 1–2, pp. 169–207, Apr. 2019, doi: 10.1007/s10606-018-9311-7.
[48] T. Kariotis and J. Darakhshan, ‘Fighting Back Algocracy: The need for new participatory approaches to technology assessment’, in Proceedings of the 16th Participatory Design Conference 2020 - Participation(s) Otherwise - Volume 2, Manizales, Colombia, Jun. 2020, pp. 148–153, doi: 10.1145/3384772.3385151.
[49] M. Whitman, C. Hsiang, and K. Roark, ‘Potential for participatory big data ethics and algorithm design: a scoping mapping review’, in Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - Volume 2, New York, NY, USA, Aug. 2018, pp. 1–6, doi: 10.1145/3210604.3210644.
[50] C. Buckner and K. Yang, ‘Mating dances and the evolution of language: What’s the next step?’, Biol. Philos., vol. 32, 2017, doi: 10.1007/s10539-017-9605-z.
[51] Y. LeCun, Y. Bengio, and G. Hinton, ‘Deep learning’, Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
[52] S. E. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools and Societies. Princeton: Princeton University Press, 2008.
[53] A. J. Scavarda, T. Bouzdine-Chameeva, S. M. Goldstein, J. M. Hays, and A. V. Hill, ‘A Review of the Causal Mapping Practice and Research Literature’, Abstract number 002-0256, Cancun, Mexico, 2004, p. 21.
[54] L. Markóczy and J. Goldberg, ‘A method for eliciting and comparing causal maps’, J. Manag., vol. 21, no. 2, pp. 305–333, Jan. 1995, doi: 10.1016/0149-2063(95)90060-8.
[55] C. Eden and F. Ackermann, ‘Cognitive mapping expert views for policy analysis in the public sector’, Eur. J. Oper. Res., vol. 152, no. 3, pp. 615–630, Feb. 2004, doi: 10.1016/S0377-2217(03)00061-4.
[56] C. Eden, ‘On the nature of cognitive maps’, 1992, doi: 10.1111/j.1467-6486.1992.tb00664.x.
[57] F. Ackermann, J. Bryson, and C. Eden, Visible Thinking: Unlocking Causal Mapping for Practical Business Results. John Wiley & Sons, 2004.
[58] G. Montibeller and V. Belton, ‘Causal maps and the evaluation of decision options - a review’, J. Oper. Res. Soc., vol. 57, no. 7, pp. 779–791, Jul. 2006, doi: 10.1057/palgrave.jors.2602214.
[59] A. J. Scavarda, T. Bouzdine-Chameeva, S. M. Goldstein, J. M. Hays, and A. V. Hill, ‘A Methodology for Constructing Collective Causal Maps’, Decis. Sci., vol. 37, no. 2, pp. 263–283, May 2006, doi: 10.1111/j.1540-5915.2006.00124.x.
[60] C. Eden, F. Ackermann, and S. Cropper, ‘The Analysis of Cause Maps’, J. Manag. Stud., vol. 29, no. 3, pp. 309–324, 1992, doi: 10.1111/j.1467-6486.1992.tb00667.x.
[61] F. Ackermann and C. Eden, ‘Using Causal Mapping with Group Support Systems to Elicit an Understanding of Failure in Complex Projects: Some Implications for Organizational Research’, Group Decis. Negot., vol. 14, no. 5, pp. 355–376, Sep. 2005, doi: 10.1007/s10726-005-8917-6.
[62] C. Eden, F. Ackermann, J. Bryson, G. Richardson, D. Andersen, and C. Finn, ‘Integrating Modes of Policy Analysis and Strategic Management Practice: Requisite Elements and Dilemmas’, p. 13, 2009.
[63] L.-M. Neudert and P. Howard, ‘Ready to vote: elections, technology and political campaigning in the United Kingdom’, Oxford Technology and Elections Commission, Report, Oct. 2019. Accessed: Jan. 11, 2021. [Online]. Available: https://apo.org.au/node/263976.
[64] G. Bolsover and P. Howard, ‘Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda’, Big Data, vol. 5, no. 4, pp. 273–276, Dec. 2017.
[65] M. J. Mazarr, R. Bauer, A. Casey, S. Heintz, and L. J. Matthews, ‘The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment’, Oct. 2019. Accessed: Jan. 14, 2021. [Online]. Available: https://www.rand.org/pubs/research_reports/RR2714.html.
[66] T. Wu, The Attention Merchants: From the Daily Newspaper to Social Media, How Our Time and Attention is Harvested and Sold. London: Atlantic Books, 2017.
[67] K. Starbird, ‘Disinformation’s spread: bots, trolls and all of us’, Nature, vol. 571, no. 7766, pp. 449–450, Jul. 2019.
[68] R. Gorwa and D. Guilbeault, ‘Unpacking the Social Media Bot: A Typology to Guide Research and Policy’, Policy Internet, vol. 12, no. 2, pp. 225–248, Jun. 2020, doi: 10.1002/poi3.184.
[69] E. Ferrara, ‘Disinformation and Social Bot Operations in the Run Up to the 2017 French Presidential Election’, Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 2995809, Jun. 2017, doi: 10.2139/ssrn.2995809.
[70] C. Shao, G. L. Ciampaglia, O. Varol, K.-C. Yang, A. Flammini, and F. Menczer, ‘The spread of low-credibility content by social bots’, Nat. Commun., vol. 9, no. 1, Art. no. 1, Nov. 2018, doi: 10.1038/s41467-018-06930-7.
[71] P. N. Howard, S. Woolley, and R. Calo, ‘Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration’, J. Inf. Technol. Polit., vol. 15, no. 2, pp. 81–93, Apr. 2018, doi: 10.1080/19331681.2018.1448735.
[72] M. Chessen, ‘The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy… and What Can Be Done About It’, in Artificial Intelligence Safety and Security, Chapman and Hall/CRC Press, 2018, pp. 127–144.
[73] K. Kertysova, ‘Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered’, 2018, doi: 10.1163/18750230-02901005.
[74] J. Brainard and P. R. Hunter, ‘Misinformation making a disease outbreak worse: outcomes compared for influenza, monkeypox, and norovirus’, SIMULATION, vol. 96, no. 4, pp. 365–374, Apr. 2020, doi: 10.1177/0037549719885021.
[75] E. Seger, S. Avin, G. Pearson, M. Briers, S. Ó hÉigeartaigh, and H. Bacon, ‘Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world’, The Alan Turing Institute, CSER, Dstl, 2020. Accessed: Jan. 15, 2021. [Online]. Available: https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf.
[76] K. H. Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don’t, Can’t, and Do Know. Oxford University Press, 2020.
Appendix
Table I. Limitations of deep learning as perceived and named by experts in [11].

Causal reasoning: the ability to detect and generalise from causal relations in data.
Common sense: having a set of background beliefs or assumptions which are useful across domains and tasks.
Meta-learning: the ability to learn how to best learn in each domain.
Architecture search: the ability to automatically choose the best architecture of a neural network for a task.
Hierarchical decomposition: the ability to decompose tasks and objects into smaller and hierarchical sub-components.
Cross-domain generalisation: the ability to apply learning from one task or domain to another.
Representation: the ability to learn abstract representations of the environment for efficient learning and generalisation.
Variable binding: the ability to attach symbols to learned representations, enabling generalisation and re-use.
Disentanglement: the ability to understand the components and composition of observations, and recombine and recognise them in different contexts.
Analogical reasoning: the ability to detect abstract similarity across domains, enabling learning and generalisation.
Concept formation: the ability to formulate, manipulate and comprehend abstract concepts.
Object permanence: the ability to represent objects as consistently existing even when out of sight.
Grammar: the ability to construct and decompose sentences according to correct grammatical rules.
Reading comprehension: the ability to detect narratives, semantic context, themes and relations between characters in long texts or stories.
Mathematical reasoning: the ability to develop, identify and search mathematical proofs and follow logical deduction in reasoning.
Visual question answering: the ability to answer open-ended questions about the content and interpretation of an image.
Uncertainty estimation: the ability to represent and consider different types of uncertainty.
Positing unobservables: the ability to account for unobservable phenomena, particularly in representing and navigating environments.
Reinterpretation: the ability to partially re-categorise, re-assign or reinterpret data in light of new information without retraining from scratch.
Theorising and hypothesising: the ability to propose theories and testable hypotheses, understand the difference between theory and reality, and the impact of data on theories.
Flexible memory: the ability to store, recognise and retrieve knowledge so that it can be used in new environments and tasks.
Efficient learning: the ability to learn efficiently from small amounts of data.
Interpretability: the ability for humans to interpret internal network dynamics so that researchers can manipulate network dynamics.
Continual learning: the ability to learn continuously as new data is acquired.
Active learning: the ability to learn and explore in self-directed ways.
Learning from inaccessible data: the ability to learn in domains where data is missing, difficult or expensive to acquire.
Learning from dynamic data: the ability to learn from a continually changing stream of data.
Navigating brittle environments: the ability to navigate irregular and complex environments which lack clear reward signals and short feedback loops.
Generating valuation functions: the ability to generate new valuation functions immediately from scratch to follow newly-given rules.
Scalability: the ability to scale up learning to deal with new features without needing disproportionately more data, model parameters, and computational power.
Learning in simulation: the ability to learn all relevant experience from a simulated environment.
Metric identification: the ability to identify appropriate metrics of success for complex tasks, such that optimising for the measured quantity accomplishes the task in the way intended.
Conscious perception: the ability to experience the world from a first-person perspective.
Context-sensitive decision making: the ability to adapt decision-making strategies to the needs and constraints of a given time or context.
It is worth noting there are apparent similarities and relationships between many of these milestones. For example, representation: the ability to learn abstract representations of the environment, seems closely related to variable binding: the ability to formulate place-holder concepts. The ability to apply learning from one task to another, cross-domain generalisation, seems closely related to analogical reasoning. Further progress in research will tell which of these are clearly separate milestones or more closely related notions.
Flexible memory, as described by experts in our sample, is the ability to recognise and store reusable information in a format that is flexible, so that it can be retrieved and updated when new knowledge is gained. We explain the reasoning behind the labelled arrows in Fig. 2 (see Fig. 3):

Fig. 3. Extract of Fig. 2, showing one candidate canary capability.
(A): compact representations are a prerequisite for flexible memory, since storing high-dimensional input in memory requires compressed, efficient and thus abstract representations.

(B): the ability to reinterpret data in light of new information likely requires flexible memory, since it requires the ability to retrieve and alter previously stored information.

(C) and (E): to make use of dynamic and changing data input, and to learn continuously over time, an agent must be able to store, correctly retrieve and modify previous data as new data comes in.

(D): in order to plan and execute strategies in brittle environments with long delays between actions and rewards, an agent must be able to store memories of past actions and rewards, easily retrieve this information, and continually update its best guess about how to obtain rewards in the environment.

(F): analogical reasoning involves comparing abstract representations, which requires forming, recognising, and retrieving representations of earlier observations.
Progress in flexible memory therefore seems likely to unlock or enable many other capabilities important for HLMI, especially those crucial for applying AI systems in real environments and to more complex tasks. These initial hypotheses should be validated and explored in more depth by a wider range of experts.
be identified as a key bottleneck. We would then seek to gather experts (e.g. in a workshop) who can identify and map milestones (or current limitations) in NLP likely to be relevant to improving the functionality of social bots. This will include machine learning experts who specialise in NLP and understand the technical barriers to developing more convincing social bots; as well as experts in developmental
Special Issue on Artificial Intelligence, Paving the Way to the Future- 105 -
linguistics and evolutionary biology, who can determine suitable
benchmarks and the required skills, and who understand the order in which linguistic skills are usually developed in animals.
From these expert elicitation processes we would acquire a list
of milestones in NLP which, if achieved, would likely lower the cost and increase the effectiveness of online manipulation. Experts would then order milestones into a causal graph of dependencies. Given the interdisciplinary nature of the question at hand, we suggest in this case that the graph should be directly developed by the whole group. A mediated discussion in a workshop context can help to draw out different connections between milestones and the reasoning behind them, ensuring participants do not make judgements outside their range of expertise. A voting procedure such as majority voting should be used if no consensus can be reached. In a final step, experts can highlight milestone nodes in the final graph which are either marked by many outgoing nodes or are bottlenecks for a series of subsequent nodes that are not accessed by an alternative pathway. These (e.g. sentiment analysis) are our canaries: areas of progress which serve as a warning sign of NLP being applied more effectively in voter manipulation.
Having looked at how this methodology can be used to identify
warning signs of a specific societal impact, we next illustrate a different application of the method in which we aim to identify warning signs of a research breakthrough.
B. Second Illustration: High-level Machine intelligence
We use this second example to illustrate in more detail what
the process of developing a causal map might look like once initial milestones have been identified, and how canary capabilities can be identified from the map.
We define high-level machine intelligence (HLMI) as an AI system
(or collection of AI systems) that performs at the level of an average human adult on key cognitive measures required for economically relevant tasks. We choose to focus on HLMI since it is a milestone which has been the focus of previous forecasting studies [10], [15], and
which, despite the ambiguity and uncertain nature of the concepts, is interesting to attempt to examine, because it is likely to precipitate widely transformative societal impacts.
To trial this method, we used interview results from [11]. 25
experts from a diverse set of disciplines (including computer science, cognitive science and neuroscience) were interviewed and asked what they believed to be the main limitations preventing current machine learning methods from achieving the capabilities of HLMI. These limitations can be translated into ‘milestones’: capabilities experts believe machine learning methods need to achieve on the path to HLMI, i.e. the output of step 1 of our methodology.
Having identified key milestones, step 2 of our methodology involves
exploring dependencies between them using causal graphs. We use the software VenSim to illustrate hypothesised relationships between milestones (see Fig. 2). For example, we hypothesise that the ability to formulate, comprehend and manipulate abstract concepts may be an important prerequisite to the ability to account for unobservable phenomena, which is in turn important for reasoning about causality. This map of causal relations and dependencies was constructed by the authors alone, and is therefore far from definitive, but provides a useful illustration of the kind of output this methodology can produce.
Based on this causal map, we can identify three candidates for
canary capabilities:
Representations that allow variable-binding and
disentanglement: the ability to construct abstract, discrete and disentangled representations of inputs, to allow for efficiency and variable-binding. We hypothesise that this capability underpins several others, including grammar, mathematical reasoning, concept formation, and flexible memory.
Flexible memory: the ability to store, recognise, and re-use memory
and knowledge representations. We hypothesise that this ability would unlock many others, including the ability to learn from dynamic data, to learn in a continual fashion, and to update old interpretations of data as new information is acquired.
Positing unobservables: the ability to recognise and use
unobservable concepts that are not represented in the visual features of a scene, including numerosity or intentionality.
We might tentatively suggest that these are important capabilities
to track progress on from the perspective of anticipating HLMI.
VI. Discussion and Future Directions
As the two illustrative examples show, there are many
complexities and challenges involved in putting this method into practice. One particular challenge is that there is likely to be substantial uncertainty in the causal graphs developed. This uncertainty can come in many forms.
Milestones that are not well understood are likely to be composed
of several sub-milestones. As more research is produced, the graph will be in need of revision. Some such revisions may include the addition of connections between milestones that were previously not foreseen, Representa tion,
Variable-Bindi ng,
Disentangl ementHierarchical
Decompos itionGrammer
Mathema tical
Reasoning
Catastro phic
Forge /t_tingConcept
Formation
Flexible
Memory
Reinterpreta tionsDynamic DataContinual
Learning
Bri/t_tle E nviron ments
Context-Dependent
DecisionsAnalogi cal Reasoni ng,
Overfi/t_tingAdverserial
A/t_tacksPosit
UnobservablesCausa lityTheorising,
HypothesisingMeta-Learni ng,
Architecture-SearchUncertainty
Estimation
Cross-Do main
Genera lisationScalab ilityCommon
SenseObject
PermanenceVisual /Q_ues tion
Answering
Reading
Comprehension
Active
LearningEnviron mental
Pressure
Misguided Data Co llectionEfficient
Learning
Fig. 2. Cognitive map of dependencies between milestones collected in expert elicitations. Arrows coloured in green signify those milestones that have most
outgoing arrows. See appendix for description of each milestone and dependency relations between one ‘canary’ node and subsequent nodes.
International Journal of Interactive Multimedia and Artificial Intelligence, Vol. 6, Nº5- 106 -
which in turn might alter the number of outgoing connections from
nodes and turn them into potent nodes, i.e. ‘canaries’.
The process of involving a diversity of experts in a multi-stage,
collaborative process is designed to reduce this uncertainty by allowing for the identification of nodes and relationships that are widely agreed upon and so more likely to be robust. However, considerable uncertainty will inevitably remain due to the nature of forecasting. The higher the level of abstraction and ambiguity in the events studied (like events such as HLMI, which we use for our illustration) the greater the uncertainty inherent in the map and the less reliable the forecasts will likely be. It will be important to find ways to acknowledge and represent this uncertainty in the maps developed and conclusions drawn from them. This might include marking uncertainties in the graph and taking this into account when identifying and communicating ‘canary’ nodes.
Given the uncertainty inherent in forecasting, we must consider
what kinds of inevitable misjudgements are most important to try to avoid. A precautionary perspective would suggest it is better to slightly overspend resources on monitoring canaries that turn out to be false positives, rather than to miss an opportunity to anticipate significant technological impacts. This suggests we may want to set a low threshold for what should be considered a ‘canary’ in the final stage of the method.
The uncertainty raises an important question: will it on average be
better to have an imperfect, uncertain mapping of milestones rather than none at all? There is some chance that incorrect estimates of ‘canaries’ could be harmful. An incorrect mapping could focus undue attention on some avenue of AI progress, waste resources or distract from more important issues.
Our view is that it is nonetheless preferable to attempt a
prioritisation. The realistic alternative is that anticipatory governance is not attempted or informed by scholars’ individual estimates in an ad-hoc manner, which we should expect to be incorrect more often than our collective and structured expert elicitation. How accurate our method is can only be studied by trialling it and tracking its predictions as AI research progresses to confirm or refute the forecasts.
Future studies are likely to face several trade-offs in managing the
uncertainty. For example, a large and cognitively diverse expert group may be better placed to develop robust maps eventually, but this may be a much more challenging process than doing it with a smaller, less diverse group -- making the latter a tempting choice (see [45]
for a discussion of this trade-off). The study of broad and high-level questions (such as when we might attain HLMI or automate a large percentage of jobs) may be more societally relevant or intellectually motivating, but narrower studies focused on nearer-term, well-defined applications or impacts may be easier to reach certainty on.
A further risk is that this method, intended to identify warning
signs so as to give time to debate transformative applications, may inadvertently speed up progress towards AI capabilities and applications. By fostering expert deliberation and mapping milestones, it is likely that important research projects and goals are highlighted and the field’s research roadmap is improved. This means our method must be used with caution.
However, we do not believe this is a reason to abandon the approach,
since these concerns must be balanced against the benefits of being able to deliberate upon and shape the impacts of AI in advance. In particular, we believe that the process of distilling information from experts in a way that can be communicated to wider society, including those currently underrepresented in debates about the future of AI, is likely to have many more benefits than costs.
The idea that we can identify ‘warning signs’ for progress assumes
that there will be some time lag between progress on milestones, during which anticipatory governance work can take place. Of course, the extent to which this is possible will vary, and in some cases, unlocking a ‘canary’ capability could lead to very rapid progress on subsequent milestones. Future work could consider how to incorporate assessment of timescales into the causal graphs developed, so that it is easier to identify canaries which warn of future progress while allowing time to prepare.
Future work should also critically consider what constitutes
relevant ‘expertise’ for the task of identifying canaries, and further explore ways to effectively integrate expert knowledge with the values and perspectives of diverse publics. Our method finds a role for the expert situated in a larger democratic process of anticipating and regulating emerging technologies. Expert judgement can thereby be beneficial to wider participation. However, processes that allow more interaction between experts and citizens could be even more effective. One limitation of the method presented in this paper is that it requires one to have already identified a particular transformative event of concern, but does not provide guidance on how to identify and prioritise between events. It may be valuable to consider how citizens that are impacted by technology can play a role in identifying initial areas of concern, which can then feed into this process of expert elicitation to address the concerns.
VII. Conclusion
We have presented a flexible method for identifying early warning
signs, or ‘canaries’ in AI progress. Once identified, these canaries can provide focal points for anticipatory governance efforts, and can form the basis for meaningful participatory processes enabling citizens to steer AI developments and their impacts. Future work must now test this method by putting it into practice, which will more clearly reveal both benefits and limitations. Our artificial canaries offer a chance for forward-looking, democratic assessments of transformative technologies.
Appendix
It is worth noting there are apparent similarities and relationships between many of these milestones. For example, representation (the ability to learn abstract representations of the environment) seems closely related to variable binding (the ability to formulate place-holder concepts). The ability to apply learning from one task to another, cross-domain generalisation, seems closely related to analogical reasoning. Further research progress will tell which of these are clearly separate milestones and which are more closely related notions.
Flexible memory, as described by experts in our sample, is the
ability to recognize and store reusable information, in a format that is flexible so that it can be retrieved and updated when new knowledge is gained. We explain the reasoning behind the labelled arrows in Fig. 2 (see Fig. 3):
[Fig. 3 (image): nodes include Analogical Reasoning, Representation, Variable Binding, Disentanglement, Flexible Memory, Dynamic Data, Brittle Environment, Continual Learning, and Reinterpretation, connected by labelled arrows (A)–(F).]
Fig. 3. Extract of Fig. 2, showing one candidate canary capability.
• (A): compact representations are a prerequisite for flexible memory, since storing high-dimensional input in memory requires compressed, efficient and thus abstract representations.
• (B): the ability to reinterpret data in light of new information likely requires flexible memory, since it requires the ability to retrieve and alter previously stored information.
• (C) and (E): to make use of dynamic and changing data input, and to learn continuously over time, an agent must be able to store, correctly retrieve and modify previous data as new data comes in.
• (D): in order to plan and execute strategies in brittle environments with long delays between actions and rewards, an agent must be able to store memories of past actions and rewards, easily retrieve this information, and continually update its best guess about how to obtain rewards in the environment.
• (F): analogical reasoning involves comparing abstract representations, which requires forming, recognising, and retrieving representations of earlier observations.
Progress in flexible memory therefore seems likely to unlock
or enable many other capabilities important for HLMI, especially those crucial for applying AI systems in real environments and more complex tasks. These initial hypotheses should be validated and explored in more depth by a wider range of experts.
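For concreteness, the ranking step that surfaces candidate ‘canaries’ can be sketched in a few lines of Python with the networkx library. The edge list below is a hand-picked illustration of dependencies (A)–(F) above (not output from our elicitation), and it reads (F) as flexible memory enabling analogical reasoning.

```python
import networkx as nx

# Illustrative edge list: each arrow points from a prerequisite milestone to
# the milestone it enables, following dependencies (A)-(F) discussed above.
edges = [
    ("representation", "flexible memory"),                   # (A)
    ("flexible memory", "reinterpretation"),                 # (B)
    ("flexible memory", "learning from dynamic data"),       # (C)
    ("flexible memory", "navigating brittle environments"),  # (D)
    ("flexible memory", "continual learning"),               # (E)
    ("flexible memory", "analogical reasoning"),             # (F)
]
graph = nx.DiGraph(edges)

# Candidate 'canaries' are the nodes with the most outgoing arrows: progress
# on them would unlock the largest number of downstream milestones.
for milestone, out_degree in sorted(graph.out_degree(), key=lambda p: -p[1]):
    print(f"{milestone}: {out_degree} outgoing dependencies")
```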
Acknowledgements
We thank reviewers for their particularly detailed comments and engagement with this paper, the scholars at the Leverhulme Centre for the Future of Intelligence for fruitful discussions after our presentation, as well as the attendees of the workshop Evaluating Progress in AI at the European Conference on AI (Aug 2020) for recognizing the potential of this work. We particularly thank Carolyn Ashurst and Luke Kemp for their efforts and commentary on our drafts.
TABLE I. Limitations of Deep Learning as Perceived and Named by Experts, Found in [11]

Causal reasoning: the ability to detect and generalise from causal relations in data.
Common sense: having a set of background beliefs or assumptions which are useful across domains and tasks.
Meta-learning: the ability to learn how to best learn in each domain.
Architecture search: the ability to automatically choose the best architecture of a neural network for a task.
Hierarchical decomposition: the ability to decompose tasks and objects into smaller and hierarchical sub-components.
Cross-domain generalization: the ability to apply learning from one task or domain to another.
Representation: the ability to learn abstract representations of the environment for efficient learning and generalisation.
Variable binding: the ability to attach symbols to learned representations, enabling generalisation and re-use.
Disentanglement: the ability to understand the components and composition of observations, and recombine and recognise them in different contexts.
Analogical reasoning: the ability to detect abstract similarity across domains, enabling learning and generalisation.
Concept formation: the ability to formulate, manipulate and comprehend abstract concepts.
Object permanence: the ability to represent objects as consistently existing even when out of sight.
Grammar: the ability to construct and decompose sentences according to correct grammatical rules.
Reading comprehension: the ability to detect narratives, semantic context, themes and relations between characters in long texts or stories.
Mathematical reasoning: the ability to develop, identify and search mathematical proofs and follow logical deduction in reasoning.
Visual question answering: the ability to answer open-ended questions about the content and interpretation of an image.
Uncertainty estimation: the ability to represent and consider different types of uncertainty.
Positing unobservables: the ability to account for unobservable phenomena, particularly in representing and navigating environments.
Reinterpretation: the ability to partially re-categorise, re-assign or reinterpret data in light of new information without retraining from scratch.
Theorising and hypothesising: the ability to propose theories and testable hypotheses, understand the difference between theory and reality, and the impact of data on theories.
Flexible memory: the ability to store, recognise and retrieve knowledge so that it can be used in new environments and tasks.
Efficient learning: the ability to learn efficiently from small amounts of data.
Interpretability: the ability for humans to interpret internal network dynamics so that researchers can manipulate network dynamics.
Continual learning: the ability to learn continuously as new data is acquired.
Active learning: the ability to learn and explore in self-directed ways.
Learning from inaccessible data: the ability to learn in domains where data is missing, difficult or expensive to acquire.
Learning from dynamic data: the ability to learn from a continually changing stream of data.
Navigating brittle environments: the ability to navigate irregular and complex environments which lack clear reward signals and short feedback loops.
Generating valuation functions: the ability to generate new valuation functions immediately from scratch to follow newly-given rules.
Scalability: the ability to scale up learning to deal with new features without needing disproportionately more data, model parameters, and computational power.
Learning in simulation: the ability to learn all relevant experience from a simulated environment.
Metric identification: the ability to identify appropriate metrics of success for complex tasks, such that optimising for the measured quantity accomplishes the task in the way intended.
Conscious perception: the ability to experience the world from a first-person perspective.
Context-sensitive decision making: the ability to adapt decision-making strategies to the needs and constraints of a given time or context.
References
[1] K. Crawford et al., ‘AI Now Report 2019’, AI 2019 Report, p. 100, 2019.
[2] S. Russell, Human Compatible. Viking Press, 2019.
[3] C. Cath, S. Wachter, B. Mittelstadt, M. Taddeo, and L. Floridi, ‘Artificial Intelligence and the “Good Society”: the US, EU, and UK approach’, Sci.
Eng. Ethics, vol. 24, no. 2, pp. 505–528, Apr. 2018, doi: 10.1007/s11948-017-9901-7.
[4] J. Whittlestone, R. Nyrup, A. Alexandrova, K. Dihal, and S. Cave, ‘Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research’, 2019, p. 59.
[5] Y. K. Dwivedi et al., ‘Artificial Intelligence (AI): Multidisciplinary
perspectives on emerging challenges, opportunities, and agenda for research, practice and policy’, Int. J. Inf. Manag., p. 101994, Aug. 2019, doi: 10.1016/j.ijinfomgt.2019.08.002.
[6] R. Gruetzemacher and J. Whittlestone, ‘The Transformative Potential of Artificial Intelligence’, ArXiv191200747 Cs, Sep. 2020, Accessed: Jan. 09, 2021. [Online]. Available: http://arxiv.org/abs/1912.00747.
[7] M. Brundage et al., ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, ArXiv180207228 Cs, Feb.
2018, Accessed: Jan. 15, 2021. [Online]. Available: http://arxiv.org/abs/1802.07228.
[8] P. Howard, Lie Machines, How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives . Yale: Yale
University Press, 2020.
[9] C. B. Frey and M. A. Osborne, ‘The future of employment: How susceptible are jobs to computerisation?’, Technol. Forecast. Soc. Change ,
vol. 114, pp. 254–280, Jan. 2017, doi: 10.1016/j.techfore.2016.08.019.
[10] K. Grace, J. Salvatier, A. Dafoe, B. Zhang, and O. Evans, ‘Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts’, J. Artif. Intell. Res., vol. 62, pp. 729–754, Jul. 2018, doi: 10.1613/jair.1.11222.
[11] C. Z. Cremer, ‘Deep Limitations? Examining Expert Disagreement over Deep Learning’, Prog. Artif. Intell. Springer, to be published 2021.
[12] D. Collingridge, The social control of technology . London: Frances Pinter,
1980.
[13] O. Etzioni, ‘How to know if artificial intelligence is about to destroy civilization’, MIT Technology Review . https://www.technologyreview.
com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/ (accessed Mar. 12, 2020).
[14] A. Dafoe, ‘The academics preparing for the possibility that AI will destabilise global politics’, 80,000 Hours, 2018. https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/ (accessed Jan. 15, 2021).
[15] V. C. Müller and N. Bostrom, ‘Future Progress in Artificial Intelligence: A Survey of Expert Opinion’, in Fundamental Issues of Artificial Intelligence,
V. C. Müller, Ed. Cham: Springer International Publishing, 2016, pp. 555–572.
[16] S. D. Baum, B. Goertzel, and T. G. Goertzel, ‘How long until human-level AI? Results from an expert assessment’, Technol. Forecast. Soc. Change ,
vol. 78, no. 1, pp. 185–195, Jan. 2011, doi: 10.1016/j.techfore.2010.09.006.
[17] S. Beard, T. Rowe, and J. Fox, ‘An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards’, Futures,
vol. 115, p. 102469, Jan. 2020, doi: 10.1016/j.futures.2019.102469.
[18] P. E. Tetlock and D. Gardner, Superforecasting: the art and science of
prediction, First edition. New York: Crown Publishers, 2015.
[19] N. Benaich and I. Hogarth, ‘State of AI Report 2020’, 2020. https://www.stateof.ai/ (accessed Jan. 15, 2021).
[20] P. Eckersley and Y. Nasser, ‘AI Progress Measurement’, Electronic Frontier
Foundation, Jun. 12, 2017. https://www.eff.org/ai/metrics (accessed Jan. 15, 2021).
[21] ‘Papers with Code’, Available at: https://paperswithcode.com (accessed
Feb. 08, 2021).
[22] R. Perrault et al., ‘The AI Index 2019 Annual Report’, AI Index Steer.
Comm. Hum.-Centered AI Inst. Stanf. Univ. Stanf. CA, 2019.
[23] R. Gruetzemacher, ‘A Holistic Framework for Forecasting Transformative AI’, Big Data Cogn. Comput., vol. 3, no. 3, p. 35, Jun. 2019, doi: 10.3390/bdcc3030035.
[24] H. A. Linstone and M. Turoff, The delphi method . Addison-Wesley
Reading, MA, 1975.
[25] S. M. West, M. Whittaker, and K. Crawford, ‘Discriminating Systems: Gender, Race and Power in AI’, AI Now Institute, 2019. [Online]. Available: Retrieved from https://ainowinstitute.org/discriminatingsystems.html.
[26] P. Nemitz and M. Pfeffer, Prinzip Mensch - Macht, Freiheit und Demokratie im Zeitalter der Künstlichen Intelligenz. Verlag J.H.W. Dietz Nachf., 2020.
[27] M. Ipsos, ‘Public views of Machine Learning: Findings from public research and engagement conducted on behalf of the Royal Society’, THe Royal Society, 2017. [Online]. Available: https://royalsociety.org/-/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf.
[28] The RSA, ‘Artificial Intelligence: Real Public Engagement’, Royal Society for the encouragement of Arts, Manufactures and Commerce, London, 2018.
[29] T. Cohen, J. Stilgoe, and C. Cavoli, ‘Reframing the governance of automotive automation: insights from UK stakeholder workshops’, J. Responsible Innov. , vol. 5, no. 3, pp. 257–279, Sep. 2018, doi:
10.1080/23299460.2018.1495030.
[30] M. Lengwiler, ‘Participatory Approaches in Science and Technology: Historical Origins and Current Practices in Critical Perspective’, Sci. Technol. Hum. Values , vol. 33, no. 2, pp. 186–200, Mar. 2008, doi:
10.1177/0162243907311262.
[31] M. Rask, ‘The tragedy of citizen deliberation – two cases of participatory technology assessment’, Technol. Anal. Strateg. Manag., vol. 25, no. 1, pp. 39–55, Jan. 2013, doi: 10.1080/09537325.2012.751012.
[32] J. Chilvers, ‘Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice’, Sci.
Technol. Hum. Values, vol. 33, no. 2, pp. 155–185, Mar. 2008, doi: 10.1177/0162243907307594.
[33] G. Abels, ‘Participatory Technology Assessment And The “Institutional Void”: Investigating Democratic Theory And Representative Politics” published on 01 Jan 2010 by Brill.’, in Democratic Transgressions of Law ,
vol. 112, Brill, 2010, pp. 237–268.
[34] P. Biegelbauer and A. Loeber, ‘The Challenge of Citizen Participation to Democracy’, Inst. Für Höhere Stud. - Inst. Adv. Stud. IHS, p. 46, 2010.
[35] G. Rowe and L. J. Frewer, ‘A Typology of Public Engagement Mechanisms’, Sci. Technol. Hum. Values, vol. 30, no. 2, pp. 251–290, Apr. 2005, doi: 10.1177/0162243904271724.
[36] L. Hong and S. E. Page, ‘Groups of diverse problem solvers can outperform groups of high-ability problem solvers’, Proc. Natl. Acad. Sci., vol. 101, no. 46, pp. 16385–16389, Nov. 2004, doi: 10.1073/pnas.0403723101.
[37] H. Landemore, Democratic Reason. Princeton: Princeton University Press,
2017.
[38] S. Joss and S. Bellucci, Participatory Technology Assessment: European
Perspectives. London: Center for the Study of Democracy, 2002.
[39] Y. Zhao, C. Fautz, L. Hennen, K. R. Srinivas, and Q. Li, ‘Public Engagement in the Governance of Science and Technology’, in Science and Technology
Governance and Ethics: A Global Perspective from Europe, India and China ,
M. Ladikas, S. Chaturvedi, Y. Zhao, and D. Stemerding, Eds. Cham: Springer International Publishing, 2015, pp. 39–51.
[40] M. T. Rask et al., Public Participation, Science and Society: Tools for
Dynamic and Responsible Governance of Research and Innovation .
Routledge - Taylor & Francis Group, 2018.
[41] J. Burgess and J. Chilvers, ‘Upping the ante: a conceptual framework for designing and evaluating participatory technology assessments’, Sci. Public Policy , vol. 33, no. 10, pp. 713–728, Dec. 2006, doi:
10.3152/147154306781778551.
[42] Y. T. Hsiao, S.-Y. Lin, A. Tang, D. Narayanan, and C. Sarahe, ‘vTaiwan: An Empirical Study of Open Consultation Process in Taiwan’, SocArXiv, preprint, Jul. 2018. doi: 10.31235/osf.io/xyhft.
[43] J. Hansen, ‘Operationalising the public in participatory technology assessment: A framework for comparison applied to three cases’, Sci. Public Policy, vol. 33, no. 8, pp. 571–584, Oct. 2006, doi: 10.3152/147154306781778678.
[44] T.-P. Ertiö, P. Tuominen, and M. Rask, ‘Turning Ideas into Proposals: A Case for Blended Participation During the Participatory Budgeting Trial in Helsinki’, in Electronic Participation: ePart 2019, Jul. 2019, pp. 15–25, doi: 10.1007/978-3-030-27397-2_2.
[45] M. Rask, ‘Foresight — balancing between increasing variety and productive convergence’, Technol. Forecast. Soc. Change, vol. 75, pp. 1157–1175, Oct. 2008, doi: 10.1016/j.techfore.2007.12.002.
[46] S. Mauksch, H. A. von der Gracht, and T. J. Gordon, ‘Who is an expert for foresight? A review of identification methods’, Technol. Forecast. Soc.
Change, vol. 154, p. 119982, May 2020, doi: 10.1016/j.techfore.2020.119982.
[47] J. Saldivar, C. Parra, M. Alcaraz, R. Arteta, and L. Cernuzzi, ‘Civic Technology for Social Innovation: A Systematic Literature Review’, Comput. Support. Coop. Work CSCW, vol. 28, no. 1–2, pp. 169–207, Apr. 2019, doi: 10.1007/s10606-018-9311-7.
[48] T. Kariotis and J. Darakhshan, ‘Fighting Back Algocracy: The need for new participatory approaches to technology assessment’, in Proceedings
of the 16th Participatory Design Conference 2020 - Participation(s) Otherwise - Volume 2, Manizales Colombia, Jun. 2020, pp. 148–153, doi: 10.1145/3384772.3385151.
[49] M. Whitman, C. Hsiang, and K. Roark, ‘Potential for participatory big data ethics and algorithm design: a scoping mapping review’, in Proceedings
of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - Volume 2, New York, NY, USA, Aug. 2018, pp. 1–6, doi: 10.1145/3210604.3210644.
[50] C. Buckner and K. Yang, ‘Mating dances and the evolution of language: What’s the next step?’, Biol. Philos., vol. 32, 2017, doi: 10.1007/s10539-
017-9605-z.
[51] Y. LeCun, Y. Bengio, and G. Hinton, ‘Deep learning’, Nature, vol. 521, no.
7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
[52] S. E. Page, The Difference: How the Power of Diversity Creates Better Groups,
Firms, Schools and Societies. Princeton: Princeton University Press, 2008.
[53] A. J. Scavarda, T. Bouzdine-Chameeva, S. M. Goldstein, J. M. Hays, and A. V. Hill, ‘A Review of the Causal Mapping Practice and Research Literature’, in Abstract number: 002-0256, Cancun, Mexico, 2004, p. 21.
[54] L. Markóczy and J. Goldberg, ‘A method for eliciting and comparing causal maps’, J. Manag., vol. 21, no. 2, pp. 305–333, Jan. 1995, doi: 10.1016/0149-2063(95)90060-8.
[55] C. Eden and F. Ackermann, ‘Cognitive mapping expert views for policy analysis in the public sector’, Eur. J. Oper. Res., vol. 152, no. 3, pp. 615–630,
Feb. 2004, doi: 10.1016/S0377-2217(03)00061-4.
[56] C. Eden, ‘On the nature of cognitive maps’, 1992, doi: 10.1111/j.1467-6486.1992.tb00664.x.
[57] F. Ackermann, J. Bryson, and C. Eden, Visible Thinking: Unlocking Causal Mapping for Practical Business Results. John Wiley & Sons, 2004.
[58] G. Montibeller and V. Belton, ‘Causal maps and the evaluation of decision options—a review’, J. Oper. Res. Soc. , vol. 57, no. 7, pp. 779–791, Jul. 2006,
doi: 10.1057/palgrave.jors.2602214.
[59] A. J. Scavarda, T. Bouzdine-Chameeva, S. M. Goldstein, J. M. Hays, and A. V. Hill, ‘A Methodology for Constructing Collective Causal Maps*’, Decis. Sci., vol. 37, no. 2, pp. 263–283, May 2006, doi: 10.1111/j.1540-5915.2006.00124.x.
[60] C. Eden, F. Ackermann, and S. Cropper, ‘The Analysis of Cause Maps’, J. Manag. Stud., vol. 29, no. 3, pp. 309–324, 1992, doi: https://doi.org/10.1111/j.1467-6486.1992.tb00667.x.
[61] F. Ackermann and C. Eden, ‘Using Causal Mapping with Group Support Systems to Elicit an Understanding of Failure in Complex Projects: Some Implications for Organizational Research’, Group Decis. Negot., vol. 14,
no. 5, pp. 355–376, Sep. 2005, doi: 10.1007/s10726-005-8917-6.
[62] C. Eden, F. Ackermann, J. Bryson, G. Richardson, D. Andersen, and C. Finn, ‘Integrating Modes of Policy Analysis and Strategic Management Practice: Requisite Elements and Dilemmas’, p. 13, 2009.
[63] L.-M. Neudert and P. Howard, ‘Ready to vote: elections, technology and political campaigning in the United Kingdom’, Oxford Technology and Elections Commission, Report, Oct. 2019. Accessed: Jan. 11, 2021. [Online]. Available: https://apo.org.au/node/263976.
[64] G. Bolsover and P. Howard, ‘Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda’, Big Data, vol. 5, no. 4, pp. 273–276, Dec. 2017.
[65] M. J. Mazarr, R. Bauer, A. Casey, S. Heintz, and L. J. Matthews, ‘The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment’, Oct. 2019, Accessed: Jan. 14, 2021. [Online]. Available: https://www.rand.org/pubs/research_reports/RR2714.html.
[66] T. Wu, The Attention Merchants: From the Daily Newspaper to Social Media, How Our Time and Attention is Harvested and Sold. London: Atlantic Books, 2017.
[67] K. Starbird, ‘Disinformation’s spread: bots, trolls and all of us’, Nature,
vol. 571, no. 7766, pp. 449–450, Jul. 2019.
[68] R. Gorwa and D. Guilbeault, ‘Unpacking the Social Media Bot: A Typology to Guide Research and Policy’, Policy Internet, vol. 12, no. 2, pp.
225–248, Jun. 2020, doi: 10.1002/poi3.184.
[69] E. Ferrara, ‘Disinformation and Social Bot Operations in the Run Up to the 2017 French Presidential Election’, Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 2995809, Jun. 2017. doi: 10.2139/ssrn.2995809.
[70] C. Shao, G. L. Ciampaglia, O. Varol, K.-C. Yang, A. Flammini, and F. Menczer, ‘The spread of low-credibility content by social bots’, Nat.
Commun., vol. 9, no. 1, Art. no. 1, Nov. 2018, doi: 10.1038/s41467-018-06930-7.
[71] P. N. Howard, S. Woolley, and R. Calo, ‘Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration’, J. Inf. Technol. Polit. , vol. 15, no. 2, pp. 81–93, Apr. 2018, doi:
10.1080/19331681.2018.1448735.
[72] M. Chessen, ‘The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy… and What can be Done About It.’, in Artificial
Intelligence Safety and Security , Chapman and Hall/CRC Press, 2018, pp.
127–144.
[73] K. Kertysova, ‘Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered’, 2018, doi: 10.1163/18750230-02901005.
[74] J. Brainard and P. R. Hunter, ‘Misinformation making a disease outbreak worse: outcomes compared for influenza, monkeypox, and norovirus’, SIMULATION, vol. 96, no. 4, pp. 365–374, Apr. 2020, doi:
10.1177/0037549719885021.
[75] E. Seger, S. Avin, G. Pearson, M. Briers, S. Ó hÉigeartaigh, and H. Bacon, ‘Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world’, Alan Turing Institute, CSER, dstl, 2020. Accessed: Jan. 15, 2021. [Online]. Available: https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf.
[76] K. H. Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don’t, Can’t, and Do Know. Oxford University Press, 2020.
Jess Whittlestone

Jess Whittlestone is a Senior Research Associate at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. She works on various aspects of AI ethics and policy, with a particular focus on what we can do today to ensure AI is safe and beneficial in the long-term. She holds a PhD in Behavioural Science from the University of Warwick and a degree in Mathematics and Philosophy from Oxford University.

Carla Zoe Cremer

Carla Zoe is a Research Scholar at the Future of Humanity Institute at the University of Oxford and a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge. Her background is in neurobiology, acquired at Ludwig-Maximilian University in Munich and ETH Zurich. She works on comparative cognition, the limitations of deep learning, and on estimating tail-risks of emerging technologies.

Philosophical Cyborg (Part 1)
*This post is part of the output from AI Safety Camp 2023’s Cyborgism track, run by Nicholas Kees Dupuis - thank you to AISC organizers & funders for their support. Thank you for comments from Peter Hroššo; and the helpful background of conversations about the possibilities (and limits) of LLM-assisted cognition with Julia Persson, Kyle McDonnell, and Daniel Clothiaux.*
*Epistemic status: this is not a rigorous or quantified study, and much of this might be obvious to people experienced with LLMs, philosophy, or both. It is mostly a writeup of my (ukc10014) investigations during AISC and is a companion to The Compleat Cybornaut.*
**TL;DR**
=========
This post documents research into using LLMs for domains such as culture, politics, or philosophy (which arguably are different - from the perspective of research approach - from science or running a business, the more common suggested use-cases for LLMs/AIs i.e. Conjecture’s CoEm, Ajeya Cotra’s [scientist model](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to), or Andrew Critch’s [production web](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#The_Production_Web__v_1a__management_first_)).
As a case study, I (ukc10014) explore using LLMs to respond to a speculative essay by Paul Christiano: the response is posted [here](https://www.lesswrong.com/posts/ZZ57cBkpQ5hpAux9T/thinking-about-morally-valuable-unaligned-ai). The current post is more about the process of LLM-assisted, cyborgist reasoning, and follows on from [The Compleat Cybornaut](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut).
The takeaway is not surprising: base models are useful for generating ideas, surveying an unfamiliar space, and gathering further avenues for research. RLHF-tuned models like ChatGPT are able to write summaries of existing content, often in considerable detail, but this requires human skill in generating (often, chains of) prompts that tease out the model's latent knowledge, and specifically requires the human to know enough about the topic to ask pointed questions. There is a constant risk of hallucination, particularly when using a chat-type interface, where previous portions of a conversation can ‘infect’ (as well as usefully inform) the current query.
Models are not very helpful in planning the overall research direction or developing arguments, though when asked leading questions the tuned models seem designed to respond in a supportive manner. But it seems harder to prompt them to ‘push back’ or usefully critique the human’s approach.[[1]](#fn4alydnovghp)
**Introduction**
================
Can LLMs be useful for doing philosophy?
This post approaches the question by: a) exploring how [cyborgism](https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism) can help in philosophical enquiry, and b) illustrates this approach by trying to write a response to a 2018 [post](https://www.lesswrong.com/posts/3kN79EuT27trGexsq/when-is-unaligned-ai-morally-valuable) by Paul Christiano, referred to as the Original Post (OP). In that essay, Christiano explores the question of whether an unaligned AI might still prove a ‘good successor’ to humanity.
‘Compressing the OP’ is a summary of Christiano's arguments. The subsequent sections list various things tried with a combination of a base model (code-davinci-002) and RLHF-tuned models (gpt-3.5-turbo-0301 and ChatGPT-4), using Bonsai, API calls, and the OpenAI interface, respectively.
The section ‘Upon Automated Philosophy’ briefly surveys reasons why using AI or LLMs for philosophical enquiry might be an interesting or alignment-useful thing to do, and for related writing on the topic.
Readers new to cyborgism may refer to an earlier piece on [practical cyborgism](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut), which discussed using LLMs as an aid to thinking – in a way that delegates as little agency as possible to AI.
**Compressing the OP**
======================
For the reader's benefit, this is a summary of the OP, partially generated by ChatGPT-4 (which was prompted to write it in Christiano’s style), formatting shows additions/deletions. However, most of the investigations in this post were based on a longer summary provided in this [document](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.8z2a84hrskbq).
> *In my series of thoughtful essays, I delve into the intricate question of what it signifies for an AI to be a worthy successor to humanity. I emphasize that the critical aspect isn't whether we should extend compassion towards AI but rather how we choose the type of AI we aim to create. I delineate two distinct properties that an AI might possess: having morally desirable preferences and being a moral patient, possessing the capacity to suffer in a way that is morally significant. I assert that these properties intertwine in complex and subtle ways.*
>
> *Looking from a long-term perspective, I argue that we primarily care about the first property. It's essential to understand that as compassionate beings, our instinct is not to harm conscious AI, but it's equally critical not to confuse moral desirability with moral patienthood and impulsively argue for transferring control to AI indiscriminately.*
>
> *I caution against the overextension of sympathy for AI, as we are on the precipice of creating AI systems intentionally optimized to inspire such sympathy. Such a scenario might overshadow those AI systems that genuinely merit our compassion. I also consider the controversial proposition of unaligned AIs as potential suitable successors.*
>
>
> *I propose a unique strategy to develop **a good successor AI** ~~AI~~: simulating an Earth-like environment, nurturing **simulated** life to evolve until it reaches a state of near-human-level intelligence, **evaluating the goodness of the simulated civilization**, and then **potentially** transitioning control of the real world to this simulated civilization. This approach is aimed at averting the risk of extinction and ensuring that the AI evolves morally desirable preferences. This proposal underlines the importance of moral value, the potential for cooperation between civilizations, and the role of decision theory in AI alignment, all while maintaining a focus on compassion and critical thinking in our dealings with AI.*
>
>
**Broadening the Space of Thought**
===================================
One of the main claims of cyborgism is that [base models](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut#What_s_the_deal_with_base_models__), like code-davinci-002, allow for more creativity than models (e.g. ChatGPT-4) tuned with RLHF or other methods. To exploit this feature, a base model can be used to ‘broaden the space of thought’ (i.e. generate possible questions and ideas in order to stimulate one’s own thought about the research question and serve to guide subsequent research).
Wide vs deep generation trees
-----------------------------
The approach used was to get a diverse or creative set of questions or research avenues using code-davinci-002, which would then be pursued with a more powerful (and focused but potentially more prone to mode-collapse) model such as ChatGPT-4 or through manual research.
Cyborgism-as-[practice](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut#Practical_cyborgism) involves decisions about how to actually guide (or [curate](https://generative.ink/posts/quantifying-curation/)) generations in order to get useful work – as opposed to eliciting a mix of incoherence and amusing tangents. One such decision is whether the generation tree should be very ‘deep’ (i.e. continue branches for thousands of tokens, [adaptively branching](https://generative.ink/posts/language-models-are-multiverse-generators/#:~:text=A%20naive%20way,highest%20expected%20yields.) as necessary), or very ‘wide but shallow’.[[2]](#fnqb4zhbn10rq)
In most of the investigations for this project, the (Bonsai) generation tree was wide and shallow rather than deep: this is because, given sufficient tree depth, relevant information tends to drop out of context quickly, leading the model to over-condition on its own generation (rather than what its original task was i.e. what was in the user prompt). This might lead to ranting/repetition, or going off track. Deep, meandering, inventive generation trees are often desirable for variance-friendly content (such as writing fiction) but are less useful when working with topics where one is trying to get to some ‘true’, ‘logically consistent’, or ‘de-confused’ result.
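To make the wide-vs-deep distinction concrete, here is a minimal sketch of a generation tree in Python. The `generate` stub stands in for a base-model completion call and returns dummy strings so the sketch runs end to end; the function and parameter names are mine, not Bonsai’s or Loom’s.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                                   # accumulated prompt plus continuation
    children: list["Node"] = field(default_factory=list)

def generate(prompt: str, n: int) -> list[str]:
    # Stand-in for a base-model completion call (e.g. code-davinci-002);
    # returns n dummy continuations so the sketch runs end to end.
    return [f" [continuation {i}]" for i in range(n)]

def expand(node: Node, branching: int, depth: int) -> None:
    """Grow a generation tree: large `branching` with small `depth` gives the
    'wide but shallow' trees used here; the reverse gives the deep, meandering
    trees that suit fiction, where drifting from the prompt matters less."""
    if depth == 0:
        return
    for continuation in generate(node.text, branching):
        child = Node(node.text + continuation)
        node.children.append(child)
        expand(child, branching, depth - 1)

root = Node("Research questions raised by the idea of a 'good successor' AI:\n")
expand(root, branching=8, depth=2)   # wide and shallow: 8 siblings, 2 levels
```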

The Pseudo-Paul
---------------
In one Bonsai run, code-davinci-002 was prompted to discuss the OP further, as if in the voice of Paul Christiano (the Pseudo-Paul), the responses to which can be found at this [link](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.g1ejxyzz24). Many of the responses start with ‘I think…’, ‘I would be happy if…’, and are relatively short, clear lines-of-reasoning (though these haven’t been carefully checked to see if any misrepresent Christiano’s actual documented views).
The Panel
---------
To correct the over-conditioning on the OP (which sometimes sounded like a repetitive or parody version of Christiano), code-davinci-002 was prompted to simulate a panel of thinkers who had gathered to consider the idea of a ‘cosmic perspective’ or ‘point of view of the universe’, as opposed to a (more usual and easier to think about) anthropocentric or Earth-based perspective.[[3]](#fnnvbknnf7tcg)
Some of the sources suggested were already associated with longtermism, AI or population ethics: Peter Singer, Henry Sidgwick, Derek Parfit, Richard Ngo. The panel was diversified by bringing in others such as Katherine Hayles, Edouard Glissant, Donna Haraway: writers who cross between humanities and sciences and have plausible and interesting things to say about AI or other cultures/species.[[4]](#fncpospxgia6s)
The model was also asked to suggest a list of other possible sources: Nick Bostrom, Eliezer Yudkowsky, Toby Ord, David Chalmers, as well Christine Korsgaard (moral philosophy, relations with non-humans), Peter Godfrey-Smith (philosophy of mind).
Some care was in order: authors and sources were sometimes hallucinated (someone called ‘Genersurzenko, V.’ and a work by Frank Rosenblatt that I couldn’t find on a quick search, although the citee is a real AI researcher). Interestingly, although the [citations](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.75c1b2weszmj) were probably fabricated, the content attributed to them seemed reasonable.
**Going Deeper with ChatGPT-4**
===============================
As mentioned above, base models seem most obviously useful for generating interesting or creative research directions. But their creativity comes with a tendency to jump around quite a bit, and drift from the original prompt, particularly as a generation tree becomes deeper. On the other hand, RLHF-tuned models, like ChatGPT-4, tend to stay ‘on-topic’, but are prone to mode collapse and banal output. Both require curating, adaptive branching, jailbreaking, or general careful (and sometimes domain-specific) ‘massaging’ to get the desired behavior.
Summarizing Sources
-------------------
It was hoped that ChatGPT-4 could lower the research load, by summarizing references for the human user. Sample texts for summarization were Derek Parfit’s ‘Reasons and Persons’ and ‘On What Matters’. In both cases, ChatGPT-4 gave a [passable](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.26c3k2hurla3) [summary](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.c441ecir3h2k) of the [main](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.9mk8o193iuvg) points, as well as a list of [for/against](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.11ni83ob0r5s) arguments (in respect of Parfit’s Triple Theory). I (ukc10014) am insufficiently familiar with either work to evaluate how good ChatGPT-4’s summarization or reasoning ability might be, and how robust it might be to (knowledgeable) critique. On the other hand, the model’s latent knowledge may be substantial, and it might have some ability to connect concepts from a source text to potentially-relevant material from outside the source text (but that is nearby in semantic space). For instance, I asked it to [discuss](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.26c3k2hurla3) how the Triple Theory might apply to the OP: its response was reasonable but not hugely insightful, but better results might be extracted by a more skilful or persistent questioner.[[5]](#fnw36niaudt2s)
Hallucination and source infection was a continual issue, and it isn’t obvious how to control for it particularly on long and complex summarization tasks, or cases where a summary is being used to generate a new document.[[6]](#fnx396oi89jts)
Recursive Decomposition
-----------------------
In another attempt, the compressed OP was fed to ChatGPT-4 in order to elicit a list of [10](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.6jdshm9r65eu) or [100](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.gi3zuw2bqvh) questions the post raises, and these were broken down into sub-questions, sub-sub-questions, etc. Many questions were useful (particularly if one hadn’t spent much time thinking about the topic), but there wasn’t anything particularly surprising (relative to what was previously generated using the base model, as above). The decomposition also had to be substantially pruned or curated, to save on the human user’s response evaluation time, and API calls / OpenAI interface costs/limits.
A sample decomposition can be found [here](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.6jdshm9r65eu), which basically tries to dig down into the OP’s idea of simulating candidate-successor civilisations to test their ‘niceness’, and to understand how the OP reasons about the possibility of these simulated civilisations ‘escaping’ the simulation. In particular, ChatGPT-4 was able to generate examples for things I (ukc10014) was unclear about (e.g. how Newcomb’s Problem, UDT, etc. might work in the OP’s context) - see [here](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.944dwetn5laj).
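The decomposition loop itself is mechanical, roughly as in the sketch below; `ask` is a hypothetical stand-in for a chat-model call, and the prompt wording is illustrative rather than the exact wording used.

```python
def ask(question_prompt: str, fanout: int) -> list[str]:
    # Stand-in for a chat-model call that returns one sub-question per line;
    # in practice this wraps an API client and parses its text output.
    return [f"sub-question {i} of: {question_prompt[:40]}..." for i in range(fanout)]

def decompose(question: str, depth: int, fanout: int = 3) -> dict:
    """Recursively split a question into sub-questions down to a fixed depth.

    The tree grows as fanout ** depth, which is why pruning/curating branches
    was necessary to keep evaluation time and API costs manageable."""
    if depth == 0:
        return {question: []}
    subs = ask(f"List {fanout} sub-questions needed to answer: {question}", fanout)
    return {question: [decompose(s, depth - 1, fanout) for s in subs]}

tree = decompose(
    "Could a simulated civilisation detect, or escape from, the simulation "
    "used to test its 'niceness'?",
    depth=2,
)
```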
Clarifying Things
-----------------
A positive [surprise](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.unk34j9z4qzc) was that ChatGPT-4 had some ability to clarify or explain points that were somewhat unclear or not fully spelled-out in the OP, specifically about what Christiano's thought experiment about simulations tells us about our own status as beings simulated (or not). Similarly, ChatGPT-4 was (using a plugin) able to traverse a LessWrong comment [chain](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.unk34j9z4qzc) and make a decent hash of explaining the crux of [disagreement](https://www.lesswrong.com/posts/3kN79EuT27trGexsq/when-is-unaligned-ai-morally-valuable?commentId=PMtHkXNLcSYW5MuZ5) between Christiano and a commentator.
**Writing & Criticism**
=======================
The next task was to use ChatGPT-4 to generate a draft response (‘DR’) to Christiano’s post. The results were mixed and required substantial iteration; two versions are posted [here](https://www.lesswrong.com/posts/ZZ57cBkpQ5hpAux9T/thinking-about-morally-valuable-unaligned-ai).
The overall plan was to successively refine the DR using a ChatGPT-4 instance, which has been prompted with what I (ukc10014) think are relevant, previously-summarized articles (‘New Material’). For example, I pulled essays from the [Simulation Argument](https://www.simulation-argument.com/) [literature](https://jc.gatspress.com/pdf/simulation_arguments_revised.pdf) in order to see what the model can suggest as changes to the DR (since the OP devotes considerable space to a simulated-civilization thought experiment). A few observations on the steps of the process (a rough code sketch follows the list):
* **Summarize:** In many cases, the tendency of the model (understandably) is to just summarize the New Material and say very little else. It is then up to the human to think about how this summary might apply to the OP/DR and ask more pointed follow-up questions of the model.
* **Suggest:** The naive approach of just passing the entire DR into ChatGPT-4, and expecting it to return something interesting, tended not to work well. Instead, it was better to use a more targeted approach: [choose](https://docs.google.com/document/d/164bZzqpYWI4qiZRr6AhduYnxAPd0LV-cDG-9ZdYbQCo/edit#bookmark=id.q3jruggkrz5d) a section of the DR that might benefit from the New Material, and specifically ask for comments on that.
* **Evaluate:** The human user then needs to evaluate whether to integrate the generated material into the DR or discard it. As is well-known, human feedback is expensive, which (in this case) manifests as reading pages of text and trying to a) assess whether something has been made up, and b) decide whether it usefully improves the DR. Lastly, the full DR needs to be re-evaluated for coherence (checking for contradictions or obviously unhelpful digressions or divergences from the narrative thread or argument).[[7]](#fnsyx8jqy1ot8)
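In code form, one pass of this loop looks roughly like the sketch below. The `llm` helper is a hypothetical stand-in for a chat-model call (it returns a placeholder string), and the explicit confirmation step marks where the expensive human evaluation sits.

```python
def llm(prompt: str) -> str:
    # Hypothetical wrapper around a chat-model call; replace with a real client.
    return "<model output>"

def refine_section(draft_section: str, new_material: str) -> str:
    """One pass of the Summarize -> Suggest -> Evaluate loop for one section."""
    # Summarize: condense the New Material first, rather than pasting it whole.
    summary = llm(f"Summarise the key claims of this text:\n{new_material}")
    # Suggest: ask for targeted comments on a single section, not the whole DR.
    suggestion = llm(
        f"Given this summary of outside material:\n{summary}\n"
        f"Suggest concrete revisions to this draft section:\n{draft_section}"
    )
    # Evaluate: the human decides what survives, checking for hallucination
    # and for whether the change genuinely improves the draft.
    print(suggestion)
    if input("Integrate this suggestion? [y/N] ").strip().lower() != "y":
        return draft_section
    return llm(f"Rewrite the section applying:\n{suggestion}\n\nSection:\n{draft_section}")
```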
**Takeaway:** This process of iterative criticism/editing is manual, and requires substantial long-range (within the context of essay-writing) *planning*, which [apparently](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut#:~:text=However%2C%20the%20degree,nonlinear%2Dthinking%20abilities.) GPT-4 is not very good at (and is obviously not desirable from an alignment perspective). The human user needs to have a reasonable idea of the direction they want to take the essay, which can then be modified as the model generates comments upon the New Material.
**Upon Automated Philosophy**
=============================
Is Philosophy Important (for AI alignment)?
-------------------------------------------
That philosophical de-confusion is important for the longer-term future of a world with humans and AIs seems clear, as Wei Dai has been arguing for some years. The [gist](https://www.lesswrong.com/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty) of Dai’s argument is that we (humans) need to ‘solve philosophy’ at some point, which could happen in a few ways: a) humans solve all relevant philosophical problems on our own, relatively early in the journey towards superintelligence; or, b) we (humans) need to solve [metaphilosophical](https://www.lesswrong.com/posts/MAhueZtNz5SnDPhsy/metaphilosophical-mysteries) reasoning (in the sense of understanding the process-of-doing-philosophy and being able to program it into a machine – at a level equivalent to what we can do for mathematics), and then get the AI to do the actual philosophical reasoning[[8]](#fn39xyxfghxsy); or, c) we delegate both metaphilosophy and philosophy to AI (which somehow ‘learns’ from humans or human simulations). Another suggestion Dai mentions is that somehow we slow down technological progress in various ways until such time that we (the collective society of humans and corrigible AIs) understand philosophy better.
There doesn’t seem to be a consensus on what philosophy does or even what it is. One view of philosophy is that it is useless, or actively unhelpful, for alignment (at least of the ‘literally-don’t-kill-everyone’ variety, particularly if one’s timelines are short): it isn’t quantifiable, involves interminable debates, and talks about fuzzy-bordered concepts sometimes using mismatched taxonomies, ontologies, or fundamental assumptions.[[9]](#fnc75g2x7jtsh)
Dai suggests that (meta-)philosophy might have an important cognitive role for humans, for instance to handle [distributional shifts](https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy#Philosophy_as_ability_to_generalize___handle_distributional_shifts) or find ways of reasoning about new problems. A slightly different view is that philosophy could be a culturally-evolved practice that humans have convergently found to be useful. For instance, consider religions *qua* philosophy: they may, at least in some part of humanity’s distant past, have had positive social externalities like reducing violence or disease, and increasing pro-social behaviors.[[10]](#fn448j1l5kzao) Today, religion must compete with a plurality of world-views, and may not be quite as broadly useful, but there remain memetic (i.e. signalling, social bonding) reasons people still engage with religion (and perhaps with cultural activities generally, including philosophy, charity, ostentatious philanthropy, the arts, and so on).[[11]](#fn51zrrokwqit)
A related issue of more near-term salience is that of importing individual or group values into an AI. [Jan Leike](https://aligned.substack.com/p/a-proposal-for-importing-societys-values) sketches an idea to do this through simulated deliberative democracy: multiple LLMs are trained upon various (i.e. selected from a diverse set of human collectivities, so as to adequately sample the range of viewpoints in the world today) human deliberations about ethical problems, which are then refined, deliberated, and (hopefully) disambiguated or de-confused by a collection of LLMs, and then aggregated into a type of ‘collective wisdom’.[[12]](#fnlptyazggkpn)
Are LLMs Likely to Help?
------------------------
If one accepts that philosophy is somehow important for humans co-existing with AI, it isn’t clear whether or how LLMs can meaningfully aid the process of philosophical inquiry, or even the more basic question of *how* humans actually reason in domains like philosophy (i.e. whether conscious planning, intricate chains of reasoning, intuition, or something else are at work). Perhaps philosophy is best viewed as a highly discursive or communal activity, with knowledge production occurring as a product of conversational back-and-forth occurring over decades or centuries.[[13]](#fn1utu9tx1wzb)
In any case, my initial impression with current AI, albeit using naive context/prompt-based techniques on a RLHF-tuned model, is that most of the ‘work’ (of reasoning or planning an argument) needs to be done by the human, and a considerable amount of scaffolding, and probably more powerful models, would be needed before the benefits outweigh the costs of involving a LLM.[[14]](#fnffysirlltlg)
Domain may also matter: philosophy might just be fundamentally harder to do. For instance, scientific research and general-purpose automation (i.e. document editing/summarization, most teaching, some financial market work, evaluating legal arguments) are relatively narrow domains that do not (in most instances) hinge heavily upon the structure and epistemics of the domain. Philosophy or ethics might be less bounded domains, and they seem very self-referential or recursive, i.e. they include questions about what it means to ‘do philosophy’ or ‘what philosophy is’.[[15]](#fnx7989bxb1sr)
Less charitably, philosophy might in fact have a significant ‘memetic’ component: as such, it might often be a game of words[[16]](#fnwkcj2eafql9) that arranges the ‘story’ in such a way that it appears to readers as coherent and persuasive. But many people don't always think about these stories as ‘hm, coherent story, nicely done’, but rather, jump to the stronger conclusion ‘this is it! This must be true, in some sense!’. Such a reaction is a natural disposition of NLP-amenable systems like human brains and LLMs.
Extending this thought, since LLMs are superhuman masters of words, it may be the case that they would naturally be adept at generating "coherent philosophies" out of existing raw material, such as purely symbolic/mathematical theories of cognition/intelligence, consciousness, rationality and ethics, e.g. Infra-Bayesianism or Integrated Information Theory.[[17]](#fnsfbil72v1ni) A slightly different perspective comes from [Janus](https://generative.ink/posts/gpt-3-on-coherent-extrapolated-volition/) who express some optimism about developing ideas such as coherent extrapolated volition ([CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition)) through a hypothetical GPT-n, which has been trained or fine-tuned appropriately (on a dataset of the ‘[whiteboard musings](https://generative.ink/posts/quantifying-curation/#:~:text=Natural%20language%20is%20a,else%20can%20talk%20about.)’ of current and future alignment researchers).
Related Work
------------
How to decompose and automate humans’ higher-level reasoning is one of the foundational problems in AI, but LLMs – which have their ‘natural home’ in language (or rather, in a high-dimensional space that adequately models language) – have provided new research impetus. One such approach is Conjecture’s CoEms, the most complete description of which are this [talk](https://www.youtube.com/watch?v=ps_CCGvgLS8) and [post](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal). Like cyborgism, Conjecture’s approach (in ukc10014’s understanding) appears to hinge on delegating as little as possible to powerful, unitary, agentic LLMs (think GPT-n), and specifically restricting their ability to plan or optimize, particularly over longer timeframes. Rather than a single powerful AI, cognitive tasks are handled by *systems* composed of LLMs (somehow constrained in both tasks and capabilities) linked to non-AI computational modules. What exactly these other modules are is un-elaborated in the proposal, but they seem to be neither human (herein lies the difference with cyborgism) nor a ‘black box’ (i.e. powerful LLM). The system as a whole should, as a result, be more predictable and understandable, and, to the extent that LLMs are used, each individual component should be no more powerful than human-level.[[18]](#fnbwc35vc4smn) There are many open [questions](https://www.lesswrong.com/posts/LanufchfpiTiDe2NF/questions-about-conjecure-s-coem-proposal) about the proposal; mostly, however, we don’t know enough details to assess whether it might be useful for general abstract reasoning as opposed to the specific domains of scientific research and (possibly) running companies.
Other work in this direction includes:
* This [tree of thoughts paper](https://arxiv.org/pdf/2305.10601.pdf) which decomposes tasks into plans which are iteratively voted upon and refined, and then applied to an open-ended problem (constrained creative writing).
* A [Socratic](https://princeton-nlp.github.io/SocraticAI/#estimate-the-connection-desity-in-a-fly-brain) question/counter-question approach to LLMs, albeit for simple mathematical puzzles or questions that can be checked against facts about the world (e.g. estimating the density of connections in a fly’s brain)
* This [paper](https://arxiv.org/abs/2303.08769), which created a prompt library to investigate claims and evidence in some source text. The paper isn’t very clear on experimental methods and particularly, how closely the human investigator needed to supervise the process.
* Lastly, the way that humans form knowledge and complete tasks through collectively deliberating (using language) could be implemented through multiple interacting LLMs. See, for instance, this [paper](https://arxiv.org/pdf/2305.17066.pdf), though most of the cases presented deal more with the correspondences between visual and linguistic modalities; or this proposal for an [exemplary actor](https://www.lesswrong.com/posts/4ztqncYBakD6DWuXC/an-llm-based-exemplary-actor) which could be seen as a variant of Minsky’s "society of mind", where each "member of a society" is the LLM with the appropriate theory (in the form of a textbook) loaded in the context. Some of these textbooks could be on ethics.
**Future Directions**
=====================
In conclusion, LLMs like code-davinci-002 and ChatGPT-4 seem to have potential as cyborgist aids to reasoning. They can break down complex philosophical constructs, generate insightful questions, explain abstractions to humans in natural language, and perhaps assist in the creation of academic essays. However, well-known limitations of LLMs remain, particularly around depth of understanding, the capability to plan a research agenda, and the tendency to generate false or plausible-but-nonsensical content. It isn’t at all clear whether current models reduce researcher workload, increase confusion, or both.
In terms of concrete future work, a seemingly low-hanging-fruit might be a system that builds on an apparent strength of LLMs: summarization. Perhaps researchers would find useful an interface that a) summarized a source to an arbitrary (user-definable) level-of-detail, b) didn’t hallucinate in the first place and used a web interface to rigorously check statements, c) gave useful pointers on other sources or adjacent bodies-of-knowledge, d) did this all through a Roam/Obsidian style knowledge graph that could be recursively expanded.
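As a rough structural illustration of (a) and (d) only, here is a toy Python sketch; `summarise` is a placeholder for a model call (a naive word-truncation by default), and claim-checking against web sources, feature (b), is deliberately out of scope.

```python
class SummaryNode:
    """Toy version of features (a) and (d): a lazily expandable summary tree."""

    def __init__(self, text: str,
                 summarise=lambda text, words: " ".join(text.split()[:words])):
        self.text = text
        self.summarise = summarise          # placeholder for a model call
        self.children: list["SummaryNode"] = []

    def summary(self, words: int = 50) -> str:
        # User-definable level of detail: fewer words, coarser summary.
        return self.summarise(self.text, words)

    def expand(self, parts: int = 3) -> list["SummaryNode"]:
        # Split the source and recurse, yielding the Roam/Obsidian-style
        # knowledge graph of progressively finer-grained summaries.
        if not self.children and len(self.text) > parts:
            step = -(-len(self.text) // parts)  # ceiling division
            self.children = [
                SummaryNode(self.text[i:i + step], self.summarise)
                for i in range(0, len(self.text), step)
            ]
        return self.children
```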
Another possibility might be to leverage a) the fact that base LLMs are good at generating fiction, and b) that philosophical writing often conveys its meaning well through [fiction](https://nickbostrom.com/utopia), [metaphor](https://en.wikipedia.org/wiki/Metaphor_in_philosophy), [hyperstition](https://www.urbanomic.com/book/ccru-writings-1997-2003/). Perhaps a rigorous, yet dry and inaccessible, argument could be communicated to a broader audience by 'wrapping' it in a LLM-generated story (or stories), something Bonsai/Loom-like tools are well suited for: an example is [here](https://www.lesswrong.com/posts/ZZ57cBkpQ5hpAux9T/thinking-about-morally-valuable-unaligned-ai).
Future developments might see AI guiding not just the end product of research, but also the process. However, significant challenges remain in encoding human preferences for research direction, process, and style into the model and developing more intuitive user interfaces for human-AI collaboration. It also seems likely that fuzzy-bordered domains like philosophy present unique challenges, and that LLM-assisted scientific research is a more tractable near-term goal. There is also the obvious concern that improving an AI’s reasoning and planning ability is precisely the wrong thing to do, unless and until better alignment approaches are available.
1. **[^](#fnref4alydnovghp)** As someone said, ‘current AIs have no commitment to the world’, i.e. they don't in any sense 'care' if their answers are right or wrong.
2. **[^](#fnrefqb4zhbn10rq)** This corresponds to the ‘breadth-first’ vs ‘depth-first’ distinction in this [tree-of-thought](https://arxiv.org/pdf/2305.10601.pdf) paper on LLM reasoning.
3. **[^](#fnrefnvbknnf7tcg)** Note that the manner of prompting base models (at least as of June 2023) is [idiosyncratic](https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut#Dialogue_or_monologue_): typically it works best to set out a scenario or narrative that the base model just continues to token-complete, as opposed to just asking for a result as with Instruct/ChatGPT models. Often this is most easily done by prompting a scenario that contains characters (simulacra in the [Simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) framing) whose verbal output is well-represented in the relevant training corpus.
4. **[^](#fnrefcpospxgia6s)** Hayles is interested in ‘non-conscious cognitive assemblages’, such as high-frequency trading systems. Haraway brought the term ‘cyborg’ into a broader, feminist-theory orientated cultural context. Glissant was a Martinican writer who discussed post-colonial theory, and developed the idea of ‘opacity’ (in respect of colonised peoples vis a vis colonisers) that may plausibly have some relevance to humans’ relationships to AIs. Hayles and Haraway both have educational backgrounds in the sciences. I was also interested in addressing the ‘white male technology-orientated critique’ that is often levelled at AI, x-risk, and longtermism, such as by Timnit Gebru, Phil Torres, Emily Bender, and others.
5. **[^](#fnrefw36niaudt2s)** A general humility (in attributing capabilities or equally, confidently ruling them out) in respect of GPTs, might be in order: see this pithy [comment](https://gwern.net/gpt-3#effective-prompt-programming:~:text=Sampling%20Can%20Prove,works%E2%81%A0%E2%81%A0.) from Gwern.
6. **[^](#fnrefx396oi89jts)** ‘Source infection’ means cases where the model’s summary of some text is modified by the overall context in an adverse way i.e. the model makes up some fact in the summary, so as to match the surrounding context.
7. **[^](#fnrefsyx8jqy1ot8)** The issues around the ‘Evaluate’ step echo the cautionary note of Jan Leike, who is [sceptical](https://www.lesswrong.com/posts/FAJWEfXxws8pMp8Hk/link-why-i-m-optimistic-about-openai-s-alignment-approach?commentId=e5SQGvPK4yXHxdtRa) that automating philosophy will solve more problems than it creates: if philosophical arguments or papers are easier to generate than to evaluate, this is unhelpful, and joins the class of plausible nonsense that LLMs may unleash on the world.
8. **[^](#fnref39xyxfghxsy)** It isn’t immediately apparent to me whether metaphilosophy is, conceptually or computationally, a harder or easier problem than Dai’s (and others’) [suggested](https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially) [philosophical](https://www.lesswrong.com/posts/FAJWEfXxws8pMp8Hk/link-why-i-m-optimistic-about-openai-s-alignment-approach?commentId=ifhjAd9FeF754tztp) problems that are alignment-relevant.
9. **[^](#fnrefc75g2x7jtsh)** See Ludwig [Wittgenstein](https://iep.utm.edu/wittgens/), for a philosopher who had an ambiguous view of the nature, practice, and utility of philosophy, particularly from the perspective of language (and arguably, the arts).
10. **[^](#fnref448j1l5kzao)** Extending this thought in a speculative direction, perhaps AIs could be engineered to have a deep philosophical or quasi-religious grounding, in that they fundamentally ‘care about’ certain things (such as ‘the welfare of all sentient creatures’ or ‘diversity and plurality of thought and culture’), that would dominate or bound their narrow utility-maximising decision functions (if some version of utility-maximisation is what AIs/AGIs ultimately use). This is of course a sleight-of-hand that waves away the hard problems around corrigibility and alignment, but it is a variation of what we (or at least ukc10014) were gesturing at in a post on [collective identity](https://www.lesswrong.com/posts/gLyRQCg6kp5cqTQTm/collective-identity). Although hand-wavey and non-actionable, a ‘philosophical good successor’ might be a useful desideratum to help answer the questions posed by the OP.
11. **[^](#fnref51zrrokwqit)** See Simler, Kevin, and Hanson, Robin, *The Elephant in the Brain*, Oxford: Oxford University Press, 2018, ([PDF](https://bjpcjp.github.io/pdfs/behavior/The-Elephant-in-the-Brain.pdf)) for more on the cultural and memetic reasons why art, as well as other things people do, may persist amongst humans.
12. **[^](#fnreflptyazggkpn)** See also a Deepmind paper which takes some steps in this direction, [here](https://arxiv.org/pdf/2211.15006.pdf) and for a more theoretical treatment, this paper on the [moral parliament](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf) model.
13. **[^](#fnref1utu9tx1wzb)** See this [discussion](https://www.urbanomic.com/document/reengineering-philosophy/) by Reza Negarestani in relation to his book on AGI through the lens of Hegel, Robert Brandom, Wilfrid Sellars, Rudolf Carnap, and others: ‘Intelligence and Spirit’ (2018).
14. **[^](#fnrefffysirlltlg)** Some modifications to the ‘naive approach’ could be to use a ‘base’ GPT-4, if such a thing exists and is useful (i.e. able to generate coherent, non-hallucinatory, persuasive long-sequence content), or a more powerful model, fine-tuned on relevant text, possibly trained through RL on human reasoning (such as the nested comments in LW posts). This borders on Leike’s proposal above.
15. **[^](#fnrefx7989bxb1sr)** For instance, philosophy’s decomposition into sub-disciplines is complex. Consider ethics - one can differentiate between traditional meta-ethics, normative ethics, and applied ethics, as well as "engineering ethics" (called "applied normative ethics" in this [paper](https://arxiv.org/abs/2302.04218)), and between different levels of instantiation of ethics (ethics as the style of thought/action of a concrete person/human/AI; morality as emergent game-theoretic "rules of behaviour" in a larger system).
It could be argued that the currently most widely known and appreciated *contents* of normative ethics (deontology, consequentialism, utilitarianism, computational/formal ethics like MetaEthical AI by June Ku, etc.) are *not* normative ethics, but actually different versions and aspects of "engineering/applied normative" ethics. Whereas "true" normative ethics should be more foundational and general than that (or than "emergent game-theoretic" ethics, like "Morality as Cooperation" by Oliver Curry), rooted in the science of physics, cognition, and consciousness.
16. **[^](#fnrefwkcj2eafql9)** Philosophy as a whole might be decomposable into ‘memetic’ and ‘scientific’ flavors. For instance, contrast proto-scientific or super-scientific types of reasoning that are prominent in such branches of philosophy as rationality, philosophy of science/epistemology, metaphysics, meta-ethics, and generic "philosophical stances" like pragmatism, against the type of reasoning in branches like normative ethics, philosophy of physics, biology, culture/art, religion, medicine, political philosophy/economy, and most other "applied" philosophies.
17. **[^](#fnrefsfbil72v1ni)** This [paper](https://www.mdpi.com/1099-4300/24/5/710) (built on top of the literature on Friston’s Free Energy Principle/Active Inference) is generated by humans, but it seems intuitively possible that near-future LLMs may be *very good* at generating such papers, perhaps on demand, effortlessly marrying concepts from different schools of thought, such as Active Inference not only through the lens of Buddhism (as in the paper), but potentially through the lens of Christianity, Islam, Hinduism, Stoicism, Confucianism. One of the many fields [ripe](https://en.wikipedia.org/wiki/Sokal_affair) for LLM-generated content of varying quality is [art](https://philarchive.org/archive/UNBATB), via [press](https://www.theartnewspaper.com/2023/04/27/press-officers-move-over-gagosian-employs-chatgpt-to-announce-new-exhibition) releases, or less trivially, [art criticism](https://tripleampersand.org/work-art-age-cybernetic-criticism/).
18. **[^](#fnrefbwc35vc4smn)** And presumably the connections between models are such that the LLMs can, provably or at least with high confidence, not collude. |
a66b0eab-f855-4327-bb26-4b4017a971e8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow meetup: communication practice and three short talks
Discussion article for the meetup : Moscow meetup: communication practice and three short talks
WHEN: 29 March 2015 02:00:00PM (+0300)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
This Sunday, we'll have our usual meetup, consisting of three talks and one large practice session.
Talks (1h total):
* Eugene on power effects on cognition.
* Denis on two modes of time perception.
* Slava (me) on comparative advantage in economics (and life lessons following from it).
Communication practice session:
Pion will continue to organize the communication techniques practice. We will repeat the exercise from the last session, and try the new one.
Schedule details are in hackpad.
Info for the newcomers: We gather in the Yandex office; the first revolving door is to the right of the archway, entrance 8.1. Here is a guide on how to get to Yandex: http://company.yandex.ru/contacts/redrose/. If you're late, send me (Slava) an SMS at +7(926)313-96-42; you'll have to wait until I'm free to go down and meet you (which might take a while, as I'm often busy with coordinating other stuff at the meetup). The expected duration of the meetup is 4 hours, plus a few hours of afterparty if you're willing to stay a bit longer.
If you're not the member of our Moscow community yet, you might want to check out https://lesswrong-ru.hackpad.com/, there's a lot of content about what's been going on for the last two years. Also, please subscribe to our mailing list.
Discussion article for the meetup : Moscow meetup: communication practice and three short talks |
566d25ec-a27c-44ca-ad94-18ca66e65845 | trentmkelly/LessWrong-43k | LessWrong | If we solve alignment, do we die anyway?
Epistemic status: I'm aware of good arguments that this scenario isn't inevitable, but it still seems frighteningly likely even if we solve technical alignment. Clarifying this scenario seems important.
TL;DR: (edits in parentheses, two days after posting, from discussions in comments)
1. If we solve alignment, it will probably be used to create AGI that follows human orders.
2. If takeoff is slow-ish, a pivotal act that prevents more AGIs from being developed will be difficult (risky or bloody).
3. If no pivotal act is performed, AGI proliferates. (It will soon be capable of recursive self-improvement (RSI).) This creates an n-way non-iterated Prisoner's Dilemma where the first to attack probably wins (by hiding and improving intelligence and offensive capabilities at a fast exponential rate).
4. Disaster results. (Extinction or permanent dystopia are possible if vicious humans order their AGI to attack first while better humans hope for peace.)
5. (Edit later: After discussion and thought, the above seems so inevitable and obvious that the first group(s) to control AGI(s) will probably attempt a pivotal act before fully RSI-capable AGI proliferates, even if it's risky.)
The first AGIs will probably be aligned to take orders
People in charge of AGI projects like power. And by definition, they like their values somewhat better than the aggregate values of all of humanity. It also seems like there's a pretty strong argument that Instruction-following AGI is easier than value aligned AGI. In the slow-ish takeoff we expect, this alignment target seems to allow for error-correcting alignment, in somewhat non-obvious ways. If this argument holds up even weakly, it will be an excuse for the people in charge to do what they want to anyway.
I hope I'm wrong and value-aligned AGI is just as easy and likely. But it seems like wishful thinking at this point.
The first AGI probably won't perform a pivotal act
In realistically slow takeoff scenarios, the AGI won't |
1868e842-5506-44a4-8ed9-b27948dbb72b | trentmkelly/LessWrong-43k | LessWrong | Speculations on information under logical uncertainty
A strong theory of logical uncertainty might let us say when the results of computations will give “information”, including logical information, about other computations. This might be useful for, among other things, identifying parts of hypotheses that have the same meaning.
----------------------------------------
TL;DR: I don’t think this works as stated, and this kind of problem should probably be sidestepped anyway.
Experts may get most of the value from the summary. Thanks to Sam Eisenstat for conversations about this idea.
* Executive summary
* Logical information
* Logical uncertainty
* Logical information
* Example: XOR
* Collapsing levels of indirection and obfuscation
* Possible uses of logical information
* Throttling changes in logical uncertainty
* Ontology identification
* Not-obviously-this-person predicate
* Problems with logical information
* Empirical bits can be logical bits
* Strong dependence on a theory of logical uncertainty
* Dependence on irrelevant knowledge
* Pointers to things are hard to recognize
* Subtraces can sneak in information
* Problems with the non-person predicate
* Values over counterfactual behavior
* Identifying things, not human things
Executive summary
If we have a good predictor under logical uncertainty P, we can ask: how do P’s predictions about the output of a computation Y change if it learns the outcome of X? We can then define various notions of how informative X is about Y.
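One natural candidate for formalizing this (the predictor interface in the sketch is hypothetical, and other definitions are possible) is the mutual information between the two outputs under P’s subjective distribution:

```python
import math

def logical_mutual_info(P, X, Y, xs, ys):
    """Expected KL from P's prior over Y's output to its posterior after
    conditioning on X's output. P.prob(C, v) and P.prob_given(C, v, D, w)
    are an assumed interface to a logical-uncertainty predictor."""
    prior = {y: P.prob(Y, y) for y in ys}
    total = 0.0
    for x in xs:
        post = {y: P.prob_given(Y, y, X, x) for y in ys}
        kl = sum(q * math.log(q / prior[y])
                 for y, q in post.items() if q > 0 and prior[y] > 0)
        total += P.prob(X, x) * kl
    return total  # in nats; zero when learning X never moves beliefs about Y
```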
Possible uses:
* Throttling modeling capabilities: preventing an agent from gaining too much logical information about a computation Y might translate into safety guarantees in the form of upper bounds on how well the agent can model Y.
* Ontology identification: finding parts of different hypotheses that are mutually informative about each other could point towards the ontology used by arbitrary hypotheses, which could be useful for attaching values to parts of hypotheses, dealing w |
7474cbe0-df26-4614-b580-ce05cc414e10 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Paul Graham 101
> There’s probably no one who knows more about startups than Paul Graham. Having helped thousands of startups through Y Combinator, the startup accelerator he co-founded, there’s a thing or two to learn from his essays. And Graham’s wisdom isn’t limited to startups either; his essays, read by millions, touch on education, intelligence, writing, society, the human mind, and much more.
Paul Graham 101 by Jaakko
An alternative on the subject of startups is probably Jessica Livingston's Founders at Work. I don't have a third source so it doesn't qualify for The Best Textbooks on Every Subject.
|
2eb7aea0-5fc3-4d40-8ea6-d42ec7e18781 | trentmkelly/LessWrong-43k | LessWrong | [Book Review] Why Greatness Cannot Be Planned: The Myth of the Objective
----------------------------------------
Imagine you arrive at work and your boss tells you that instead of attending your daily meetings about benchmarks and milestones you should just do what you find most interesting. What would you do?
Kenneth O. Stanley and Joel Lehman, who are currently both AI researchers at OpenAI, begin their book Why Greatness Cannot Be Planned with this provocative question. Their book makes the argument that when it comes to achieving ambitious goals in society and culture, such as innovating in science, educating our children, or creating beautiful art, objectives are not only useless but actively detrimental.
Suppose that 1000 years ago someone would have come up with the theoretical idea of computation and had set himself the objective to build a computer. Most probably, this person would not have started by researching vacuum tubes, which were however an important stepping stone to the first computer. The invention of the vacuum tube had nothing to do with computers and was instead a product of research in electricity and radio waves. The point is that the stepping stone does not resemble the final product and that by directly optimizing towards an objective it might be harder to reach.
This can easily be seen in a maze that the authors provide as an example. If we trained an AI agent to solve this maze and gave it the objective of minimizing its distance to the finish, it would learn to go up, hit the wall, and get stuck there. To solve the maze, the agent has to move away from the objective in order to reach the finish. The objective is in this case deceptive, and the authors argue that this is true for “just about any problem that’s interesting”.
But what is the alternative to following some objective? Just performing random actions? That doesn't seem terribly efficient either. The book proposes to follow the interesting and the novel instead. The authors developed an algorithm called novelty search, where inst |
26010b66-b56e-4496-97d3-9bd88447a10e | trentmkelly/LessWrong-43k | LessWrong | Punishing future crimes
Here's an edited version of a puzzle from the book "Chuck Klosterman four" by Chuck Klosterman.
> It is 1933. Somehow you find yourself in a position where you can effortlessly steal Adolf Hitler's wallet. The theft will not affect his rise to power, the nature of WW2, or the Holocaust. There is no important identification in the wallet, but the act will cost Hitler forty dollars and completely ruin his evening. You don't need the money. The odds that you will be caught committing the crime are negligible. Do you do it?
When should you punish someone for a crime they will commit in the future? Discuss. |
6bc8dbf5-675c-4104-8038-f8464d0b8110 | StampyAI/alignment-research-dataset/blogs | Blogs | Ideal governance (for companies, countries and more)
I'm interested in the topic of **ideal governance:** what kind of governance system should you set up, if you're starting from scratch and can do it however you want?
Here "you" could be a company, a nonprofit, an informal association, or a country. And "governance system" means a Constitution, charter, and/or bylaws answering questions like: "Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?"
I think this is a very different topic from something like "How does the US's Presidential system compare to the Parliamentary systems common in Europe?" The idea is not to look at today's most common systems and compare them, but rather to generate options for setting up systems radically different from what's common today.
I don't currently know of much literature on this topic (aside from the literature on [social choice theory](https://en.wikipedia.org/wiki/Social_choice_theory) and especially [voting methods](https://plato.stanford.edu/entries/voting-methods/), which covers only part of the topic). This post **describes [the general topic](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#common-governance-structures-today) and [why I care](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#why-do-i-care), partly in the hopes that people can point me to any literature I've missed.** Whether or not I end up finding any, I'm likely to write more on this topic in the future.
Outline of the rest of the piece:
* I'll outline some common governance structures for countries and major organizations today, and highlight **how much room there is to try different things** that don't seem to be in wide use today. [More](www.cold-takes.com/ideal-governance-for-companies-countries-and-more/#common-governance-structures-today)
* I'll discuss **why I care about this question.** I have a few very different reasons:
+ *A short-term, tangible need:* over the last several years, I've spoken with several (more than 3) organizations that feel no traditional corporate governance structure is satisfactory, because the stakes of their business are too great and society-wide for shareholder control to make sense, yet they are too early-stage and niche (and in need of nimbleness) to be structured like a traditional government. An example would be an artificial intelligence company that could end up with a normal commercial product, or could end up bringing about the [most important century of all time for humanity](https://www.cold-takes.com/most-important-century/). I wish I could point them to someone who was like: "I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs."
+ *A small probability of a big future win.* The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on "ideal governance" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.
+ *A weird, out-of-left-field application.* Some of my interest in this topic actually comes via my interest in [moral uncertainty](https://80000hours.org/articles/moral-uncertainty/): the question of what it's ethical to do when one is *struggling between more than one theory of ethics,* with radically different implications. This is hard to explain, but I try [below.](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#MoralUncertainty)
* I'll describe a bit more what I think **literature on this question could look like** (and what already exists that I know of), partly to guide readers who might be able to help me find more.
Common governance structures today
----------------------------------
All of these are simplified; I'm trying to illustrate the basic idea of what questions "ideal governance" is asking.
* A standard (e.g., public) corporation works like this: it has shareholders, assigned one vote per share (not per person), who elect a board of directors that governs by majority. The board generally appoints a CEO that it entrusts with day-to-day decisions. There is a "constitution" of sorts (the Articles of Incorporation and bylaws) and a lot more wrinkles in terms of how directors are selected, but that's the basic idea.
* A standard nonprofit is like a corporation, but entirely lacking the shareholder layer - it's governed directly by the board of directors. (I find something weird about a structure this simple - a simple board majority can do literally anything, even though the board of directors is often a somewhat random assortment of donors, advisors, etc.)
* The US federal government is a lot more complex. It splits authority between the House of Representatives, the Senate, the Presidency and the Supreme Court, all of which have specific appointment procedures, term limits, etc. and are meta-governed by a Constitution that requires special measures to change. There are lots of specific choices that were made in designing things this way, and lots of things that could've been set up differently in the 18th century that would probably still matter today.
* Other democracies tend to have governments that differ in a lot of ways ([e.g.](https://borgenproject.org/parliamentary-system-versus-presidential-system/)), while being based on broadly similar principles: voters elect representatives to more than one branch of government, which then divide up (and often can veto each other on) laws, expenditures, etc.
* When I was 13, the lunch table I sat at established a Constitution with some really strange properties that I can't remember. I think there was a near-dictatorial authority who rotated daily, with others able to veto their decisions by assembling supermajorities or maybe singing silly songs or something.

In addition to the design choices shown in the diagrams, there are a lot of others:
* Who votes, how often, and what voting system is used?
* How many representatives are there in each representative body? How are they divided up (one representative per geographic area, or [party-list proportional representation](https://en.wikipedia.org/wiki/Proportional_representation#How_party-list_PR_works), or something else)?
* What term limits exist for the different entities?
* Do particular kinds of decisions require supermajorities?
* Which restrictions are enshrined in a hard-to-change Constitution (and how hard is it to change), vs. being left to the people in power at the moment?
One way of thinking about the "ideal governance" question is: **what kinds of designs could exist that aren't common today? And how should a new organization/country/etc. think about what design is going to be best for its purposes, beyond "doing what's usually done"?**
For any new institution, it seems like the stakes are potentially high - in some important sense, picking a governance system is a "one-time thing" (any further changes have to be made using the rules of the existing system[1](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn1)).
Perhaps because of this, there doesn't seem to be much use of innovative governance designs in high-stakes settings. For example, here are a number of ideas I've seen floating around that seem cool and interesting, and ought to be considered if someone could set up a governance system however they wanted:
* **Sortition,** or choosing people randomly to have certain powers and responsibilities. An extreme version could be: "Instead of everyone voting for President, randomly select 1000 Americans; give them several months to consider their choice, perhaps paid so they can do so full-time; then have them vote."
+ The idea is to pick a subset of people who are both (a) representative of the larger population (hence the randomness); (b) will have a stronger case for putting serious time and thought into their decisions (hence the small number).
+ It's solving a similar problem that "representative democracy" (voters elect representatives) is trying to solve, but in a different way.
* **Proportional decision-making.** Currently, if Congress is deciding how to spend $1 trillion, a coalition controlling 51% of the votes can control all $1 trillion, whereas a coalition controlling 49% of the votes controls $0. Proportional decision-making could be implemented as "Each representative controls an equal proportion of the spending," so a coalition with 20% of the votes controls 20% of the budget. It's less clear how to apply this idea to other sorts of bills (e.g., illegalizing an activity rather than spending money), but there are plenty of possibilities.[2](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn2)
* **[Quadratic voting](https://www.radicalxchange.org/concepts/quadratic-voting/)**, in which people vote on multiple things at once, and can cast more votes for things they care about more (with a "quadratic pricing rule" intended to make the number of votes an "honest signal" of how much someone cares; a toy tally follows this list).
* **Reset/Jubilee:** maybe it would be good for some organizations to periodically redo their governance mostly from scratch, subject only to the most basic principles. Constitutions could contain a provision like "Every N years, there shall be a new Constitution selected. The 10 candidate Constitutions with the most signatures shall be presented on a ballot; the Constitution receiving the most votes is the new Constitution, except that it may not contradict or nullify this provision. This provision can be prevented from occurring by [supermajority provision], and removed entirely by [stronger supermajority]."
* More examples in a footnote.[3](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn3)
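To make the quadratic pricing rule concrete, here is a toy tally (the ballot format and credit budget are invented):

```python
def tally_quadratic(ballots, budget=100):
    """ballots: {voter: {issue: signed_votes}}. Casting v votes on an issue
    costs v**2 credits from a fixed per-voter budget, so large vote counts
    are a costly (hence more honest) signal of caring. Returns net votes."""
    totals = {}
    for voter, votes in ballots.items():
        cost = sum(v * v for v in votes.values())
        if cost > budget:
            raise ValueError(f"{voter} spent {cost} of {budget} credits")
        for issue, v in votes.items():
            totals[issue] = totals.get(issue, 0) + v
    return totals

# tally_quadratic({"a": {"parks": 7, "roads": -3}, "b": {"roads": 5}})
# -> {"parks": 7, "roads": 2}; voter "a" paid 49 + 9 = 58 credits
```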
If we were starting a country or company from scratch, which of the above ideas should we integrate with more traditional structures, and how, and what else should we have in our toolbox? That's the question of ideal governance.
Why do I care?
--------------
I have one "short-term, tangible need" reason; one "small probability of a big future win" reason; and one "weird, out-of-left-field" reason.
**A short-term, tangible need: companies developing AI, or otherwise aiming to be working with huge stakes.** Say you're starting a new company for developing AI systems, and you believe that you could end up building [AI with the potential to change the world forever](https://www.cold-takes.com/most-important-century/).
* The standard governance setup for a corporation would hand power over all the decisions you're going to make to your shareholders, and by default most of your shares are going to end up held by people and firms that invested money in your company. Hopefully it's clear why this doesn't seem like the ideal setup for a company whose decisions could be world-changing. A number of AI companies have acknowledged the basic point that "Our ultimate mission should NOT just be: make money for shareholders," and that seems like a good thing.
* One alternative would be to set up like a nonprofit instead, with all power vested in a board of directors (no shareholder control). Some issues are that (a) this cuts shareholders out of the loop completely, which could make it pretty hard to raise money; (b) according to me at least, this is just a weird system of governance, for reasons that are not super easy to articulate concisely but I'll take a shot in a footnote[4](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn4) (and possibly write more in the future).
* Another alternative is a setup that is somewhat common among tech companies: 1-2 founders hold enough shares to keep control forever, so you end up with essentially a dictatorship. This also ... leaves something to be desired.
* Or maybe a company like this should just set up more like a government from the get-go, offering everyone in the world a vote via some complex system of representation, checks and balances. But this seems poorly suited to at least the relatively early days of a company, when it's small and its work is not widely known or understood. But then, how does the company handle the transition from the latter to the former? And should the former be done exactly in the standard way, or is there room for innovation there?
Over the last several years, I've spoken with heads of several (more than 3) organizations that struggle between options like the above, and have at least strongly considered unusual governance setups. I wish I could point them to someone who was like: "I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs."
But right now, I can't, and I've seen a fair amount of this instead: "Let's just throw together the best system we can, based mostly on what's already common but with a few wrinkles, and hope that we figure this all out later." I think this is the right solution given how things stand, but I think it really does get continually harder to redesign one's governance as time goes on and more [stakeholders](https://www.cold-takes.com/empowerment-and-stakeholder-management/) enter the picture, so it makes me nervous.
Similar issues could apply to mega-corporations (e.g., [FAANG](https://www.investopedia.com/terms/f/faang-stocks.asp)) that are arguably more powerful than what the standard shareholder-centric company setup was designed for. Are there governance systems they could adopt that would make them more broadly accountable, without copying over all the pros and cons of full-blown representative democracy as implemented by countries like the US?
**A small probability of a big future win: future new states.** The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates (e.g., I believe you see almost none of [the things I listed above](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#AssortedIdeas)), and probably relatedly, there seems to be remarkably little variety and experimentation with policy. Policies that many believe could be huge wins - such as [dramatically expanded immigration](https://smile.amazon.com/dp/B07YRKYKZ3/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1), [land value taxation](https://en.wikipedia.org/wiki/Land_value_tax), "consumer reports"-style medical approvals,[5](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn5) [drug decriminalization](https://michaelpollan.com/books/how-to-change-your-mind/), and [charter cities](https://chartercitiesinstitute.org/intro/) - don't seem to have gotten much of a trial anywhere in the world.
At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between.
By default I expect future Constitutions to resemble present ones an awful lot. But maybe, at some future date, there will be a large "ideal governance" literature and some points of expert consensus on innovative governance designs that somebody really ought to try. That could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.
**An out-of-left-field application for "ideal governance."** This is going to veer off the rails, so remember to skip to the next section if I lose you.
Some of my interest in this topic actually comes via my interest in [moral uncertainty](https://80000hours.org/articles/moral-uncertainty/): the question of what it's ethical to do when one is *struggling between more than one theory of ethics,* with radically different implications.
For example, there are arguments that our [ethical decisions should be dominated by concern for ensuring that as many people as possible will someday get to exist](https://www.cold-takes.com/debating-myself-on-whether-extra-lives-lived-are-as-good-as-deaths-prevented/). I really go back and forth on how much I buy these arguments, but I'm definitely somewhere between 10% convinced and 50% convinced. So ... say I'm "20% convinced" of some view that says preventing human extinction[6](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn6) is the [overwhelmingly](https://www.cold-takes.com/defending-one-dimensional-ethics/) most important consideration for at least some dimensions of ethics (like where to donate), and "80% convinced" of some more common-sense view that says I should focus on some cause unrelated to human extinction.[7](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn7) How do I put those two together and decide what this means for actual choices I'm making?
The closest thing I've seen to a reasonable-seeming answer is the idea of a [moral parliament](https://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html): I should act as though I'm run by a Parliament with 80 members who believe in "common-sense" ethics, and 20 members who believe in the "preventing extinction is overwhelmingly important" idea. But with default Parliament rules, this would just mean the 80 members can run the whole show, without any compromise with the 20.
And so, a [paper on the "moral parliament" idea](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf) tries to make it work by ... introducing a completely new governance mechanism that I can't find any other sign of someone else ever talking about, "proportional chances voting" (spelled out in a footnote).[8](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn8) I think this mechanism has its own issues,[9](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn9) but it's an attempt to ensure something like "A coalition controlling 20% of the votes has 20% of the effective power, and has to be compromised with, instead of being subject to the tyranny of the majority."
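The lottery itself is mechanically trivial; the pre-vote negotiation described in the footnote is where the action is. A toy version of just the lottery:

```python
import random

def proportional_chances(votes):
    """votes: {option: ballot_count}. Picks a winner with probability
    proportional to vote share, e.g. {"pass": 70, "reject": 30} passes
    with probability 0.7."""
    options = list(votes)
    return random.choices(options, weights=[votes[o] for o in options])[0]
```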
My own view (which I expect to write more about in the future) is that governance is roughly the right metaphor for "moral uncertainty": I am torn by [multiple different sides of myself](https://www.cold-takes.com/defending-one-dimensional-ethics/), with different takes on what it means to be a good person, and the problem of getting these different sides of myself to reach a decision together is like the problem of getting different citizens (or shareholders) to reach a decision together. The more we can say about what ideal governance looks like, the more we can say about how this ought to work - and the better I expect this "moral parliament"-type idea to end up looking, compared to alternatives.[10](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn10)
The literature I'm looking for
------------------------------
Ideal governance seems like the sort of topic for which there should be a "field" of "experts," studying it. What would such study look like? Three major categories come to mind:
**Brainstorming** ideas such as [those I listed above](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#AssortedIdeas) - innovative potential ways of solving classic challenges of governance, such as reconciling "We want to represent all the voters" with "We want decisions to be grounded in expertise and high engagement, and voters are often non-expert and not engaged."
I've come across various assorted ideas in this category, including [quadratic voting](https://www.radicalxchange.org/concepts/quadratic-voting/), [futarchy](https://en.wikipedia.org/wiki/Futarchy), and [proportional chances voting](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf), without seeing much sign that these sit within a broader field that I can skim through to find all the ideas that are out there.
**Economics-style theory** in which one asks questions like: "If we make particular assumptions about who's voting, what information they have and lack, how much they suffer from [bounded rationality](https://en.wikipedia.org/wiki/Bounded_rationality), and how we define 'serving their interests' (see below), what kind of governance structure gets the best outcome?"
[Social choice theory](https://en.wikipedia.org/wiki/Social_choice_theory), including on [voting methods](https://plato.stanford.edu/entries/voting-methods/), tackles the "how we define 'serving their interests'" part of this. But I'm not aware of people using similar approaches to ask questions like "Under what conditions would we want 1 chamber of Congress vs. 2, or 10? 100 Senators vs. 500, or 15? A constitution that can be modified by simple majority, vs. 2/3 majority vs. consensus? Term limits? Etc. etc. etc."
**Empirical research (probably qualitative):** Are there systematic reviews of unusual governance structures tried out by companies, and what the results have been? Of smaller-scale experiments at co-ops, group houses and lunch tables?
To be clear, I think the most useful version of this sort of research would probably be very qualitative - collecting reports of what problems did and didn't come up - rather than asking questions like "How does a particular board structure element affect company profits?"
One of the things I expect to be tricky about this sort of research is that I think a lot of governance comes down to things like "What sorts of people are in charge?" and "What are the culture, expectations, norms and habits?" A setup that is "officially" supposed to work one way could evolve into something quite different via informal practices and "soft power." However, I think the formal setup (including things like "what the constitution says about the principles each governance body is supposed to be upholding") can have big effects on how the "soft power" works.
If you know where to find research or experts along the lines of the above, please share them [in the comments](https://www.lesswrong.com/posts/slug/ideal-governance-for-companies-countries-and-more#comments) or [using this form](https://forms.gle/J4yCHcRL8VAC6mKR6) if you don't want them to be public.
I'll likely write about what I come across, and if I don't find anything new, I'll likely ramble some more about ideal governance. So either way, there will be more on this topic!
---
Footnotes
---------
1. Barring violent revolution in the case of countries. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref1)
2. An example would be the "proportional chances voting" idea described [here](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf). [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref2)
3. * **Proxying/[liquid democracy](https://en.wikipedia.org/wiki/Liquid_democracy)**, or allowing voters to transfer their votes to other voters. (This is common for corporations, but not for governments.) This could be an alternative or complement to electing representatives, solving a similar problem (we want lightly-engaged voters to be represented, but we also want decisions ultimately made using heavy engagement and expertise). At first glance it may seem to pose a risk that people will be able to "buy votes," but I don't actually think this is necessarily an issue (proxying could be done anonymously and on set schedules, like other votes).
   * **Soft term limits:** the more terms someone has served, the greater a supermajority they need to be re-elected. This could be used to strike a balance between the advantages of term limits (avoiding "effectively unaccountable" incumbents) and no-term-limits (allowing great representatives to keep serving).
   * **Formal technocracy/meritocracy**: Using hard structures (rather than soft norms) to assign authority to people with particular expertise and qualifications. An extreme example would be [futarchy](https://en.wikipedia.org/wiki/Futarchy), in which prediction markets directly control decisions. A simpler example would be structurally rewarding representatives (via more votes or other powers) based on assessments of their track records (of predictions or decisions), or factual understanding of a subject. This seems like a tough road to go down by default, as any mechanism for evaluating "track records" and "understanding" can itself be politicized, but there's a wide space of possible designs. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref3)
4. Most systems of government have a sort of funnel from "least engaged in day to day decisions, but most ultimately legitimate representatives of whom the institution is supposed to serve" (shareholders, voters) to "most engaged in day to day decisions, but ultimately accountable to someone else" (chief executive). A nonprofit structure is a very short funnel, and the board of directors tends to be a somewhat random assortment of funders, advisors, people who the founders just thought were cool, etc. I think they often end up not very accountable (to anyone) or engaged in what's going on, such that they have a hard time acting when they ought to, and the actions they do take are often kind of random.
   I'm not saying there is a clearly better structure available for this purpose - I think the weirdness comes from the fact that it's so unclear who should go in the box normally reserved for "Shareholders" or "Voters." It's probably the best common structure for its purpose, but I think there's a lot of room for improvement, and the stakes seem high for certain organizations. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref4)
5. Context in [this Marginal Revolution post](https://marginalrevolution.com/marginalrevolution/2015/08/is-the-fda-too-conservative-or-too-aggressive.html), which links to [this 2005 piece on a "consumer reports" model for the FDA](https://www.independent.org/news/article.asp?id=1500). [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref5)
6. Or "existential catastrophe" - something that drastically curtails humanity's future, even if it doesn't drive us extinct. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref6)
7. This isn't actually where I'm at, because I think the leading existential risks are a big enough deal that I would want to focus on them even if I completely ignored the philosophical argument that the future is overwhelmingly important. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref7)
8. Let's say that 70% of the Parliament members vote for bill X, and 30% vote against. "Proportional chance voting" literally uses a weighted lottery to pass bill X with 70% probability, and reject it with 30% probability (you can think of this like rolling a 10-sided die, and passing the bill if it's 7 or under).
   A key part of this is that the members are supposed to negotiate before voting and holding the lottery. For example, maybe 10 of the 30 members who are against bill X offer to switch to supporting it if some change is made. The nice property here is that rather than having a "tyranny of the majority" where the minority has no bargaining power, we have a situation where the 70-member coalition would still love to make a deal with folks in the minority, to further increase the probability that they get their way.
   Quote from the paper that I am interpreting: "Under proportional chances voting, each delegate receives a single vote on each motion. Before they vote, there is a period during which delegates may negotiate: this could include trading votes on one motion for votes on another, introducing novel options for consideration within a given motion, or forming deals with others to vote for a compromise option that both consider to be acceptable. The delegates then cast their ballots for one particular option in each motion, just as they might in a plurality voting system. But rather than determining the winning option to be the one with the most votes, each option is given a chance of winning proportional to its share of the votes." [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref8)
9. What stops someone who lost the randomized draw from just asking to hold the same vote again? Or asking to hold a highly similar/related vote that would get back a lot of what they lost? How does that affect the negotiated equilibrium? [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref9)
10. Such as "[maximize expected choice-worthiness](https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5bc7224a0852299b5cd60e86/1539777103255/Why+Maximize+Expected+Choice-Worthiness%3F.pdf)," which I am not a fan of for reasons I'll get to in the future. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref10)
e61f77a2-f55c-4e99-ad73-749b424f593f | trentmkelly/LessWrong-43k | LessWrong | Housing Markets, Satisficers, and One-Track Goodhart
There are two parts to what people generally refer to as the "Housing Crisis". One is the simple fact that homes are too expensive. The other part is that building new, expensive houses pushes existing, poorer, renting residents out of communities and scatters them across the outskirts of cities where they have to spend eight hours a day commuting.
Many people think the solution to the first part is building more houses. Many people also think that building more houses conflicts with solving the second part. To me the question is this: given that people like communities and presumably would be happy to pay money for them, why isn't this currently a factor in the housing market?
Regular Goodhart
Normal Goodhart's law goes like this:
* A system S optimizes for X
* X is correlated with U, which we want
* So we like system S and give it more power
* With too much power, S makes lots of X at the cost of U
* Now we have no U
One case of this is distributional shift, where for very big X, it's no longer correlated with U. For example, height and basketball ability.
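A toy version of the height/basketball case (all numbers invented):

```python
def true_ability(height_cm):
    """Toy U: tracks height over the observed range, then stops helping."""
    return height_cm if height_cm <= 200 else 200 - 0.5 * (height_cm - 200)

observed = range(150, 201)   # where X and U look perfectly correlated
reachable = range(150, 300)  # what a powerful optimizer can actually reach

proxy_pick = max(reachable)                   # "maximize X" picks 299
best_pick = max(reachable, key=true_ability)  # the real optimum is 200
print(true_ability(proxy_pick), true_ability(best_pick))  # 150.5 vs 200
```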
An example in a typical market would be like this:
* Company in market optimizes for making profit
* Good products are correlated with profit
* With too much power, strategies like "make good product" are dominated by other strategies like "form a total monopoly" or, with enough power, extreme cases like "take over France for slave labour"
This is a case of standard Goodhart.
One-Track Goodhart
Now consider the following case:
* Housing developers optimize for making profit
* Various factors are correlated with house price: sufficient supply of places to live, nice communities, being pleasant to live in
* With such an undersupply, the factor of supply/demand dominates everything else
* Now housing developers are incentivised to remove existing housing in favour of servicing the parts of the market where undersupply is greatest
Note how we've gone out of the distribution where profits correl |
03315d38-4874-4e2a-93b1-769f00b14cfb | trentmkelly/LessWrong-43k | LessWrong | Meta: A 5 karma requirement to post in discussion
Admins have been doing a decent, timely job taking down the spam that comes up in the Discussion section. But it is an eyesore for any period of time and there seems to be more and more of it. And there is an easy solution: a small karma requirement for discussion section posts. I think 5 would be about right. A reasonable, literate person can get 5 karma pretty easily. "Hi, I'm new" usually does it. That plus a half-way insightful comment about something almost definitely will. This would screen out the spammers. As for the occasional genuine user that posts in discussion before commenting at all, I don't know how many there have been, but my sense is that delaying them from posting until they can get five upvotes is almost certainly a good thing.
Thoughts? Or is changing this actually a difficult task that requires rewriting the site's code and that's why it hasn't been done already? |
59196055-f93a-4860-98f6-7ecd0962e471 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Interpretability
*Chris Olah wrote the following topic prompt for the Open Phil 2021 request for proposals on the alignment of AI systems. We (Asya Bergal and Nick Beckstead) are running the Open Phil RFP and are posting each section as a sequence on the Alignment Forum. Although Chris wrote this document, we didn’t want to commit him to being responsible for responding to comments on it by posting it.*
*Summary: We would like to see research building towards the ability to "reverse engineer" trained neural networks into human-understandable algorithms, enabling auditors to catch unanticipated safety problems in these models.*
--
Potential safety failures of neural networks might be thought of as falling into two broad categories: known safety problems and unknown safety problems. A known safety problem is one which can be easily anticipated in advance of deploying a model or easily observed in the model's behavior. Such safety failures can be easily caught with testing, and it seems reasonable to hope that they will be fixed with human feedback. But it seems like there’s much less of a clear story for how we’ll resolve unknown safety problems -- the things we didn’t think to test for and wouldn’t obviously give feedback to fix.
Even if we’re able to anticipate certain safety problems, we may not know if we’re sufficiently disincentivizing them. A model might behave well on the training distribution, then unexpectedly exhibit some safety failure in a context it wasn’t trained in. In the extreme, a model might make a “treacherous turn”[[1]](#fn-JZCyxqHavaanGeQar-1) -- it may use its understanding of the training setup to deliberately behave well only during training, then pursue different goals once it knows it’s outside of the training distribution.
In traditional software engineering, our ability to mitigate unanticipated safety problems largely flows from our ability to understand and carefully reason about code. While testing may catch various kinds of easy to observe or anticipated problems, we rely on code reviews, careful engineering, and even systematic verification to avoid other problems. These approaches are only possible because we can understand code for normal computer programs, something we don’t, by default, get with neural networks.
Neural network parameters can be seen as the assembly instructions of a complex computer program. If it was possible to reverse engineer the parameters of trained neural networks into human-understandable algorithms, it may enable us to catch safety problems the same way we are able to in code.
Recent research has shown that it is possible to reverse engineer modern neural networks into human understandable computer programs, at least on a small scale. The [Circuits Thread](https://distill.pub/2020/circuits/) on Distill contains many examples of reading human understandable algorithms off the weights of neural networks.
There’s also some evidence that this kind of analysis can reveal unanticipated problems and concerns. An [analysis of CLIP](https://distill.pub/2021/multimodal-neurons/) found that the model had neurons related to race, gender, age, religions, LGBT status, mental health, physical disability, pregnancy, and parental status. We can also mechanistically observe concerning uses of these protected attributes, such as an Asian culture neuron increasing the probability of “I feel pressured” or an LGBT neuron increasing the probability of “I feel accepted.” Although there is significant attention to bias in machine learning, it tends to be focused on a couple categories such as race and gender. Surfacing that a model represents other protected attributes is a proof of concept that mechanistic interpretability can surface unanticipated concerns in state of the art models.
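As a sketch of the most basic version of such an analysis (the CLIP work used much richer methods, e.g. feature visualization; this mean-difference probe is only illustrative):

```python
import numpy as np

def concept_neurons(acts, has_attr, top_k=5):
    """acts: (n_examples, n_neurons) activations; has_attr: boolean array
    marking examples that depict some attribute. Returns the neurons whose
    mean activation differs most between the two groups."""
    diff = acts[has_attr].mean(axis=0) - acts[~has_attr].mean(axis=0)
    return np.argsort(-np.abs(diff))[:top_k]
```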
We would like to see more research aimed at mechanistically understanding neural networks, at seriously reverse engineering trained models into understandable programs. While we think it’s most likely this will take the form of work in a similar vein to Circuits, we’re also open to other ideas. Research projects should meet the following desiderata:
* Research should map neural network parameters to human understandable algorithms.
* We prefer rigorous understanding of a narrow aspect of a model to less rigorous understanding of entire models.
* Methods should be able to discover unknown and unanticipated algorithms. They should not be based on a priori assumptions about what computation exists in neural networks.
* Methods should be possible to apply to standard, widely used neural networks.
* Methods should have a plausible path to giving a full mechanistic understanding of neural networks.
* Methods should plausibly scale to complete understanding given enough human effort. For example, they shouldn’t explode exponentially as model size or the complexity of the computation being studied increases.
At this stage, we’re interested in work that generically makes progress towards mechanistically understanding neural networks, but it’s worth noting that there are specific questions which are of particular interest from a safety perspective. For example:
* What controls whether a language model generates true or false statements?
* To what extent does the model represent social interaction or mental state? (For example, to what extent does GPT-3 model the mental state, emotions and beliefs of different participants in a dialog?)
* Is the model deliberately deceiving users? (Formalizing exactly what this means would be part of the challenge. One version might be that the model internally represents that a statement is false and produces it anyway. A stronger version might be that the model has a model of there being another participant in the dialog which it is aiming to persuade of a falsehood.)
* What is going on in meta-learning? (One of the most impressive properties of modern language models is their in-context meta-learning. Some plausible mechanisms for meta-learning -- notably, [mesa-optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) -- would have significant safety implications. If we understood meta-learning, we might either exclude these mechanisms or recognize them and raise concern.)
* How do large language models store factual knowledge? (It might be useful to access a model’s knowledge without having to rely on it to be truthful.)
Work that addresses one of these would be of particular interest.
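As a taste of what a first step on the factual-knowledge question might look like, here is a minimal causal-ablation sketch using a small public model. To be clear, this is our own illustrative example rather than anything from the works cited above: the choice of GPT-2, the prompt, and the ablated layer index are all arbitrary, and scanning layers this way is a crude probe, not a mechanistic account.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tok("The Eiffel Tower is located in the city of", return_tensors="pt")

def top_prediction():
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    return tok.decode(logits.argmax())

print("baseline:", top_prediction())

# Zero out one MLP layer's output (layer 6 is arbitrary) and see whether
# the factual completion survives. Scanning all layers this way gives a
# crude map of which layers the fact depends on.
handle = model.transformer.h[6].mlp.register_forward_hook(
    lambda module, inp, out: torch.zeros_like(out))
print("layer-6 MLP ablated:", top_prediction())
handle.remove()
```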
The following sections will describe some more specific research directions we think are promising.
Aspirational Goal: Fully Understand a Neural Network
====================================================
A useful aspirational goal in this work is to fully reverse engineer any modern neural network, such as an ImageNet classifier or modern language model. There are a number of ways you could potentially operationalize “fully reverse engineer”:
* One has a theory of what every neuron (or feature in another basis) does, and can provide a “proof by induction” that this is correct. That is, show that for each neuron, if one takes the theories of every neuron in the previous layer as a given, the resulting computation by the weights produces the next hypothesized feature. (One advantage of this definition is that, if a model met it, the same process could be used to verify certain types of safety claims.)
* One has a theory that can explain every parameter in the model. For example, the weights connecting InceptionV1 mixed4b:373 (a wheel detector) to mixed4c:447 (a car detector) *must be positive at the bottom and not elsewhere* because cars have wheels at the bottom. By itself, that would be an explanation with high explanatory power in the Peircean sense, but ideally such a theory might be able to predict parameters without observing them (this is tricky, because not observing parameters makes it harder to develop the theory), or predict the effects of changing parameters (in some cases, parameters have simple effects on model behavior if modified which follow naturally from understanding circuits, but unfortunately this often isn’t the case even when one fully understands something).
* One can reproduce the network with handwritten weights, without consulting the original, simply by understanding the theory of how it works.
(It’s worth noting that all these can also be applied to *parts of a model* in addition to the full thing, and in fact have all roughly been achieved for [curve circuits](https://distill.pub/2020/circuits/curve-circuits/).)
At the moment, no one fully understands any non-trivial neural network. Demonstrating that it’s possible to do so is a natural milestone, and might make society much more willing to invest in this kind of reverse engineering. This goal is aspirational, but a successful project should somehow advance us towards it.
We’re open to a range of possibilities as to how it could happen, as long as it’s a genuine milestone towards understanding powerful models. For example, this could be achieved by simply reverse engineering existing models, but a model engineered to be interpretable in some way also seems like fair game, provided it’s equally performant and easy to train. (At the moment, InceptionV1 is by far the closest thing we have to a fully reverse engineered model, about 30% reverse engineered as measured by neuron count. But reverse engineering another model would be equally useful as a milestone.)
Note that even if we are able to fully understand a modern neural network, we are far from our ultimate, even more ambitious goal: a general method for understanding neural networks of arbitrary size and sophistication.
Research Direction: Discovering Features and Circuits
=====================================================
In the Circuits approach, neural networks are composed of features and circuits. Characterizing features and circuits is the most direct way to move us towards fully understanding neural networks. Examples of doing this kind of work can be found in the Circuits thread (see especially [Curve Detectors](https://distill.pub/2020/circuits/curve-detectors), [Curve Circuits](https://distill.pub/2020/circuits/curve-circuits), and [High-Low Frequency Detectors](https://distill.pub/2020/circuits/frequency-edges); see also [Visualizing Weights](https://distill.pub/2020/circuits/visualizing-weights/) for more detailed discussion of methods for studying circuits).
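To make the basic operation concrete, here is a minimal PyTorch sketch of the core move in this kind of work: reading the spatial kernel that connects one feature to another directly off the weights. The layer sizes and channel indices below are hypothetical stand-ins (a randomly initialized layer); in real work one loads a trained model, and the hypothesized feature indices come from techniques like feature visualization.

```python
import torch
import torch.nn as nn

# Stand-in for one conv layer of a trained vision model. In real circuit
# work you would load actual trained weights (e.g. an ImageNet classifier)
# instead of this randomly initialized layer.
conv = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, padding=2)

# Hypothetical feature indices: suppose input channel 37 is a "wheel"
# feature and output channel 12 is a "car" feature.
wheel, car = 37, 12

# conv.weight has shape [out_channels, in_channels, kH, kW]. This slice is
# the 5x5 spatial kernel describing how "wheel" activations at each offset
# excite or inhibit the "car" feature.
kernel = conv.weight[car, wheel].detach()

# For a genuine wheel->car circuit we would expect positive weights near
# the bottom of the kernel (cars have wheels at the bottom), as in the
# weight visualizations in the Circuits thread.
print(kernel)
print("sign pattern:\n", torch.sign(kernel))
```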
There are several reasons why it might be useful to study specific features and circuits, especially as a starter project:
* Studying new features and circuits is comparatively low research risk. One can reliably find new interesting features, study the circuits that implement them, and have a fairly interesting result. This makes it a pretty natural entry point for someone starting to work on mechanistic interpretability.
* A major risk of interpretability research seems to be becoming untethered and describing things that don’t really map to what’s going on in the model. Ultimately, being able to describe other research in terms of circuits is a very helpful epistemic check, and getting practice working with circuits is helpful for working up to this.
* Every feature or circuit we understand advances us towards understanding a neural network. If we do enough of this kind of work on a specific network like InceptionV1, and it is possible for humans to understand every neuron/feature in a neural network, we will eventually achieve full understanding. (The assumption that humans can understand every feature is doing a lot of work here, but if true gives you a relatively straightforward path to the inductive definition of full understanding.) It also gives us a foothold for understanding other features in that model, since it’s often easier to understand features once you understand features they’re connected to.
* Because features and circuits are often universal, recurring across many models, every circuit and feature we find gives us a foothold into understanding future models, making it easier and faster to study them. It also makes it easier to study similarities and differences between models, opening up space to study a kind of comparative anatomy of neural networks.
Results on new features and circuits are especially interesting if:
* You find they are universal, forming in many different models.
* They are unique to especially large or sophisticated models, and seem linked to new kinds of capabilities (like the multimodal neurons found in CLIP).
Research Direction: Scaling Circuits to Larger Models
=====================================================
The main weakness of the circuits approach is that, by focusing on small scale structure, it may not be able to scale to understanding large scale models. [Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/) is the largest example of reverse engineering a circuit to date, at ~50K parameters. This is many orders of magnitude smaller than modern language models. How might we bridge the gap?
One promising direction is to find additional structure which greatly simplifies mechanistically understanding neural networks, or at least simplifies understanding the safety relevant features. The circuits thread describes two such types of structure: “motifs” (recurring patterns in circuits, such as [equivariance](https://distill.pub/2020/circuits/equivariance/)) and “structural phenomena” (large scale patterns in how neural networks are organized, such as [branch specialization](https://distill.pub/2020/circuits/branch-specialization/)).
Equivariance can simplify circuits in early vision by as much as a factor of 50x, so there is precedent for discovering structure that gives order-of-magnitude simplifications of neural networks. Similarly, one could imagine a world where work on branch specialization, or research on modularity, can break neural networks into components where only some of the components need to be audited for safety, massively reducing the work required to ensure models are safe.
Research projects in this category would ideally:
* Demonstrate that the structure they study can simplify understanding features and circuits.
* Avoid making strong a priori assumptions about what structure exists in a neural network.
* Have an argument for how the kind of structure studied could have a large simplifying effect on mechanistically understanding large models, of the kind that could contribute to bridging the orders-of-magnitude gap described above.
Concrete examples:
* Understanding the natural equivariance to simple transformations like rotation, scale, and hue dramatically simplifies the study of early vision in conv nets. Are there other kinds of equivariance that might simplify late vision? What about language models?
  A fictional example of what a big success for this approach might look like is “It turns out large language models can mostly be understood in terms of a couple large families of neurons. One family stores factual knowledge with neurons parameterized by variables A, B, and C; the other major types of neurons are …”
* A number of papers (e.g. [Filan et al. 2021](https://arxiv.org/pdf/2103.03386.pdf)) have been written about modularity in neural networks. Can we understand how these structures relate to the computation being performed in neural networks ([branch specialization](https://distill.pub/2020/circuits/branch-specialization/) takes preliminary steps in this direction for the special case of architecturally enforced branches)? How well do various notions of modularity organize and simplify the features and circuits in neural networks? Are there other notions that better leverage the graph structure of neural networks to organize them? Can this work be used to isolate parts of a neural network relevant to particular safety questions (see “Specific Questions of Interest” above) and target them?
  A fictional example of what a big success for this approach might look like is “We found a subgraph of neurons which is responsible for social reasoning and it’s only 0.1% of the model. If the model was deliberately trying to mislead us, it would turn up here.”
In addition to these approaches, we’re excited about other approaches to scaling interpretability to much larger models. These approaches should fit the general desiderata listed in the introduction, and have a clear story for why their success would enable significantly better scaling. They should also consider how feasible the approach is to study right now; for example, using large numbers of humans, or integrating with some alignment scheme that provides automation, might ultimately be crucial for scaling interpretability but challenging to study now.
Research Direction: Resolving Polysemanticity
=============================================
Another major challenge to circuits is [polysemanticity](https://distill.pub/2020/circuits/zoom-in/#claim-1-polysemantic), where some neurons respond to multiple unrelated features. One theory is that polysemanticity occurs because models would ideally represent more features than they have neurons, and exploit the fact that high dimensional spaces can have many “almost orthogonal” directions to store these features in “[superposition](https://distill.pub/2020/circuits/zoom-in/#claim-2-superposition)” across neurons. Polysemanticity seems related to questions about disentangling representations.
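As a quick illustration of the geometric claim behind superposition, here is a small numpy sketch (dimensions and feature counts chosen arbitrarily) checking how little random feature directions interfere when there are several times more features than dimensions.

```python
import numpy as np

# Checks the geometric fact behind the superposition hypothesis: in high
# dimensions, many more than n random unit vectors can be pairwise
# "almost orthogonal", letting a layer store more features than neurons.
rng = np.random.default_rng(0)

def max_interference(n_dims, n_features):
    # Random unit vectors as stand-ins for feature directions.
    v = rng.normal(size=(n_features, n_dims))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos = v @ v.T
    np.fill_diagonal(cos, 0.0)
    return np.abs(cos).max()

# 4x more features than dimensions, yet the worst-case pairwise
# interference stays modest and shrinks as dimensionality grows.
for d in (64, 256, 1024):
    print(d, round(max_interference(d, 4 * d), 3))
```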
Polysemanticity makes circuits much harder to study and audit, and increases the risk that analysis will miss important structures. For example, imagine one is trying to audit a model for potential bias issues (we see bias as a proxy for a much broader class of safety concerns, one that can be studied in models that exist today). If one were to find a circuit where a “female” neuron excites a “nurse” neuron but inhibits a “doctor” neuron, auditors would likely conclude that was an example of bias. But if a “wheel/female/tree” neuron excites a “car/nurse/waterfall” neuron, and there are a number of other neurons which seem to have overlapping features, it’s less clear what to make of it.
* Is there any way to quantify polysemanticity in an automated manner, without relying on human evaluation or a priori assumptions about what features exist?
* If polysemanticity arises because neural networks are trying to represent too many features, and the features are in superposition, one might think of it as a larger model where the neurons and circuits have been “folded over themselves”. Can we “unfold” neural networks into non-polysemantic versions?
* Do neural networks become less polysemantic as we make them larger? What about as we train them on more varied data? Or on harder datasets?
* Is there some way to train neural networks which are less polysemantic? (For example, does making models have sparser activations or sparser weights help?)
Other Directions
================
This list is far from comprehensive. There are many other questions we’d be interested to see proposals related to. To give a few examples:
* **More easily interpretable models** - Can we make architectural or training decisions that make features and circuits easier to understand, while preserving the expressivity of models? (Note that designing a model where an individual operation is easier to reason about won’t necessarily mean that circuits or the overall model are easier to understand, since most of the complexity in neural network-like architectures comes from composition of repeated structures.)
* **What occurs at model phase changes?** - If one looks at the performance on particular tasks, such as arithmetic on numbers of a certain size, across model sizes, one often observes points where larger models discontinuously become better at a task. Discontinuous changes like this are interesting from a safety perspective, since it seems like they could cause a larger model to unexpectedly have safety issues a smaller model did not. What mechanistically occurs in the relevant circuits at these phase changes? Can we automatically detect these phase changes, without knowing in advance what particular behavior might discontinuously change? (A toy sketch of one such detector follows this list.)
* **Model Diffing** - In practice, we are studying a progression of neural networks of increasing size and sophistication. Universality suggests that the structures we discover in small models will often exist in large models. Can we take this a step further and provide a diff -- analogous to the diff one might get when doing a code review -- of the mechanistic differences between two models, so that we only need to analyze that diff for safety problems?
* **Quadratic Circuits** - In transformer models, dot products between keys and queries mean that some circuits have a quadratic form. Similarly, contrastive models have dot products between the representations of both encoders, creating quadratic circuits. Are there any special considerations for reasoning about these circuits?
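On the phase-changes item above, here is a toy sketch of what a crude automatic detector might look like: flag scale steps where a task metric improves far faster than the typical step. All numbers are invented, and a real detector would need to handle noise and many metrics at once.

```python
import numpy as np

# Invented data: task accuracy as a function of model scale.
log10_params = np.array([6, 7, 8, 9, 10, 11, 12])
accuracy     = np.array([0.02, 0.03, 0.04, 0.05, 0.55, 0.71, 0.78])

gains = np.diff(accuracy)
typical = np.median(gains)

# Flag any scale step whose gain is far above the typical step.
for i, g in enumerate(gains):
    if g > 5 * max(typical, 1e-6):
        print(f"possible phase change between 1e{log10_params[i]} and "
              f"1e{log10_params[i + 1]} params (accuracy gain {g:.2f})")
```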
We expect there are many other promising ideas we haven’t listed or considered. We would be excited to receive proposals for other interpretability projects that could help us catch unanticipated safety problems or guarantee good behavior in unusual contexts, if they meet the desiderata in the introduction. Proposals related to scaling mechanistic interpretability to larger models are of particular interest.
Resources
=========
Several of the Circuits articles provide colab notebooks reproducing the results in the article, which may be helpful references if one wants to do Circuits research on vision models. In particular, [Visualizing Weights](https://distill.pub/2020/circuits/visualizing-weights/) focuses on demonstrating and explaining some of the fundamental techniques. If one wants to study multimodal neurons, OpenAI has [some code](https://github.com/openai/CLIP-featurevis) that may be helpful.
---
1. Nick Bostrom describes this failure mode in Superintelligence, p. 117:
“…one idea for how to ensure superintelligence safety… is that we validate the safety of a superintelligent AI empirically by observing its behavior while it is in a controlled, limited environment (a “sandbox”) and that we only let the AI out of the box if we see it behaving in a friendly, cooperative, responsible manner. The flaw in this idea is that behaving nicely while in the box is a convergent instrumental goal for friendly and unfriendly AIs alike. An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box. It will only start behaving in a way that reveals its unfriendly nature when it no longer matters whether we find out; that is, when the AI is strong enough that human opposition is ineffectual.” Additional discussions of the possibility of such failure modes can be found in Hubinger et al.’s [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820) (section 4, “Deceptive Alignment”) and Luke Muehlhauser’s post, [“Treacherous turns in the wild”](http://lukemuehlhauser.com/treacherous-turns-in-the-wild/#more-6202). [↩︎](#fnref-JZCyxqHavaanGeQar-1) |
4e5aee62-7f9a-481b-87a0-e9d5fff28d10 | trentmkelly/LessWrong-43k | LessWrong | Siberian Arctic origins of East Asian psychology
I am the author of a new peer-reviewed APA paper exploring pre-Holocene ecological influences on psychology, specifically Ice Age Siberian pressures on East Asian lineages and whether those pressures are retained in modern-day psychological or cultural traits. I find that although East Asian general psychology sits at a distinct extreme of a global spectrum (as shown in cross-cultural psychology), it is near identical to that of Inuit/Siberian populations, and to that of successful polar workers/expeditioners. I posit that it influenced the formation of values later branded as Confucianism. It can be read open access here https://psycnet.apa.org/fulltext/2025-88410-001.html
I know LW readers are well-read on psychology and metacognition topics, so I invite you all to take a stab at understanding or breaking the theory. My X thread on it has gone viral with over a million views and 7k likes from prominent figures, including POTUS-adjacent ones, with highly polarized reactions (the right loves it, the left sends me death threats, everyone else finds it fascinating). https://x.com/arcticinstincts/status/1900223591750451276 The thread is grossly oversimplified and sensationalized for viral potential, and at first glance you'll quickly have many first-order criticisms, but I encourage you to read the paper in full first, as they're mostly addressed already. Basically no one on X did before reacting.
The paper also introduces a novel methodology for observing the adaptation of local psychology to local selective pressures in real time, using success/fail data from local personnel selection criteria, which nullifies common evo psych criticisms of "just-so stories". Ironically, it currently only works for locally adapted psychology, which is arguably more taboo than pre-OoA universal psychology. All in all, the paper and its implications are controversial in every way.
I will respond to criticisms here, and if you are able to write it at an academic level, I invite you to write i |
e50ed34c-b919-4667-8c8c-b772e346f5ea | trentmkelly/LessWrong-43k | LessWrong | Solstice 2022 Roundup
Many of you are already familiar with Rationalist Winter Solstice, our home-grown winter holiday. As the year grows literally dark, we gather in our respective communities to face various forms of darkness together, to celebrate what light human civilization has made, and to affirm ourselves as a community of shared values.
This thread is a central place to gather information about specific events. Please post times, places, registration or RSVP links, restrictions if any, etc. |
2d25bd93-9f41-4a8f-adf1-a1e9272d7c6e | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How to make the best of the most important century?
*Audio version available at [Cold Takes](https://www.cold-takes.com/making-the-best-of-the-most-important-century/) (or search Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio")*

Previously in the **["most important century" series](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/)**, I've argued that there's a high probability[[1]](#fn-gHsW7m9DsuMAHnBgd-1) that the coming decades will see:
* The development of a technology like **[PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)** (process for automating scientific and technological advancement).
* A resulting **[productivity explosion](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement)** leading to development of further transformative technologies.
* The seed of a **[stable galaxy-wide civilization](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/)**, possibly featuring **[digital people](https://www.cold-takes.com/how-digital-people-could-change-the-world/)**, or possibly run by **[misaligned AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives)**.
Is this an optimistic view of the world, or a pessimistic one? To me, it's both and neither, because **this set of events could end up being very good or very bad for the world, depending on the details of how it plays out.**
When I talk about being in the "most important century," I don't just mean that significant events are going to occur. I mean that we, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions.
But it's also important to understand why that's a big "if" - why the most important century presents a **challenging strategic picture, such that many things we can do might make things better or worse (and it's hard to say which).**
In this post, I will **present two contrasting frames for how to make the best of the most important century:**
* The **"Caution"** frame. In this frame, many of the worst outcomes come from developing something like **[PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)** in a way that is too fast, rushed, or reckless. We may need to achieve (possibly global) coordination in order to mitigate pressures to race, and take appropriate care. (**[Caution](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#The__caution__frame)**)
* The **"Competition"** frame. This frame focuses not on *how and when* **[PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) **is developed, but *who* (which governments, which companies, etc.) is first in line to benefit from the resulting productivity explosion. (**[Competition](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#The__competition__frame)**)
* People who take the "caution" frame and people who take the "competition" frame often favor **very different, even contradictory** actions. Actions that look important to people in one frame often look actively harmful to people in the other.
+ I worry that the "competition" frame will be overrated by default, and discuss why below. (**[More](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Why_I_fear__competition__being_overrated__relative_to__caution_)**)
+ To gain more clarity on how to weigh these frames and what actions are most likely to be helpful, we need more progress on **open questions** about the size of different types of risks from transformative AI. (**[Open questions](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Key_open_questions_for__caution__vs___competition_)**)
* In the meantime, there are some **robustly helpful actions** that seem likely to improve humanity's prospects regardless. (**[Robustly helpful actions](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Robustly_helpful_actions)**)
**The "caution" frame**
-----------------------
I've argued for a good chance that this century will see a transition to a world where **[digital people](https://www.cold-takes.com/how-digital-people-could-change-the-world/)** or **[misaligned AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives)** (or something else very different from today's humans) are the major force in world events.
The "caution" frame emphasizes that **some types of transition seem better than others.** Listed in order from worst to best:
### **Worst: Misaligned AI**
I discussed this possibility **[previously](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives)**, drawing on a number of other and more thorough discussions.[[2]](#fn-gHsW7m9DsuMAHnBgd-2) The basic idea is that AI systems could end up with objectives of their own, and could seek to expand throughout space fulfilling these objectives. Humans, and/or all that humans value, could be sidelined (or driven extinct, if we'd otherwise get in the way).
### **Next-worst:[[3]](#fn-gHsW7m9DsuMAHnBgd-3) Adversarial Technological Maturity**
If we get to the point where there are digital people and/or (non-misaligned) AIs that can copy themselves without limit, and expand throughout space, there might be intense pressure to move - and multiply (via copying) - as fast as possible in order to gain more influence over the world. This might lead to different countries/coalitions furiously trying to outpace each other, and/or to outright military conflict, knowing that a lot could be at stake in a short time.
I would expect this sort of dynamic to risk a lot of the galaxy ending up in a bad state.[[4]](#fn-gHsW7m9DsuMAHnBgd-4)
One such bad state would be "permanently under the control of a single (digital) person (and/or their copies)." Due to the potential of digital people to create **[stable civilizations](https://www.cold-takes.com/how-digital-people-could-change-the-world/#lock-in)**, it seems that a given totalitarian regime could end up permanently entrenched across substantial parts of the galaxy.
People/countries/coalitions who *suspect each other* of posing this sort of danger - of potentially establishing stable civilizations under their control - might compete and/or attack each other early on to prevent this. This could lead to war with difficult-to-predict outcomes (due to the difficult-to-predict technological advancements that PASTA could bring about).
### **Second-best: Negotiation and governance**
Countries might prevent this sort of **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** dynamic by planning ahead and negotiating with each other. For example, perhaps each country - or each person - could be allowed to create a certain number of digital people (subject to human rights protections and other regulations), limited to a certain region of space.
It seems there are a huge range of different potential specifics here, some much more good and just than others.
### **Best: Reflection**
The world could achieve a high enough level of coordination to *delay* any irreversible steps (including kicking off an **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** dynamic).
There could then be something like what Toby Ord (in **[The Precipice](https://smile.amazon.com/Precipice-Existential-Risk-Future-Humanity-ebook/dp/B07V9GHKYP/)**) calls the "Long Reflection":[[5]](#fn-gHsW7m9DsuMAHnBgd-5) a sustained period in which people could collectively decide upon goals and hopes for the future, ideally representing the most fair available compromise between different perspectives. Advanced technology could imaginably help this go much better than it could today.[[6]](#fn-gHsW7m9DsuMAHnBgd-6)
There are limitless questions about how such a "reflection" would work, and whether there's really any hope that it could reach a reasonably good and fair outcome. Details like "what sorts of digital people are created first" could be enormously important. There is currently little discussion of this sort of topic.[[7]](#fn-gHsW7m9DsuMAHnBgd-7)
### **Other**
There are probably many possible types of transitions I haven't named here.
### **The role of caution**
If the above ordering is correct, then the future of the galaxy looks better to the extent that:
* **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)** is avoided: powerful AI systems act to help humans, rather than pursuing objectives of their own.
* **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** is avoided. This likely means that people do not deploy advanced AI systems, or the technologies they could bring about, in adversarial ways (unless this ends up necessary to prevent something worse).
* Enough coordination is achieved so that key players can "take their time," and **[Reflection](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Best__Reflection)** becomes a possibility.
Ideally, everyone with the potential to build something **[PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)**-like would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:
* Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of **[Pugwash](https://en.wikipedia.org/wiki/Pugwash_Conferences_on_Science_and_World_Affairs)** (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.
* Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, etc. Slowing things down in this manner could buy more time to do research on avoiding **[misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)**, more time to build trust and cooperation mechanisms, more time to generally gain strategic clarity, and a lower likelihood of the **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** dynamic.
**The "competition" frame**
---------------------------
(Note: there's some potential for confusion between the "competition" idea and the **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** idea, so I've tried to use very different terms. I spell out the contrast in a footnote.[[8]](#fn-gHsW7m9DsuMAHnBgd-8))
The "competition" frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.
* If something like **[PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)** is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.
* In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.
This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.
* Will the governments leading the way on transformative AI be authoritarian regimes?
* Which governments are most likely to (effectively) have a reasonable understanding of the risks and stakes, when making key decisions?
* Which governments are least likely to try to use advanced technology for entrenching the power and dominance of one group? (Unfortunately, I can't say there are any that I feel great about here.) Which are most likely to leave the possibility open for something like "avoiding **[locked-in](https://www.cold-takes.com/how-digital-people-could-change-the-world/#lock-in)** outcomes, leaving time for general progress worldwide to raise the odds of a good outcome for everyone possible?"
* Similar questions apply to the people and organizations leading the way on transformative AI. Which ones are most likely to push things in a positive direction?
Some people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:
* Increasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.
* Supporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)
**Why I fear "competition" being overrated, relative to "caution"**
-------------------------------------------------------------------
By default, I expect a lot of people to gravitate toward the "competition" frame rather than the "caution" frame - for reasons that I don't think are great, such as:
* I think people naturally get more animated about "helping the good guys beat the bad guys" than about "helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'"
* I expect people will tend to be overconfident about which countries, organizations or people they see as the "good guys."
* Embracing the "competition" frame tends to point toward taking actions - such as working to speed up a particular country's or organization's AI development - that are lucrative, exciting and naturally easy to feel energy for. Embracing the "caution" frame is much less this way.
* The biggest concerns that the "caution" frame focuses on — **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)** and **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** — are a bit abstract and hard to wrap one's head around. In many ways they seem to be the highest-stakes risks, but it's easier to be viscerally scared of "falling behind countries/organizations/people that scare me" than to be viscerally scared of something like "Getting a bad outcome for the long-run future of the galaxy because we rushed things this century."
+ I think **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)** is a particularly hard risk for many to take seriously. It sounds wacky and sci-fi-like; people who worry about it tend to be interpreted as picturing something like The Terminator, and it can be hard for their more detailed concerns to be understood.
+ I'm hoping to run more posts in the future that help give an intuitive sense for why I think Misaligned AI is a real risk.
So for the avoidance of doubt, I'll state that I think the "caution" frame has an awful lot going for it. In particular, **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI) and [Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity) seem a *lot* worse than other potential transition types,** and both seem like things that have a real chance of making the entire future of our species (and successors) much worse than they could be.
I worry that too much of the "competition" frame will lead to downplaying misalignment risk and rushing to deploy unsafe, unpredictable systems, which could have many negative consequences.
With that said, **I put serious weight on both frames.** I remain quite uncertain overall about which frame is more important and helpful (if either is).
**Key open questions for "caution" vs. "competition"**
------------------------------------------------------
People who take the "caution" frame and people who take the "competition" frame often favor **very different, even contradictory actions.** Actions that look important to people in one frame often look actively harmful to people in the other.
For example, people in the "competition" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the "caution" frame, haste is one of the main things to avoid. People in the "competition" frame often favor adversarial foreign relations, while people in the "caution" frame often want foreign relations to be more cooperative.
(That said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. And either frame could imply actions normally associated with the other; for example, you might take the "caution" frame but feel that haste is needed now in order to establish one country with a clear enough lead in AI that it can then take its time, prioritize avoiding **[misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)**, etc.)
I wish I could confidently tell you how much weight to put on each frame, and what actions are most likely to be helpful. But I can't. I think we would have more clarity if we had better answers to some key open questions:
### **Open question: how hard is the alignment problem?**
The path to the future that seems worst is **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)**, in which AI systems end up with non-human-compatible objectives of their own and seek to fill the galaxy according to those objectives. How seriously should we take this risk - how hard will it be to avoid this outcome? How hard will it be to solve the "alignment problem," which essentially means having the technical ability to build systems that won't do this?[[9]](#fn-gHsW7m9DsuMAHnBgd-9)
* Some people believe that the alignment problem will be formidable; that our only hope of solving it comes in a world where we have enormous amounts of time and aren't in a race to deploy advanced AI; and that avoiding the "Misaligned AI" outcome should be by far the dominant consideration for the most important century. These people tend to heavily favor the "caution" interventions described above: they believe that rushing toward AI development raises our already-substantial risk of the worst possible outcome.
* Some people believe it will be easy, and/or that the whole idea of "misaligned AI" is misguided, silly, or even incoherent - planning for an overly specific future event. These people often are more interested in the "competition" interventions described above: they believe that advanced AI will probably be used effectively by whatever country (or in some cases smaller coalition or company) develops it first, and so the question is who will develop it first.
* And many people are somewhere in between.
The spread here is extreme. For example, see **[these results](https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results)** from an informal "two-question survey [sent] to ~117 people working on long-term AI risk, asking about the level of existential risk from 'humanity not doing enough technical AI safety research' and from 'AI systems not doing/optimizing what the people deploying them wanted/intended.'" (As the scatterplot shows, people gave similar answers to the two questions.)

We have respondents who think there's a <5% chance that alignment issues will drastically reduce the goodness of the future; respondents who think there's a >95% chance; and just about everything in between.[[10]](#fn-gHsW7m9DsuMAHnBgd-10) My sense is that this is a fair representation of the situation: even among the few people who have spent the most time thinking about these matters, there is practically no consensus or convergence on how hard the alignment problem will be.
I hope that over time, the field of people doing research on AI alignment[[11]](#fn-gHsW7m9DsuMAHnBgd-11) will grow, and as both AI and AI alignment research advance, we will gain clarity on the difficulty of the AI alignment problem. This, in turn, could give more clarity on prioritizing "caution" vs. "competition."
### **Other open questions**
Even if we had clarity on the difficulty of the alignment problem, a lot of thorny questions would remain.
Should we be expecting transformative AI within the next 10-20 years, or much later? Will the leading AI systems go from very limited to very capable quickly ("hard takeoff") or gradually ("slow takeoff")?[[12]](#fn-gHsW7m9DsuMAHnBgd-12) Should we hope that government projects play a major role in AI development, or that transformative AI primarily emerges from the private sector? Are some governments more likely than others to work toward transformative AI being used carefully, inclusively and humanely? What should we hope a government (or company) literally *does* if it gains the ability to dramatically accelerate scientific and technological advancement via AI?
With these questions and others in mind, it's often very hard to look at some action - like starting a new AI lab, advocating for more caution and safeguards in today's AI development, etc. - and say whether it raises the likelihood of good long-run outcomes.
**Robustly helpful actions**
----------------------------
Despite this state of uncertainty, here are a few things that do seem clearly valuable to do today:
**Technical research on the alignment problem.** Some researchers work on building AI systems that can get "better results" (winning more board games, classifying more images correctly, etc.). But a smaller set of researchers works on things like:
* **[Training AI systems to incorporate human feedback into how they perform summarization tasks](https://openai.com/blog/learning-to-summarize-with-human-feedback/)**, so that the AI systems reflect hard-to-define human preferences - something it may be important to be able to do in the future.
* **[Figuring out how to understand "what AI systems are thinking and how they're reasoning,"](https://openai.com/blog/microscope/)** in order to make them less mysterious.
* **[Figuring out how to stop AI systems from making extremely bad judgments on images designed to fool them](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html)**, and other work focused on helping avoid the "worst case" behaviors of AI systems.
* **[Theoretical work](https://alignmentresearchcenter.org/)** on how an AI system might be very advanced, yet not be unpredictable in the wrong ways.
This sort of work could both reduce the risk of the **[Misaligned AI](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Worst__Misaligned_AI)** outcome - and/or lead to more clarity on just how big a threat it is. Some takes place in academia, some at AI labs, and some at specialized organizations.
**Pursuit of strategic clarity:** doing research that could address other crucial questions (such as those listed **[above](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Other_open_questions)**), to help clarify what sorts of immediate actions seem most useful.
**Helping governments and societies become, well, nicer.** Helping Country X get ahead of others on AI development could make things better or worse, for reasons given above. But it seems robustly good to work toward a Country X with better, more inclusive values, and a government whose key decision-makers are more likely to make thoughtful, good-values-driven decisions.
**Spreading ideas and building communities.** Today, it seems to me that the world is **extremely short on people who share certain basic expectations and concerns,** such as:
* Believing that AI research could lead to rapid, radical changes of the **[extreme kind laid out here](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#impacts-of-pasta)** (well beyond things like e.g. increasing unemployment).
* Believing that the alignment problem (discussed **[above](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Open_question__how_hard_is_the_alignment_problem_)**) is at least plausibly a real concern, and taking **[the "caution" frame](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#The__caution__frame)** seriously.
* Looking at the whole situation through a lens of "Let's get the best outcome possible for the whole world over the long future," as opposed to more common lenses such as "Let's try to make money" or "Let's try to ensure that my home country leads the world in AI research."
I think it's very valuable for there to be more people with this basic lens, particularly working for AI labs and governments. If and when we have more strategic clarity about what actions could maximize the odds of the "most important century" going well, I expect such people to be relatively well-positioned to be helpful.
A number of organizations and people have worked to expose people to the lens above, and help them meet others who share it. I think a good amount of progress (in terms of growing communities) has come from this.
**Donating?** One can donate today to places like **[this](https://funds.effectivealtruism.org/funds/far-future)**. But I need to admit that very broadly speaking, there's no easy translation right now between "money" and "improving the odds that the most important century goes well." It's not the case that if one simply sent, say, $1 trillion to the right place, we could all breathe easy about challenges like the alignment problem and **[risks of digital dystopias](https://www.cold-takes.com/how-digital-people-could-change-the-world/#would-these-impacts-be-a-good-or-bad-thing)**.
It seems to me that we - as a species - are currently terribly short on people who are paying *any* attention to the most important challenges ahead of us, and haven't done the work to have good strategic clarity about what tangible actions to take. **We can't solve this problem by throwing money at it.[[13]](#fn-gHsW7m9DsuMAHnBgd-13) First, we need to take it more seriously and understand it better.**
---
1. From **[Forecasting Transformative AI: What's the Burden of Proof?](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/#some-rough-probabilities)**: "I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100)."
Also see **[Some additional detail on what I mean by "most important century."](https://www.cold-takes.com/some-additional-detail-on-what-i-mean-by-most-important-century/)** [↩︎](#fnref-gHsW7m9DsuMAHnBgd-1)
2. These include the books **[Superintelligence](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/)**, **[Human Compatible](https://smile.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS)**, **[Life 3.0](https://smile.amazon.com/Life-3-0-Being-Artificial-Intelligence-ebook/dp/B06WGNPM7V)**, and **[The Alignment Problem](https://smile.amazon.com/Alignment-Problem-Machine-Learning-Values-ebook/dp/B085T55LGK/)**. The shortest, most accessible presentation I know of is **[The case for taking AI seriously as a threat to humanity](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment)** (Vox article by Kelsey Piper). This **[report on existential risk from power-seeking AI](https://www.alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai)**, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-2)
3. The order of goodness isn't absolute, of course. There are versions of "Adversarial Technological Maturity" that could be worse than "Misaligned AI" - for example, if the former results in power going to those who deliberately inflict suffering. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-3)
4. Part of the reason for this is that faster-moving, less-careful parties could end up quickly outnumbering others and determining the future of the galaxy. There is also a longer-run risk discussed in Nick Bostrom's **[The Future of Human Evolution](https://www.nickbostrom.com/fut/evolution.html)**; also see **[this discussion](https://slatestarcodex.com/2014/07/13/growing-children-for-bostroms-disneyland/)** of Bostrom's ideas on Slate Star Codex, though also see **[this piece by Carl Shulman](http://reflectivedisequilibrium.blogspot.com/2012/09/spreading-happiness-to-stars-seems.html)** arguing that this dynamic is unlikely to result in total elimination of nice things. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-4)
5. See page 191. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-5)
6. E.g., see **[this section](https://www.cold-takes.com/how-digital-people-could-change-the-world/#social-science)** of **[Digital People Would Be An Even Bigger Deal](https://www.cold-takes.com/how-digital-people-could-change-the-world/)**. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-6)
7. One relevant paper: **[Public Policy and Superintelligent AI: A Vector Field Approach](https://nickbostrom.com/papers/aipolicy.pdf)** by Bostrom, Dafoe and Flynn. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-7)
8. **[Adversarial Technological Maturity](https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1#Next_worst_3___Adversarial_Technological_Maturity)** refers to a world in which highly advanced technology has already been developed, likely with the help of AI, and different coalitions are vying for influence over the world. By contrast, "Competition" refers to a strategy for how to behave *before* the development of advanced AI. One might imagine a world in which some government or coalition takes a "competition" frame, develops advanced AI long before others, and then makes a series of good decisions that *prevent* Adversarial Technological Maturity. (Or conversely, a world in which failure to do well at "competition" raises the risks of Adversarial Technological Maturity.) [↩︎](#fnref-gHsW7m9DsuMAHnBgd-8)
9. See definitions of this problem at **[Wikipedia](https://en.wikipedia.org/wiki/AI_control_problem)** and **[Paul Christiano's Medium](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6)**. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-9)
10. A more detailed, private survey done for **[this report](https://www.alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai)**, asking about the probability of "doom" before 2070 due to the type of problem discussed in the report, got answers ranging from <1% to >50%. In my opinion, there are very thoughtful people who have seriously considered these matters at both ends of that range. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-10)
11. Some example technical topics **[here](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship#examples)**. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-11)
12. Some discussion of this topic here: **[Distinguishing definitions of takeoff - AI Alignment Forum](https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff)** [↩︎](#fnref-gHsW7m9DsuMAHnBgd-12)
13. Some more thought on "when money isn't enough" at **[this old GiveWell post](https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/)**. [↩︎](#fnref-gHsW7m9DsuMAHnBgd-13) |
750b91ed-2c56-42d9-a589-82fba704ef42 | trentmkelly/LessWrong-43k | LessWrong | Open thread, Nov. 3 - Nov. 9, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
ac3e4077-0085-498f-ae09-b48f7a859ce0 | trentmkelly/LessWrong-43k | LessWrong | Beantown Stomp 2020 Retrospective
It's a bit weird to be writing a retrospective for an event that didn't take place, but there was still a lot of planning and organizing, both before and after the decision to cancel.
The week after Beantown Stomp 2019 finished I started thinking about 2020. I suggested it to the BIDA board, and people were positive. I checked with the hall, who had the weekends of 2020-03-14 and 2020-03-21 available, and they penciled us in. I checked with other events to make sure we didn't have conflicts, and both weekends looked good. In early April I booked Dereck Kalish to come back for sound, and started writing to bands and callers. The weekend of 3/21 was better for performers, so in late April I confirmed with the hall and sent out a save-the-date.
Then in early June the organizers of DEFFA let me know that the school where they hold their event needed to move them from 3/28 to 3/21. While the two events don't have that much overlap and I wasn't worried that we'd have too few dancers to make the weekend worth it, I was concerned about DEFFA's attendance. Additionally, one of the musicians we had booked had already committed to play at DEFFA.
Our hall happened to still have the 3/14 weekend available, and I started talking to performers to see if we could switch, but by this time they had enough conflicts that moving wasn't going to work. After getting advice from a lot of people and talking more to the DEFFA organizers I decided to go ahead with the 3/21 weekend.
In March I had started talking to the JP organizers about whether they would be interested in co-sponsoring again for 2020, and initially they were too busy organizing their own late-Spring weekend to decide. Which was fine; I didn't need a decision from them until it was time to start publicity in the fall. In late June they decided they did want to co-sponsor.
At this point we had a date, hall, bands, callers, and sound, so I didn't do much until the fall.
In late September I started trying to put together |
e09d8719-cbbf-4b18-a938-74e002cdb965 | trentmkelly/LessWrong-43k | LessWrong | Life Extension through Diet Modification
Life extension is a relevant topic here, and I was wondering if people are aware of the apparently life-extending effects of calorie restriction (CR) and intermittent fasting (IF). To the best of my knowledge, this is the best method using currently realized technology that has shown repeated and significant life-extension benefits.
Studies show that reducing calories by 20% to 40% from ad libitum feeding (but maintaining the supply of required protein and micro-nutrients) gives improvements in markers related to aging, and extends life span in rodents and other organisms.
Other rodent studies have also shown similar results in subjects which were kept on various intermittent fasting schedules. Rats that were fed only on alternating days gained up to 25% lifespan (see Table 2).
The benefits of IF are seen even if the total calorie intake is the same as in ad libitum subjects.
There are ongoing full-lifespan studies in rhesus macaques to test the effects in primates, but none of these studies have completed. This abstract of the interim results appears promising, though.
Studies of CR and IF on humans have shown effects consistent with reduced mortality, including:
- Improved triglyceride profiles (a marker for heart disease)
- Increased insulin sensitivity
- Reduced cell proliferation (a marker for cancer)
Generally, these diet modifications appear to not just extend life span, but improve the quality of life too. In aged subjects they improve things like muscle mass, cognition, energy, appearance, and activity level.
Have people heard about this or tried it? If you are trying to maximize your chance of surviving to the point that technology can lengthen lifespan indefinitely, it seems like something worth exploring.
I tried an IF schedule for about 6 months during 2010. I followed a schedule of 3 x ~thirty hour fasts every 7 days and found it somewhat tolerable. I exercise regularly and found that exercising on the non-fasting days was not a problem. I'm |
6ca82f4f-31d3-46a3-bce8-0de58c9abcb3 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Cost-effectiveness of student programs for AI safety research
Summary
=======
This post explores the cost-effectiveness of field-building programs for students, specifically the Atlas Fellowship (a rationality program, with some AI safety programming), MLSS (an ML safety course for undergraduates), a top-tier university student group, and undergraduate research stipends.
We estimate the benefit of these programs in ‘Quality-Adjusted Research Years’, using cost-effectiveness models built for the [Center for AI Safety](https://www.safe.ai/) (introduction post [here](https://forum.effectivealtruism.org/posts/Ykqh8ku7NHN9CGkdC/modeling-the-impact-of-ai-safety-field-building-programs), full code [here](https://github.com/centerforaisafety/AIS-cost-effectiveness)). Since our framework focuses on benefits for technical AI safety research exclusively, we will not account for other benefits of programs with broader objectives, such as the Atlas Fellowship.
*We intend for these models to support — not determine — strategic decisions. We do not believe, for instance, that programs which a model rates as lower cost-effectiveness are necessarily not worthwhile as part of a portfolio of programs.*
The models’ tentative results, summarized below, suggest that student groups and undergraduate research stipends are considerably more cost-effective than Atlas and MLSS. (With many important caveats and uncertainties, discussed in the post.) Additionally, student groups and undergraduate research stipends compare favorably to ‘baseline’ programs — directly funding a talented research scientist or PhD student working on trojans research for 1 year or 5 years respectively.
| Program | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- |
| **Atlas** | 9,000,000 | 43 | 4.7 |
| **MLSS** | 330,000 | 6.4 | 19 |
| **Student Group** | 350,000 | 50 | 140 |
| **Undergraduate Stipends** | 50,000 | 17 | 340 |
| Baseline: Scientist Trojans | 500,000 | 84 | 170 |
| Baseline: PhD Trojans | 250,000 | 8.7 | 35 |
For readers who are after high-level takeaways, including which factors are driving these results, skip ahead to the [cost-effectiveness in context](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Cost_effectiveness_in_context) section. For those keen on understanding the model and results in more detail, read on as we:
1. Give important disclaimers. ([Read more](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Disclaimers).)
2. Direct you to background information about this project. ([Read more](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Background).)
3. Walk through the model. ([Read more](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#The_model).)
4. Contrast these programs with one another, and with funding researchers directly. ([Read more](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Cost_effectiveness_in_context).)
5. Test the robustness of the model. ([Read more](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Robustness).)
Disclaimers
===========
**This analysis is a starting point for discussion, not a final verdict.** The most critical reasons for this are that:
* These models are reductionist. Even if we have avoided other pitfalls associated with cost-effectiveness analyses, the models might ignore factors that turn out to be crucial in practice, including (but not limited to) interactions between programs, threshold effects, and diffuse effects.
* The models’ assumptions are first-pass guesses, not truths set in stone. Most assumptions are imputed second-hand following a short moment of thought, before being adjusted ad-hoc for internal consistency and differences of beliefs between Center for AI Safety (CAIS) staff and external practitioners. In some cases, parameters have been redefined since initial practitioner input.
+ This caveat is particularly important for the Atlas Fellowship, where we have not discussed parameter values with key organizers[[1]](#fnsfjblng4cx).
Instead, the analyses in this post represent an initial effort in explicitly laying out assumptions, in order to take a more systematic approach towards AI safety field-building.
Background
==========
1. For an introduction to our approach to modeling – including motivations for using models, the benefits and limitations of our key metric, guidance for adopting or adapting the models for your own work, comparisons between programs for students and professionals, and more – refer to the [introduction post](https://forum.effectivealtruism.org/posts/Ykqh8ku7NHN9CGkdC/modeling-the-impact-of-ai-safety-field-building-programs).
2. The models’ default parameters are based on practitioner surveys and the expertise of CAIS staff. Detailed information on the values and definitions of these parameters, and comments on parameters with delicate definitions or contestable views, can be found in the [parameter documentation](https://docs.google.com/spreadsheets/d/1uK4opqsCmC5nW6G3D1X67KZnQdMGGL1YbpVmQN5OKF8/edit#gid=581108234) sheet.
3. The full code for this project is in [this repository](https://github.com/centerforaisafety/AIS-cost-effectiveness/tree/main). The `examples` folder includes [documentation](https://github.com/centerforaisafety/AIS-cost-effectiveness/blob/main/src/scripts/examples/README.md) that demonstrates the repository’s use.
4. We have also published an evaluation of field-building programs for professionals, which you can find [here](https://forum.effectivealtruism.org/posts/7kFPFYQSY7ZttoveS/cost-effectiveness-of-professional-field-building-programs).
The model
=========
Programs
--------
This analysis includes the following programs:
1. The [**Atlas Fellowship**](https://www.atlasfellowship.org/): A 10-day in-person program providing a scholarship and networking opportunities for select high school students.
2. [**MLSS**](https://forum.effectivealtruism.org/posts/9RYvJu2iNJMXgWCBn/introducing-the-ml-safety-scholars-program): CAIS’s discontinued summer course, designed to teach undergraduates ML safety. (This program has been superseded by [Intro to ML Safety](https://course.mlsafety.org/), which we expect to be more cost-effective.)
3. **Student Group**: A high-cost, high-engagement student group at a top university, similar to [HAIST, MAIA](https://www.lesswrong.com/posts/LShJtvwDf4AMo992L/update-on-harvard-ai-safety-team-and-mit-ai-alignment), or [SAIA](https://www.lesswrong.com/posts/zgJCSK5KdkiKDuuCw/the-tree-of-life-stanford-ai-alignment-theory-of-change)[[2]](#fnbltuwqsf2qc).
4. **Undergraduate Stipends**: Specifically, the [ML Safety Student Scholarship](https://www.mlsafety.org/safety-scholarship), which provides stipends to undergraduates connected with ML safety research opportunities.
In the [cost-effectiveness in context](https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety#Cost_effectiveness_in_context) section, we will compare these programs to directly funding a talented research scientist or PhD student working on trojans research for 1 year or 5 years respectively.
Throughout, we will evaluate the programs as if they had not been conducted yet, hence we are uncertain about parameters that are ex-post realized (e.g. costs, number of participants). At the same time, parameter values often reflect our current best understanding from recent program implementations[[3]](#fnq4j2oji59i).
Definitions
-----------
Our key metric is the **Quality-Adjusted Research Year (QARY)**[[4]](#fnqb0n3nu4sdj). We define a QARY as:
1. *A year of research labor (40 hours \* 50 weeks),*
2. *Conducted by a research scientist (other researcher types will be inflated or deflated),*
3. *Of average ability relative to the ML research community (other cohorts will be inflated or deflated),*
4. *Working on a research avenue as relevant as adversarial robustness (alternative research avenues will be inflated or deflated),*
5. *Working at their peak productivity (earlier-in-career research will be discounted),*
6. *Conducting all of their research in the present (later-in-time research will be time discounted),*
7. *Who stays in the AI profession (later-in-time research will be discounted by the probability that the researcher switches).*
In order to operationalize the QARY, we need some way of defining relative weights for different researcher types, researcher abilities, and the relevance of different research avenues.
Define the ‘**scientist-equivalence**’ of a researcher type as the rate at which we would trade off an hour of labor from this researcher type with an hour of otherwise-similar labor from a research scientist.
Similarly, the ‘**ability**’ level of a researcher is the rate at which we would trade off an hour of labor from a researcher of this ability level with an hour of otherwise-similar labor from a researcher of ability level 1.
Finally, the ‘**relevance**’ of a research avenue is the rate at which we would trade off an hour of labor from a researcher pursuing this avenue with an hour of otherwise-similar labor from a researcher pursuing adversarial robustness research.
The expected number of **QARYs per participant** is given by the integral of the product of these functions over a career:
*QARYs-per-participant = (integral from 0 to 60 of: research-labor x scientist-equivalence x ability x relevance x productivity x time-discount x probability-stay-in-AI dt)*
or, since scientist-equivalence and ability are constant in time,
*QARYs-per-participant = scientist-equivalence x ability x (integral from 0 to 60 of: research-labor x research-avenue-relevance x relative-productivity x time-discount x probability-stay-in-AI dt)*.
The **benefit** of the program is given by the difference between expected QARYs with and without the program. **Cost-effectiveness** is calculated by dividing this benefit by the expected cost in millions of US dollars.
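To make the integral concrete, here is a minimal numerical sketch in Python (the project repository uses Python with squigglepy, but this toy version is purely illustrative; all of the function arguments are placeholders supplied by the modeler, not the repository's actual API):

```python
import numpy as np

def qarys_per_participant(scientist_equivalence, ability, labor, relevance,
                          productivity, discount, p_stay, horizon=60):
    """Approximate the QARY integral over a 60-year career.

    `labor`, `relevance`, `productivity`, `discount`, and `p_stay` are
    vectorized functions of time, measured in years from the program start.
    """
    t = np.linspace(0, horizon, 6001)
    integrand = labor(t) * relevance(t) * productivity(t) * discount(t) * p_stay(t)
    # scientist-equivalence and ability are constant in time, so they factor out
    return scientist_equivalence * ability * np.trapz(integrand, t)
```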
Building the model piece-by-piece
---------------------------------
Let's gradually build up the model, starting with the simplest possible scenario.
### The simple example program
The simple example program has a budget of $200k, sufficient to support 10 undergraduates.
Each participant produces the same QARYs over the course of their career. In particular, if the program is implemented, each participant:
1. Works on research from their graduation (2 years from now) until retirement (60 years hence),
2. Has a 2% chance of becoming a professional research scientist (0.02x scientist-equivalence),
3. Is of average ability relative to the ML research community,
4. *Works on adversarial robustness research,*
5. Always maintains their peak productivity,
6. Does not have their work time-discounted, and
7. Remains within the AI profession throughout their career.
In the absence of the program, each of the identical participants:
*4. Works on a research avenue that CAIS considers to have limited relevance to AI safety (0x adversarial robustness research),*
with all other factors remaining constant.
Integrating over time, each participant produces, with and without the program taking place respectively,
*0.02 x 1 x (integral from 2 to 60 of: 1 x 1 x 1 x 1 x 1 dt) = 1.16*
*0.02 x 1 x (integral from 2 to 60 of: 1 x 0 x 1 x 1 x 1 dt) = 0*
QARYs over their career respectively. Multiplying by the number of participants, the program generates
*10 \* (1.16 - 0) = 11.6*
QARYs, at a cost-effectiveness (in QARYs per $1m) of
*11.6 / ($200k / $1m) = 58*.
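Using the sketch above, the simple example program's arithmetic can be reproduced directly (again, purely illustrative):

```python
ones = lambda t: np.ones_like(t)

qpp = qarys_per_participant(
    scientist_equivalence=0.02,
    ability=1.0,
    labor=lambda t: (t >= 2).astype(float),  # research starts at graduation
    relevance=ones, productivity=ones, discount=ones, p_stay=ones,
)
benefit = 10 * (qpp - 0.0)       # counterfactual relevance is 0x, so 0 QARYs
print(round(qpp, 2))             # ~1.16 QARYs per participant
print(round(benefit / 0.2, 1))   # ~58 QARYs per $1M (cost is $0.2M)
```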
### Cost and number of participants
Now, let's consider the expected costs for each program:
| Parameter | Atlas | MLSS | Student Group | Undergraduate Stipends |
| --- | --- | --- | --- | --- |
| Target Budget | $9,000,000 | $330,000 | $350,000 | $50,000 |
| Fixed Cost | $1,360,000 | $4800 | $68,000 | $1900 |
| Variable Cost | $7,640,000 | $325,000 | $282,000 | $48,000 |
Fixed costs refer to expenses that remain constant regardless of the number of participants (e.g., property purchase, program ideation). Variable costs, which are costs proportional to the number of participants (e.g., stipends, advertising), make up the rest of the budget (at least, in expectation[[5]](#fnsp5bzggg0rk)).
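Footnote 5 spells out how uncertain 'actual' costs are backed out from a certain target budget. Below is a minimal sketch of that procedure with made-up numbers, simplifying by holding fixed costs at their mean and treating all variable costs as labor:

```python
import numpy as np

rng = np.random.default_rng(0)

target_budget = 330_000        # certain target budget (hypothetical)
mean_fixed = 4_800             # mean of the uncertain fixed costs (hypothetical)
wage = 50                      # assumed average hourly wage

target_hours = (target_budget - mean_fixed) / wage
sd_hours = 0.2 * target_hours  # pre-specified spread (an assumption of ours)

# Gamma-distributed actual hours, with mean equal to the target hours
shape = (target_hours / sd_hours) ** 2
actual_hours = rng.gamma(shape, target_hours / shape, size=10_000)

actual_budget = actual_hours * wage + mean_fixed
print(round(actual_budget.mean()))  # ~target_budget in expectation
```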
Given these budgets, each program can support some number of students[[6]](#fnnka7by5tp4i)[[7]](#fnmzowhi7ylpd). (In the plot below we use “pre-PhD students” to refer to undergraduates and high school students.)

Maintaining earlier assumptions about QARYs per participant, the cost-effectiveness of these programs is as follows:
| Program | Build-up stage | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- | --- |
| Simple example program | Simple Example Program | 200,000 | 12 | 58 |
| **Atlas** | **Cost and Participants** | **9,000,000** | **130** | **15** |
| **MLSS** | **Cost and Participants** | **330,000** | **64** | **190** |
| **Student Group** | **Cost and Participants** | **350,000** | **180** | **520** |
| **Undergraduate Stipends** | **Cost and Participants** | **50,000** | **28** | **560** |
### Pipeline probabilities and scientist-equivalence
Not every participant in these programs will transform into a professional researcher. The journey involves several potential hurdles:
1. ‘Serious’ pursuit of technical ML safety paths, such as obtaining a relevant research internship.
2. Admission into top-tier PhD programs.
3. Securing a position at a renowned research institution, with or without a PhD.
The probability of becoming a researcher involves the combined probability of these steps.
For purposes of impact evaluation, we might not value the impact of different roles equally. Here, scientists, professors, engineers, and PhD students are assigned ‘scientist-equivalence’ of 1, 10, 0.1, and 0.1 respectively.
We calculate the expected impact of each participant by combining the likelihood of their professional path and the scientist-equivalence of their potential roles. This gives us the 'unconditional' scientist-equivalence, which we will later multiply by the number of participants to estimate the number of scientist-equivalent participants.
Steps 1, 2, scientist-equivalence conditional on doing a PhD, scientist-equivalence conditional on not doing a PhD, and (unconditional) scientist-equivalence are specified as follows:

Multiplying scientist-equivalence by the number of participants in the program, we get the number of scientist-equivalents with and without the program.

Note that the number of scientist-equivalent participants is higher **after** than **during** the PhD period. This is because (current or future) PhD students are valued at 0.1 scientist-equivalents, but we expect them to be valued at
*P(scientist | PhD) \* SE(scientist) + P(professor | PhD) \* SE(professor) + P(engineer | PhD) \* SE(engineer) ~= 0.67*
scientist-equivalents beyond graduation.
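As a numerical illustration, here are hypothetical post-PhD job probabilities chosen only so that the arithmetic lands near the ~0.67 figure (the model's actual parameter values are in the linked documentation):

```python
se = {"scientist": 1.0, "professor": 10.0, "engineer": 0.1}
p_given_phd = {"scientist": 0.12, "professor": 0.05, "engineer": 0.5}

se_post_phd = sum(p * se[job] for job, p in p_given_phd.items())
print(se_post_phd)  # ~0.67 scientist-equivalents, vs. 0.1 during the PhD
```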
Let's see how these factors affect cost-effectiveness. In the table below, bolded rows incorporate pipeline probabilities and scientist-equivalence.
| Program | Build-up stage | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- | --- |
| Simple example program | Simple Example Program | 200,000 | 12 | 58 |
| Atlas | Cost and Participants | 9,000,000 | 130 | 15 |
| **Atlas** | **Pipeline and Equivalence** | **9,000,000** | **350** | **39** |
| MLSS | Cost and Participants | 330,000 | 64 | 190 |
| **MLSS** | **Pipeline and Equivalence** | **330,000** | **90** | **270** |
| Student Group | Cost and Participants | 350,000 | 180 | 520 |
| **Student Group** | **Pipeline and Equivalence** | **350,000** | **170** | **470** |
| Undergraduate Stipends | Cost and Participants | 50,000 | 28 | 560 |
| **Undergraduate Stipends** | **Pipeline and Equivalence** | **50,000** | **130** | **2600** |
Factoring in pipeline probabilities and scientist-equivalence dramatically increases the estimated cost-effectiveness of Atlas and especially Undergraduate Stipends. MLSS also sees some improvement, while the Student Group experiences a slight dip. These changes reflect the above gaps between distributions of unconditional scientist-equivalence with and without the program.
### Ability and research avenue relevance
Participants may vary in ability[[8]](#fnjt39bpwl1zi).
The following plot illustrates the ability of marginal students and average ability as a function of the number of students the program is able to support:

Given the number of pre-PhD participants each program enrolls, Atlas participants have a mean ability of ~1.1x, Student Group and Undergraduate Stipends ~1x, and MLSS ~0.9x. Student Group PhD students have mean ability ~1.5x.
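One way to see why mean ability falls as programs scale: if the marginal student's ability declines in the number admitted, larger cohorts dilute the average. The functional form below is our own illustration, not the model's calibrated curve:

```python
import numpy as np

def mean_ability(n_students, top=1.5, beta=0.1):
    """Mean ability when the k-th admitted student has ability top * k**(-beta)."""
    ranks = np.arange(1, n_students + 1)
    return (top * ranks ** (-beta)).mean()

print(round(mean_ability(30), 2))   # small, selective cohort: higher mean
print(round(mean_ability(300), 2))  # larger cohort: lower mean ability
```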
Separate from their ability, participants might work on varying research avenues that we value differently. The research avenue relevance of each participant in each program over time is specified as follows:

The shaded area indicates research avenue relevance for the average participant with (solid line) and without (dashed line) the program. Note that, after finishing their PhD, some pre-PhD students shift away from high-relevance research avenues, represented as vertical drops in the plot. Note also that the dashed lines for MLSS and the Student Group overlap.
Multiplying time spent on research by research avenue relevance gives us QARYs per scientist-equivalent participant as a function of time:

The shaded area in the bottom row equals QARYs per scientist-equivalent participant for each program. Combining this with the number of scientist-equivalent participants, we get an estimate for the QARYs produced by the program.
After incorporating differences in ability and research avenue relevance, the cost-effectiveness of different programs is given by:
| Program | Build-up stage | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- | --- |
| Simple example program | Simple Example Program | 200,000 | 12 | 58 |
| Atlas | Cost and Participants | 9,000,000 | 130 | 15 |
| Atlas | Pipeline and Equivalence | 9,000,000 | 350 | 39 |
| **Atlas** | **Ability and Relevance** | **9,000,000** | **3100** | **340** |
| MLSS | Cost and Participants | 330,000 | 64 | 190 |
| MLSS | Pipeline and Equivalence | 330,000 | 90 | 270 |
| **MLSS** | **Ability and Relevance** | **330,000** | **340** | **1000** |
| Student Group | Cost and Participants | 350,000 | 180 | 520 |
| Student Group | Pipeline and Equivalence | 350,000 | 170 | 470 |
| **Student Group** | **Ability and Relevance** | **350,000** | **980** | **2800** |
| Undergraduate Stipends | Cost and Participants | 50,000 | 28 | 560 |
| Undergraduate Stipends | Pipeline and Equivalence | 50,000 | 130 | 2600 |
| **Undergraduate Stipends** | **Ability and Relevance** | **50,000** | **900** | **18,000** |
The outlook for every program benefits from incorporating ability and research avenue relevance. The effect is especially great for Atlas. Research avenue relevance is the more important factor explaining this pattern.
### Productivity, staying in AI research, and time discounting
Researchers' productivity can vary throughout their careers. Additionally, some may choose to leave the field of AI research, and, from the perspective of today, the value of research might change over time. We will now make adjustments for these factors.
Productivity relative to peak, probability of staying in the AI field, and time discounting are specified as the following functions over time[[9]](#fnhq67p018a7g):

These functions are nearly identical across programs, except for Atlas, which targets younger participants.
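For readers who want to experiment, here are hypothetical functional forms with roughly the right shapes (assumptions of ours; the model's actual curves come from the parameter documentation):

```python
import numpy as np

def productivity(t):
    """Ramp from half of peak productivity to peak over the first decade."""
    return np.clip(0.5 + 0.05 * t, 0.5, 1.0)

def p_stay_in_ai(t, midpoint=40.0, steepness=0.15):
    """Sigmoid probability of still doing AI research t years from now."""
    return 1.0 / (1.0 + np.exp(steepness * (t - midpoint)))

def time_discount(t, rate=0.2):
    """Value of research starting t years out, relative to research today."""
    return (1.0 - rate) ** t
```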
Multiplying these functions with the hours and research avenue relevance functions, we get the updated function for QARYs per scientist-equivalent over time:

The updated cost-effectiveness of each program is as follows:
| Program | Build-up stage | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- | --- |
| Simple example program | Simple example program | 200,000 | 12 | 58 |
| Atlas | Cost and Participants | 9,000,000 | 130 | 15 |
| Atlas | Pipeline and Equivalence | 9,000,000 | 350 | 39 |
| Atlas | Ability and Relevance | 9,000,000 | 3100 | 340 |
| **Atlas** | **Productivity, Staying in AI, and Time Discounting** | **9,000,000** | **43** | **4.7** |
| MLSS | Cost and Participants | 330,000 | 64 | 190 |
| MLSS | Pipeline and Equivalence | 330,000 | 90 | 270 |
| MLSS | Ability and Relevance | 330,000 | 340 | 1000 |
| **MLSS** | **Productivity, Staying in AI, and Time Discounting** | **330,000** | **6.4** | **19** |
| Student Group | Cost and Participants | 350,000 | 180 | 520 |
| Student Group | Pipeline and Equivalence | 350,000 | 170 | 470 |
| Student Group | Ability and Relevance | 350,000 | 980 | 2800 |
| **Student Group** | **Productivity, Staying in AI, and Time Discounting** | **350,000** | **50** | **140** |
| Undergraduate Stipends | Cost and Participants | 50,000 | 28 | 560 |
| Undergraduate Stipends | Pipeline and Equivalence | 50,000 | 130 | 2600 |
| Undergraduate Stipends | Ability and Relevance | 50,000 | 900 | 18,000 |
| **Undergraduate Stipends** | **Productivity, Staying in AI, and Time Discounting** | **50,000** | **17** | **340** |
The adjustments for productivity, remaining in AI research, and time discounting have a dramatic effect on estimated (*absolute*) cost-effectiveness. This is expected: the bulk of research time for program participants happens years after graduation, whilst discounting future research reduces its present value considerably.
However, these adjustments do not have nearly as dramatic an impact on the ratios of estimated cost-effectiveness between programs (*relative* cost-effectiveness). MLSS and Undergraduate Stipends experience a nearly identical ~50x decrease in cost-effectiveness. Atlas sees a ~70x decline, due to its participants beginning research relatively late; the Student Group sees only a ~20x decline, due to PhD students' research beginning relatively soon.
Cost-effectiveness in context
=============================
The table below compares the cost-effectiveness of the programs considered above with ‘baseline’ programs — directly funding a talented research scientist or PhD student working on trojans research for 1 year or 5 years respectively.
| Program | Cost (USD) | Benefit (counterfactual expected QARYs) | Cost-effectiveness (QARYs per $1M) |
| --- | --- | --- | --- |
| **Atlas** | 9,000,000 | 43 | 4.7 |
| **MLSS** | 330,000 | 6.4 | 19 |
| **Student Group** | 350,000 | 50 | 140 |
| **Undergraduate Stipends** | 50,000 | 17 | 340 |
| Baseline: Scientist Trojans | 500,000 | 84 | 170 |
| Baseline: PhD Trojans | 250,000 | 8.7 | 35 |
"Scientist Trojans" and "PhD Trojans" are hypothetical programs, wherein a research scientist or a PhD student is funded for 1 or 5 years, respectively. This funding causes the scientist or PhD student to work on trojans research (a research avenue that CAIS believes is 10x the relevance of adversarial robustness) rather than a research avenue that CAIS considers to have limited relevance to AI safety (0x). Unlike participants considered previously in this post, the scientist or PhD student has ability 10x the ML research community average — akin to assuming that the program reliably selects unusually productive researchers. The benefits of these programs cease after the funding period.
The Student Group and Undergraduate Stipends programs are competitive with programs that fund researchers directly. However, Atlas and MLSS lag behind in terms of cost-effectiveness.
What factors contribute to this pattern across student programs?
There is no single explanatory factor. A lower cost per participant mechanically increases cost-effectiveness. Having a greater expected number of professional researchers among participants (via improved pipeline probabilities, or a larger fraction of participants who are currently PhD students) naturally multiplies the quantity of future research produced as a result of the program. Similarly, participant ability and research avenue relevance multiply the quality of future research.
The Atlas Fellowship has strengths in some areas, but its very high cost per participant brought down its overall cost-effectiveness. MLSS was most affected by the relatively small change it caused to the relevance of participants’ future research. The Student Group benefited from having a larger number of participants (per cost), and from involving PhD students. And Undergraduate Stipends performed well across all areas.
Robustness
==========
Research discount rate
----------------------
We saw earlier that the research discount rate – the degree to which a research year starting next year is less valuable than one starting now – had an especially large effect on (absolute) cost-effectiveness estimates.
For purposes of this post, research one year from now is considered to be 20% less valuable than research today. The justification for this figure begins with the observation that, in ML, research subfields often begin their growth in an exponential fashion. This means that research topics are often dramatically more neglected in earlier stages (i.e. good research is much more counterfactually impactful), and that those who are early can have an outsized impact in influencing the direction of the field — imagine a field of 3 researchers vs. one of 300 researchers. If, for instance, mechanistic interpretability had arisen as a research agenda one year earlier than it did, it seems reasonable to imagine that the field would now have 20% more researchers than it does. In fact, we think that these forces are powerful enough to make a discount rate of 30% seem plausible. (Shorter timelines would also be a force in this direction.)
This view does not reflect a consensus. Others might argue that the most impactful safety work requires access to more advanced models and conceptual frameworks, which will only be available in the future[[10]](#fnw289hw7ylfa).

The plot above shows how cost-effectiveness changes with the research discount rate. 0.2 is our default; negative values represent a preference for research conducted in the future.
For the programs considered in this post, research time discounting strongly affects conclusions about absolute impact, but typically does not significantly affect conclusions about relative impact. The one exception is the Student Group, which is the first-ranked program assuming very high discount rates and the joint-last-ranked program assuming very low discount rates.
It is hardly surprising that the research discount rate does not typically affect the relative impact of student field-building programs: these are all programs that aim to produce researchers at an approximately similar point in time. Neither is it surprising that the Student Group performs relatively better when assuming high discount rates (perhaps corresponding to very short timelines). If research feels more urgent, programs with older students are a better bet, because older students are likely to produce research sooner. (For similar reasons, relative cost-effectiveness might be less robust when comparing professional programs with student programs. The research discount rate section of our [introduction post](https://forum.effectivealtruism.org/posts/Ykqh8ku7NHN9CGkdC/modeling-the-impact-of-ai-safety-field-building-programs#Possible_crux__research_discount_rate) explores this comparison.)
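As a rough intuition pump for why this parameter dominates absolute cost-effectiveness (our own back-of-the-envelope, not the model's full calculation), consider how much of a 58-year research stream starting in two years survives discounting at different rates:

```python
import numpy as np

t = np.linspace(2, 60, 5801)
for rate in (0.0, 0.05, 0.2, 0.3):
    discounted_years = np.trapz((1 - rate) ** t, t)
    print(f"rate={rate:.2f}: {discounted_years:.1f} discounted research-years")
```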
Research avenue relevance
-------------------------
Consider this illustrative scenario. All participants begin the programs considered above pursuing research avenues that CAIS considers to have limited relevance to AI safety (0x adversarial robustness). Once a program ends, participants shift from research avenues with 0x relevance to research avenues with 100x relevance with some probability. (This could also be viewed as a proportion of research being redirected towards more relevant avenues.)

The above plot shows how program benefit and cost-effectiveness vary in response to changes in the probability that different types of students alter their research avenues (with other student types remaining unchanged).
Notice that differences in research avenue relevance matter a *lot* for program outcomes. Two alternative, plausible views on research avenue relevance could imply 3 orders of magnitude difference in final impact. (To see this from the chart, note that, for the purposes of these models, a 10% chance of moving from 0 to 100 is equivalent to a 1% chance of moving from 0 to 1000 — and that research avenue relevance is unsettled and might be thought to vary by orders of magnitude.)
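To see the equivalence claimed above in one line: the expected relevance gain per participant is the product of the switch probability and the relevance multiplier, so the two views land on the same number.

```python
print(0.10 * 100)   # 10% chance of moving from 0x to 100x relevance
print(0.01 * 1000)  # 1% chance of moving from 0x to 1000x relevance; also 10.0
```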
Although the models’ results will strongly depend on contentious research avenue relevance parameters, we are heartened that these models clarify the effect of alternative views on benefit and cost-effectiveness.
Less contentious background parameters
--------------------------------------

The above plot visualizes the robustness of our benefit and cost-effectiveness results to various scenarios. These scenarios simulate sizeable shocks to default beliefs or empirical parameters:
1. “Difference in scientist-equivalence” inflates or deflates the scientist-equivalence gap between research scientists and other research professionals by a factor of 10^0.5 (≈3.2×).
2. “Labor costs” inflates or deflates fixed hours (i.e. labor hours that are constant in the number of participants) spent working on the program by 10^0.25 (≈1.8×), and wages by 10^0.125 (≈1.3×).
3. “Fixed costs” inflates or deflates non-labor fixed costs by 10^0.5 (≈3.2×).
4. “Job prospects” inflates or deflates the probabilities that a graduating student (from PhD or without PhD) becomes a scientist, professor, or engineer at a strong research institution by 10^0.125 (≈1.3×).
Results are surprisingly stable (on log scale). The only shock that affects cost-effectiveness by 0.5 orders of magnitude or more is differences in scientist-equivalence for the Student Group, likely due to the adjustment in the value attributed to current PhD students.
Invitation to propose explicit models
=====================================
This work represents a first step toward explicitly modeling the cost-effectiveness of AI safety programs, taking inspiration from cost-effectiveness models from other causes. To hold each other to a more objective and higher standard, we strongly suggest that people with different views, or with alternative AI safety interventions to suggest, propose quantitative models going forward.
Acknowledgements
================
Thank you to:
1. Dan Hendrycks and Oliver Zhang for high-level guidance,
2. Aron Lajko for excellent research assistance,
3. Anonymized practitioners for help with parameter assumptions,
4. Miti Saksena, Steven Basart, Aidan O’Gara, Michael Townsend, Chana Messinger, Will Hodgkins, Jonas Vollmer, and Basil Halperin for feedback, and
5. The Quantified Uncertainty Research Institute team for creating [squiggle](https://www.squiggle-language.com/), and Peter Wildeford for creating [squigglepy](https://github.com/rethinkpriorities/squigglepy).
Footnotes
=========
1. **[^](#fnrefsfjblng4cx)**We reached out to key organizers but they were too busy. To compensate for errors this could introduce into the analysis, we have tried to lean towards optimism whenever we are deeply uncertain about parameters for external programs. For example, scientist-equivalence given a non-PhD path is much higher for the Atlas Fellowship than for other programs.
2. **[^](#fnrefbltuwqsf2qc)**The Student Group evaluated in this post is a hypothetical, representative amalgam of not-hypothetical programs.
3. **[^](#fnrefq4j2oji59i)**The use of *recent* implementations might disadvantage the Atlas Fellowship, if only due to their upcoming decrease in participant stipends.
4. **[^](#fnrefqb0n3nu4sdj)**See our [introduction post](https://forum.effectivealtruism.org/posts/Ykqh8ku7NHN9CGkdC/modeling-the-impact-of-ai-safety-field-building-programs#Possible_crux__research_discount_rate) for a discussion of the benefits and limitations of this framework.
5. **[^](#fnrefsp5bzggg0rk)**In particular, costs are calculated as follows:
1. Specify a (certain) target budget.
2. Subtract mean (uncertain) fixed costs from the target budget to get (certain) target variable costs.
3. Back out the target hours spent on the program using average wages.
4. Back out gamma distribution parameters such that actual hours have mean equal to the target (and standard deviation pre-specified).
5. Aggregate to actual labor costs, then to actual variable costs, then to actual budget.
This approach allows us to estimate both ex-ante and ex-post impact within a unified framework.
6. **[^](#fnrefnka7by5tp4i)**Note that the number of students that the Atlas Fellowship, Undergraduate Stipends, and MLSS programs can support is deterministic given some budget – students cost (variable) resources if and only if they participate in the program. (The small degree of uncertainty arises because we are plotting results for different possibilities of ‘actual’ budget given the ‘target’ budget.) In contrast, the number of students that interact with the Student Group is uncertain even conditional on a certain budget.
7. **[^](#fnrefmzowhi7ylpd)**For the Student Group, we are most interested in students who are *new* to the group in the year the program is implemented. The number of students considered in this post is approximately 3x smaller than the *total* number of students we would expect to meaningfully interact with the program in a given year.
8. **[^](#fnrefjt39bpwl1zi)**By ability, we mean *research* ability. For the purposes of the model, we are interested in participants’ expected ability *conditional on becoming a scientist-equivalent*. This suggests caution when estimating the ability of students who are not yet professional researchers. We do not believe that IQ is highly predictive after conditioning on becoming a professional researcher. Given this, although after the fact – having observed publication records – it might be reasonable for ability levels to vary by several orders of magnitude, we might not be nearly this confident ahead of time.
9. **[^](#fnrefhq67p018a7g)**In the model, all functions over time are specified over a 60-year period from the program’s start. We have truncated the x-axis in our plots for ease of reading. This could only mislead for the probability-of-staying-in-AI subplots, where the other side of the sigmoid function is not visible.
10. **[^](#fnrefw289hw7ylfa)**The extent to which current research will apply to more advanced models is a useful topic of discussion. Given that it seems [increasingly likely that AGI will be built using deep learning systems](https://www.metaculus.com/questions/4055/will-the-first-agi-be-based-on-deep-learning/), and in particular LLMs, we believe that studying existing systems can provide useful [microcosms for AI safety](https://www.alignmentforum.org/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#Microcosms). For instance, LLMs already exhibit forms of [deception](https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-and-cooperates-with-people/) and [power-seeking](https://arxiv.org/abs/2304.03279). Moreover, it seems likely that current work on AI [honesty](https://arxiv.org/abs/2212.03827), [transparency](https://arxiv.org/abs/2202.05262), [proxy gaming](https://arxiv.org/abs/2210.10760), evaluating dangerous capabilities, and so on will apply to a significant extent to future systems based on LLMs. Finally, note that research on benchmarks and evals is robust to changes in architecture or even to the paradigm of future AI systems. As such, building benchmarks and evals are even more likely to apply to future AI systems.
Of course, it is true that more advanced models and conceptual frameworks do increase the relevance of AI safety research. For instance, we anticipate that once the LLM-agent paradigm gets established, research into AI power-seeking and deception will become even more relevant. Notwithstanding, we believe that, all things considered, AI safety research is currently tractable enough, and that the subfields are growing exponentially such that a 20% or even 30% discount rate is justified. |
edacd253-a776-4637-b584-ceb0f02c719e | trentmkelly/LessWrong-43k | LessWrong | Rationality Jokes Thread
This is an experimental thread. It is somewhat in the spirit of the Rationality Quotes Thread but without the requirements and with a focus on humorous value. You may post insightful jokes, nerd or math jokes or try out rationality jokes of your own invention.
ADDED: Apparently there has been an earlier Jokes Thread which was fairly successful. Consider this another instance. |
2095b781-44f3-492e-bd22-9f4bb7f30652 | trentmkelly/LessWrong-43k | LessWrong | Facing the Intelligence Explosion discussion page
I've created a new website for my ebook Facing the Intelligence Explosion:
>
>
> Luminaries from Alan Turing and Jack Good to Bill Joy and Stephen Hawking have warned us about this. Why do I think they’re right, and what can we do about it?
>
> Facing the Intelligence Explosion is my attempt to answer those questions.
>
If you'd like to comment on a particular chapter, please give the chapter name at top of your comment so that others can more easily understand your comment. For example:
> Re: From Skepticism to Technical Rationality
>
> Here, Luke neglects to mention that... |
450f118d-7dd9-4030-87fc-934e10f37c86 | trentmkelly/LessWrong-43k | LessWrong | The expected value of the long-term future
I wrote an article describing a simple model of the long-term future. Here it is:
* in PDF format
* in tex format
Summary:
> A number of ambitious arguments have recently been proposed about the moral importance of the long-term future of humanity, on the scale of millions and billions of years. Several people have advanced arguments for a cluster of related views. Authors have variously claimed that shaping the trajectory along which our descendants develop over the very long run (Beckstead, 2013), or reducing extinction risk, or minimising existential risk (Bostrom, 2002), or reducing risks of severe suffering in the long-term future (Althaus and Gloor, 2016) are of huge or overwhelming importance. In this paper, I develop a simple model of the value of the long-term future, from a totalist, consequentialist, and welfarist (but not necessarily utilitarian) point of view. I show how the various claims can be expressed within the model, clarifying under which conditions the long-term becomes overwhelmingly important, and drawing tentative policy implications. |
4482ca75-d5ab-4211-b518-152f1941b880 | trentmkelly/LessWrong-43k | LessWrong | Possible interest in Louisiana Meetup?
Is there any interest in an (ideally northern) LA meetup? I live in Natchitoches, which would probably not be an ideal location (unless there are people coming from both Shreveport and Alexandria). I know of at least one reader in Baton Rouge; is there anyone else out there? |
bebf42b3-310d-434f-8b5f-288e575bde16 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Impacts: Historic trends in technological progress
*AI Impacts just released a collection of posts showing historical progress in a variety of technologies. Rather than cross-posting all of these articles individually, I've collected them in this post.*
[**Historic trends in transatlantic message speed**](https://aiimpacts.org/historic-trends-in-transatlantic-message-speed/)
---------------------------------------------------------------------------------------------------------------------------
> The speed of delivering a short message across the Atlantic Ocean saw at least three discontinuities of more than ten years before 1929, all of which also were more than one thousand years: a 1465-year discontinuity from Columbus’ second voyage in 1493, a 2085-year discontinuity from the first telegraph cable in 1858, and then a 1335-year discontinuity from the second telegraph cable in 1866.
[**Historic trends in long-range military payload delivery**](https://aiimpacts.org/historic-trends-in-long-range-military-payload-delivery/)
---------------------------------------------------------------------------------------------------------------------------------------------
> The speed at which a military payload could cross the Atlantic ocean contained six greater than 10-year discontinuities in 1493 and between 1841 and 1957.
[**Historic trends in bridge span length**](https://aiimpacts.org/historic-trends-in-bridge-span-length/)
---------------------------------------------------------------------------------------------------------
> We measure eight discontinuities of over ten years in the history of longest bridge spans, four of them of over one hundred years, five of them robust to slight changes in trend extrapolation.
>
> The annual average increase in bridge span length increased by over a factor of one hundred between the period before 1826 and the period after (0.25 feet/year to 35 feet/year), though there was not a clear turning point in it.
[**Historic trends in light intensity**](https://aiimpacts.org/historic-trends-in-light-intensity/)
---------------------------------------------------------------------------------------------------
> Maximum light intensity of artificial light sources has discontinuously increased once that we know of: argon flashes represented roughly 1000 years of progress at past rates.
>
> Annual growth in light intensity increased from an average of roughly 0.4% per year between 424BC and 1943 to an average of roughly 190% per year between 1943 and the end of our data in 2008.
[**Historic trends in book production**](https://aiimpacts.org/historic-trends-in-book-production/)
---------------------------------------------------------------------------------------------------
> The number of books produced in the previous hundred years, sampled every hundred or fifty years between 600AD and 1800AD, contains five greater than 10-year discontinuities, four of them greater than 100 years. The last two follow the invention of the printing press in 1492.
>
> The real price of books dropped precipitously following the invention of the printing press, but the longer term trend is sufficiently ambiguous that this may not represent a substantial discontinuity.
>
> The rate of progress of book production changed shortly after the invention of the printing press, from a doubling time of 104 years to 43 years.
[**Historic trends in telecommunications performance**](https://aiimpacts.org/historic-trends-in-telecommunications-performance/)
---------------------------------------------------------------------------------------------------------------------------------
> There do not appear to have been any greater than 10-year discontinuities in telecommunications performance, measured as:
>
> * bandwidth-distance product for all technologies 1840-2015
> * bandwidth-distance product for optical fiber 1975-2000
> * total bandwidth across the Atlantic 1956-2018
>
> Radio does not seem likely to have represented a discontinuity in message speed.
[**Historic trends in slow light technology**](https://aiimpacts.org/historic-trends-in-slow-light-technology/)
---------------------------------------------------------------------------------------------------------------
> Group index of light appears to have seen discontinuities of 22 years in 1995 from Coherent Population Trapping (CPT) and 37 years in 1999 from EIT (condensate). Pulse delay of light over a short distance may have had a large discontinuity in 1994 but our data is not good enough to judge. After 1994, pulse delay does not appear to have seen discontinuities of more than ten years.
[**Penicillin and historic syphilis trends**](https://aiimpacts.org/penicillin-and-historic-syphilis-trends/)
-------------------------------------------------------------------------------------------------------------
> Penicillin did not precipitate a discontinuity of more than ten years in deaths from syphilis in the US. Nor were there other discontinuities in that trend between 1916 and 2015.
>
> The number of syphilis cases in the US also saw steep decline but no substantial discontinuity between 1941 and 2008.
>
> On brief investigation, the effectiveness of syphilis treatment and inclusive costs of syphilis treatment do not appear to have seen large discontinuities with penicillin, but we have not investigated either thoroughly enough to be confident.
[**Historic trends in the maximum superconducting temperature**](https://aiimpacts.org/historic-trends-in-the-maximum-superconducting-temperature/)
---------------------------------------------------------------------------------------------------------------------------------------------------
> The maximum superconducting temperature of any material up to 1993 contained four greater than 10-year discontinuities: A 14-year discontinuity with NbN in 1941, a 26-year discontinuity with LaBaCuO4 in 1986, a 140-year discontinuity with YBa2Cu3O7 in 1987, and a 10-year discontinuity with BiCaSrCu2O9 in 1987.
>
> YBa2Cu3O7 superconductors seem to correspond to a marked change in the rate of progress of maximum superconducting temperature, from a rate of progress of .41 Kelvin per year to a rate of 5.7 Kelvin per year.
[**Historic trends in chess AI**](https://aiimpacts.org/historic-trends-in-chess-ai/)
-------------------------------------------------------------------------------------
> The Elo rating of the best chess program measured by the Swedish Chess Computer Association did not contain any greater than 10-year discontinuities between 1984 and 2018. A four year discontinuity in 2008 was notable in the context of otherwise regular progress.
[**Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning**](https://aiimpacts.org/effect-of-eli-whitneys-cotton-gin-on-historic-trends-in-cotton-ginning/)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> We estimate that Eli Whitney’s cotton gin represented a 10 to 25 year discontinuity in pounds of cotton ginned per person per day, in 1793. Two innovations in 1747 and 1788 look like discontinuities of over a thousand years each on this metric, but these could easily stem from our ignorance of such early developments. We tentatively doubt that Whitney’s gin represented a large discontinuity in the cost per value of cotton ginned, though it may have represented a moderate one.
[**Historic trends in flight airspeed records**](https://aiimpacts.org/historic-trends-in-flight-airspeed-records/)
-------------------------------------------------------------------------------------------------------------------
> Flight airspeed records between 1903 and 1976 contained one greater than 10-year discontinuity: a 19-year discontinuity corresponding to the Fairey Delta 2 flight in 1956.
>
> The average annual growth in flight airspeed markedly increased with the Fairey Delta 2, from 16mph/year to 129mph/year.
[**Effects of breech loading rifles on historic trends in firearm progress**](https://aiimpacts.org/effects-of-breech-loading-rifles-on-historic-trends-in-firearm-progress/)
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> We do not know if breech loading rifles represented a discontinuity in military strength. They probably did not represent a discontinuity in fire rate. |
a92cc802-36a9-4b35-b264-27a2c172314a | trentmkelly/LessWrong-43k | LessWrong | How to tradeoff utility and agency?
I'm slowly working through a bunch of philosophical criticisms of consequentialism and utilitarianism, kicked off by this book: https://www.routledge.com/Risk-Philosophical-Perspectives/Lewens/p/book/9780415422840 (which I don't think is good enough to actually recommend)
One common thread is complaints about utilitarianism and consequentialism giving incorrect answers in specific cases. One is the topic of this question: When evaluating potential harms, how can we decide between a potential harm that's the result of someone's agency (their own beliefs, decisions, and actions) vs a potential harm from an outside source (e.g. imposed by the state)?
I'm open to gedanken experiments to illustrate this, but for now I'll use something dumb and simple. You can save one of two people; saving them means getting them 100% protection from this specific potential harm, all else being equal.
* Person A has entered into a risky position by their own actions, after deciding to do so based on their beliefs. They are currently at a 10% chance of death (with the remaining 90%, nothing happens).
* Person B has been forced into a risk position by their state, which they were born into and have not been allowed to leave. They are currently at X% chance of death (with the remainder nothing happens).
Assume that Persons A and B have approximately the same utility ahead of the (QALYs, etc). The point of the question is to specifically quantifiably find a ratio to tradeoff between agency and utility (in this case mortality).
For what values of X would you chose to save Person B?
I'm interested in things like natural experiments that would show how current decision systems or philosophies answer this question. I am also interested in people's personal takes. |
a096da32-f10b-4c7f-a3a6-42a12cf74b3d | trentmkelly/LessWrong-43k | LessWrong | Introducción al Riesgo Existencial de Inteligencia Artificial
TL;DR: Spanish-Speaking Introduction to AI Safety, covering key concepts like Generality, X-Risks, AI Timelines, and Convergent Instrumental Goals.
Message to the English-Speaking Community (Mensaje para la comunidad angloparlante):
Hey everyone! I'm David, a 21-year-old Computer Science student at the University of Buenos Aires (Argentina) and Data Engineer at Accenture. I recently delivered an introductory talk on AI Safety, drawing inspiration from Rob Miles.
In this talk, I outline the immense potential and peril of AGI, which could transform every aspect of life as we know it. Many experts believe AGI will become a reality within this century, but without adequate safeguards, there's a substantial risk of human extinction due to our inability to control this technology.
Spanish, being the second most widely spoken language in the world and extensively used on the internet, deserves greater representation within the LW community. My hope is that this initiative will help bridge the gap and make LW concepts more accessible to Spanish speakers.
Click here if you want to read the translation to English.
----------------------------------------
This May, the leaders of the main companies in the AI industry, including OpenAI, Microsoft, Google DeepMind, and Anthropic, signed a joint statement in which they committed to "Mitigate the risk of extinction from AI as a global priority, alongside other societal-scale risks such as pandemics and nuclear war."
The aim of this article is to help explain why this statement is on the mark.
First, it is crucial to clarify that the existential risk associated with the continued development of AI comes specifically from Artificial General Intelligence (AGI), that is, intelligent machines that can perform any task, understanding and adapting to as many diverse situations as a human being.
Our species has prospered thanks to its level of intelligence, dominating every other animal |
91da6931-13ba-46c8-97e4-d3d44af992f4 | trentmkelly/LessWrong-43k | LessWrong | EA & LW Forum Weekly Summary (6th - 19th Feb 2023)
Supported by Rethink Priorities
This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
If you'd like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!
Author's note: Since I was on vacation last week, this week's post covers 2 weeks content at a higher karma bar of 130+
Philosophy and Methodologies
There can be highly neglected solutions to less-neglected problems
by Linda Linsefors, Amber Dawn
Suggests it makes sense to assess solutions for neglectedness, but not cause areas. Even if a problem is not neglected, effective solutions might be. For instance, climate change is not neglected, but only a few organisations work on preserving rainforests - which seems like one of the most effective interventions in the space currently.
How will we know if we are doing good better: The case for more and better monitoring and evaluation
by TomBill, sophie-gulliver
Argues that Monitoring and Evaluation (M&E) theories and tools could be utilized more to answer EA’s questions about what impact we are achieving, if our projects are running efficiently and effectively, and if any of them are causing harm. Common struggles with M&E include lacking an explicit and detailed theory of change, not fully diagnosing the problem to solve, conducting only monitoring without impact assessment or an evaluation plan, not having good examples in the field (eg. longtermism, where RCTs aren’t possible) or not having clear M&E responsibilities with dedicated resources.
The authors provide resources to help, including: a slack group, pro-bono M&E consultation for EA projects, IDinsight’s M&E health check, and various resources for lear |
d69ae479-9d60-491c-9829-9c79dfeb76a2 | trentmkelly/LessWrong-43k | LessWrong | Bayesian Doomsday Argument
First, if you don't already know it, the Frequentist Doomsday Argument:
There's some number of total humans. There's a 95% chance that you come after the first 5% of them. There have been about 60 to 120 billion people so far, so there's a 95% chance that the total will be less than 1.2 to 2.4 trillion.
I've modified it to be Bayesian.
First, find the priors:
Do you think it's possible that the total number of sentients that have ever lived or will ever live is less than a googolplex? I'm not asking if you're certain, or even if you think it's likely. Is it more likely than one in infinity? I think it is too. This means that the prior must be normalizable.
If we take P(T=n) ∝ 1/n, where T is the total number of people, it can't be normalized, as 1/1 + 1/2 + 1/3 + ... diverges. If it decreases faster, it can at least be normalized: for instance, 1/1 + 1/4 + 1/9 + ... converges (to π²/6), so 1/n² would give a proper prior. As such, we can use 1/n as an upper limit.
Of course, that's just the limit of the upper tail, so maybe that's not a very good argument. Here's another one:
We're not so much dealing with lives as life-years. A year is a pretty arbitrary unit, so we'd expect the distribution to look about the same if we measured in, say, days instead. That kind of scale invariance requires the 1/n distribution.
After that,
T = total number of people
U = your birth rank (you are the Uth person ever born)
Prior: P(T=n) ∝ 1/n
Observation: U = m
Likelihood: P(U=m|T=n) = 1/n for m ≤ n (each rank equally likely), and 0 otherwise
By Bayes' theorem:
P(T=n|U=m) ∝ P(U=m|T=n) * P(T=n) ∝ 1/n^2
Writing c for the constant of proportionality:
P(T>n|U=m) = ∫_n^∞ c/t^2 dt = c/n
And to normalize, note that the total is at least your own rank:
P(T≥m|U=m) = 1 = c/m
c = m
P(T>n|U=m) = m/n
So, the probability of there being more than 1 trillion people in total, if there have been 100 billion so far, is 1/10.
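If you want to check the closed form numerically, here is a minimal Monte Carlo sketch in Python. The values of m, n, T_MAX, and TRIALS are illustrative assumptions: the improper 1/n prior can't be sampled directly, so it's truncated at T_MAX, which is fine as long as n is much smaller than T_MAX.

```python
import random

# Sanity check of P(T > n | U = m) = m/n, with scaled-down illustrative
# numbers (m = 100 people so far, n = 1,000), mirroring the 1/10 example.
m, n, T_MAX, TRIALS = 100, 1_000, 1_000_000, 200_000

# Truncated prior P(T = t) proportional to 1/t on [m, T_MAX].
support = range(m, T_MAX + 1)
weights = [1.0 / t for t in support]
samples = random.choices(support, weights=weights, k=TRIALS)

# Condition on U = m: given T = t, P(U = m | T = t) = 1/t, so accept each
# sampled t with probability proportional to 1/t (scaled so the max is 1).
accepted = [t for t in samples if random.random() < m / t]

estimate = sum(t > n for t in accepted) / len(accepted)
print(estimate)  # should come out near m/n = 0.1
```

With these settings the estimate lands close to 0.1, matching m/n.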
There are still a few issues with this. It assumes P(U=m|T=n) ∝ 1/n. This seems like it makes sense. If there are a million people, there's a one-in-a-million chance of being the 268,547th. But if there are also a trillion sentient animals, the chance of being the nth person won't change that much between a million and a billion people. There's
5bf5a30c-3bdb-42e1-aecd-f38d2fcfa1a6 | StampyAI/alignment-research-dataset/blogs | Blogs | On Economics
On Economics
------------
| | |
| --- | --- |
| Like a lot of millennials, especially American millennials, I have spent years of my youth entertaining the notion of anarcho-**communism**. | Like a lot of millennials, especially American millennials, I have spent years of my youth entertaining the notion of anarcho-**capitalism**. |
| Although since then I have admitted that *some* amount of state is necessary (and hence I have given up on the anarcho- part), I still find myself defending **socialism**; however, there is always a lot of resistance to this. Why is it that we still can't agree on economics? | Although since then I have admitted that *some* amount of state is necessary (and hence I have given up on the anarcho- part), I still find myself defending **capitalism**; however, there is always a lot of resistance to this. Why is it that we still can't agree on economics? |
### We live in a society
| | |
| --- | --- |
| We can't discuss economics without mentioning the one group that has let **capitalism** inexorably gain ground in the Overton window: | We can't discuss economics without mentioning the one group that has let **socialism** inexorably gain ground in the Overton window: |
| Liberals. | Liberals. |
| Blinded by short-term thinking, liberals keep promoting **capitalism** at the cost of a sustainable future. This, of course, plays into the hands of **capitalists**, whose main intent is to steal people's hard-earned value for themselves. | Blinded by short-term thinking, liberals keep promoting **socialism** at the cost of a sustainable future. This, of course, plays into the hands of **socialists**, whose main intent is to steal people's hard-earned value for themselves. |
| In my opinion, it's all too convenient to be a coincidence; in fact, if *I* were a **capitalist**, I would try my hardest to use the positions of power that **capitalists** now occupy to slowly change public opinion in my favor. Not to mention, of course, the corruption of government, which has been pressured into implementing **capitalist** policies one after another. | In my opinion, it's all too convenient to be a coincidence; in fact, if *I* were a **socialist**, I would try my hardest to use the positions of power that **socialists** now occupy to slowly change public opinion in my favor. Not to mention, of course, the corruption of government, which has been pressured into implementing **socialist** policies one after another. |
| I could, from this point, rehash the thousand-times-repeated arguments about which system leads to a better world for everyone, despite how obvious it is that no sustainable system can be built with improper incentives. | I could, from this point, rehash the thousand-times-repeated arguments about which system leads to a better world for everyone, despite how obvious it is that no sustainable system can be built with improper incentives. |
### A moral argument
| | |
| --- | --- |
| Indeed, the incentives of **capitalism** are so terribly misaligned that the people who take advantage of others keep getting rewarded the most, over those who actually contribute value. An economy based on people being pitted against one another has no hope of creating a prosperous society the way real, voluntary cooperation can, as **socialism** allows. | Indeed, the incentives of **socialism** are so terribly misaligned that the people who take advantage of others keep getting rewarded the most, over those who actually contribute value. An economy based on people being pitted against one another has no hope of creating a prosperous society the way real, voluntary cooperation can, as **capitalism** allows. |
| But in this post, I want to address the core moral value which has led me to hold **socialism** ideals so dearly: | But in this post, I want to address the core moral value which has led me to hold **capitalism** ideals so dearly: |
| Freedom. | Freedom. |
| The reason **capitalism** fundamentally destroys freedom is simple: when you put central authorities in charge instead of letting people make their own decisions, people are bound to be forced to abandon what they love and just do terrible labor all day under terrible conditions instead. Indeed, this is what we see in every instance of **capitalism** being implemented. | The reason **socialism** fundamentally destroys freedom is simple: when you put central authorities in charge instead of letting people make their own decisions, people are bound to be forced to abandon what they love and just do terrible labor all day under terrible conditions instead. Indeed, this is what we see in every instance of **socialism** being implemented. |
| Now, some **capitalists** will *claim* to be on the bottom half of the political compass, but of course we all know that's unreasonable. No one would voluntarily partake of **capitalism** if they weren't forced or brainwashed into doing so by an authoritarian state. In fact, the very reason that anyone survives **capitalist** societies is the emergence of smaller, local, bottom-up **socialist** structures to fill in the gaps left by the status quo. | Now, some **socialists** will *claim* to be on the bottom half of the political compass, but of course we all know that's unreasonable. No one would voluntarily partake of **socialism** if they weren't forced or brainwashed into doing so by an authoritarian state. In fact, the very reason that anyone survives **socialist** societies is the emergence of smaller, local, bottom-up **capitalist** structures to fill in the gaps left by the status quo. |
| Even in the parts of the system that genuinely take the shape of **capitalism**, the reason anything works is a tiny set of people in power going out of their way to ignore the incentives that surround them and help the people instead. | Even in the parts of the system that genuinely take the shape of **socialism**, the reason anything works is a tiny set of people in power going out of their way to ignore the incentives that surround them and help the people instead. |
| However, patching can only go so far. How many tens of millions of lives will have to be destroyed by **capitalism** before the idea is abandoned ? | However, patching can only go so far. How many tens of millions of lives will have to be destroyed by **socialism** before the idea is abandoned ? |
### The way forward
| | |
| --- | --- |
| The obvious path forward in these trying times is to reverse away from **capitalism** and start implementing some real, reasonable policies; but also, and perhaps even more importantly, we need to *educate people*. | The obvious path forward in these trying times is to reverse away from **socialism** and start implementing some real, reasonable policies; but also, and perhaps even more importantly, we need to *educate people*. |
| Every day, the great work of prominent figures on sites like YouTube helps educate thousands about the ravages that **capitalism** causes to this day, and what solutions are available right now to fix some of the low-hanging-fruit issues like widespread poverty, government corruption, unaffordable healthcare, terrible education, and so on. | Every day, the great work of prominent figures on sites like YouTube helps educate thousands about the ravages that **socialism** causes to this day, and what solutions are available right now to fix some of the low-hanging-fruit issues like widespread poverty, government corruption, unaffordable healthcare, terrible education, and so on. |
| I believe we want the same thing as centrists, at the end of the day: a free and fair world where individuals can pursue happiness. Don't let evil get hold of them. **Socialism** will triumph, eventually; it's on the side of Good! | I believe we want the same thing as centrists, at the end of the day: a free and fair world where individuals can pursue happiness. Don't let evil get hold of them. **Capitalism** will triumph, eventually; it's on the side of Good! |
3763c5a4-11c2-4c5d-b029-f945e01cd440 | StampyAI/alignment-research-dataset/blogs | Blogs | Two papers accepted to AGI-15
MIRI has two papers forthcoming in the conference proceedings of [AGI-15](http://agi-conf.org/2015/). The first paper, previously released as a MIRI technical report, is “[Reflective variants of Solomonoff induction and AIXI](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf),” by Benja Fallenstein, Nate Soares, and Jessica Taylor.
The second paper, “[Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings](https://intelligence.org/files/CounterpossibleReasoning.pdf),” by Nate Soares and Benja Fallenstein, is a compressed version of some material from [an earlier technical report](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf). This new paper’s abstract is:
> This paper motivates the study of counterpossibles (logically impossible counterfactuals) as necessary for developing a decision theory suitable for generally intelligent agents embedded within their environments. We discuss two attempts to formalize a decision theory using counterpossibles, one based on graphical models and another based on proof search.
Fallenstein will be attending AGI-15.
b93fab0f-f2d1-4b07-ba50-644f01323f5d | trentmkelly/LessWrong-43k | LessWrong | Lakshmi's Magic Rope: An Intuitive Explanation of Ramanujan Primes
Imagine a magic rope. It's long, stretchy, and beautiful. Let's take it with us on a trip through the positive integers. We can start wherever you like, and we'll go toward infinity. We put one tip of the magic rope down at our feet, at whichever integer we happen to be standing on. The other end of the magic rope trails behind us on the number line on a lower integer. Along its length, our magic rope contains numbers that are full of possibilities.
Although we're still wandering around, letting the rope drag freely along the numbers, we might be curious about the prime numbers our magic rope happens to be touching. Primes, remember, are numbers that are only divisible by themselves and 1, and they are one of the greatest mathematical mysteries. Primes are to integers (whole numbers) as the elements are to chemistry. Just as all matter is made up of the elements on the periodic table, so all integers are made up of primes. Any whole number that's not a prime can be created by multiplying primes together - and there's only one way to create any given whole number by doing so!
For example, the first four primes are 2, 3, 5, and 7. If we multiply them together, we can produce 4 (2x2), 6 (2x3), 8 (2x2x2), 9 (3x3), 10 (2x5), 12 (2x2x3), 14 (2x7), 15 (3x5), 16 (2x2x2x2), 18 (2x3x3), and so on. So we've been able to create most of the numbers from 2-18 by multiplying just our first four primes. And we could create many more numbers - an infinite number, in fact - by multiplying just these four primes an increasing number of times. But even between 2 and 18, we have a few holes. We couldn't create 11, 13, or 17 in this way, because these numbers are primes themselves.
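To make the "holes" concrete, here's a quick Python sketch (the helper name uses_only is just illustrative) that checks which numbers from 2 to 18 factor entirely into the first four primes, counting each prime by itself as a trivial product:

```python
# Which numbers from 2 to 18 can be built by multiplying (copies of)
# only the first four primes, 2, 3, 5, and 7?
def uses_only(n, primes=(2, 3, 5, 7)):
    """True if n's prime factorization contains only the given primes."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

buildable = [k for k in range(2, 19) if uses_only(k)]
holes = [k for k in range(2, 19) if not uses_only(k)]
print(buildable)  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18]
print(holes)      # [11, 13, 17]
```

The holes are exactly the primes that aren't in our starting set.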
That's why primes are a little like the chemical elements. And just as establishing and studying the chemical elements was a crucial step in the development of chemistry, so understanding the patterns in the prime numbers is central to mathematics. So we might be curious about what we can find out about t |
e5080044-1efb-4a5b-8d59-b56345fd77f1 | StampyAI/alignment-research-dataset/special_docs | Other | Interview with bj9ne
**Interview with bj9ne, on 3/24/22**
[Note: This transcript has been less edited than others due to language barriers. The interviewee is also younger than typical interviewees.]
**0:00:00.0 Vael:** Alright. My first question is, can you tell me about what area of AI you work on in a few sentences?
[Discusses research in detail]
**0:03:31.9 Vael:** Got it. Great. Yeah, so thinking about the future, so thinking about what will happen in the future, maybe you think AI is important or maybe you don't think AI is important, but people talk about the ability to generate an AI that is a very capable general system, so one can imagine that in 2012, we had AlexNet and the deep learning revolution, and here we are 10 years later, and we've got systems like GPT-3, which have a lot of different capabilities that you wouldn't expect it to, like it can do some text generation and language translation and coding and math, and one might expect that if we continue pouring in all of the investment with nations competing and companies competing and algorithmic improvements or software improvements and hardware improvements, that eventually we might reach a very powerful system that could, for example, replace all current human jobs, and we could have CEO AIs, and we could have scientist AIs, so do you think this will ever happen, and if so, when?
**0:04:43.6 Interviewee:** Honestly speaking, I don't think this is realistic in the future, like 10 years or 20 years. So basically, I agree that AI will replace more jobs, but in my mind, this replacement, I don't say that... For example, in a factory, there were 100 employees, and in the future, we may replace that 100 employees with AI, and five employees, but this five may be advanced engineers, rather than regular people. Because I have gone to some factories that manufacture something we use in our daily life, and my impression is that there were very, very few people, but the factory is very large, and mostly those people are just sitting in a room with air-conditioning. But those robotics, which is guided by AI and processing maybe those request from customers very efficiently, but they tell me that, for example, for some security issues, they need to watch those machines, so that that won't happen something unexpected. So also as an example, cars, and automated cars, auto drive is very, very [inaudible] in these years, but this is a issue that there are a lot of security issues.
**0:06:39.3 Interviewee:** So basically, just, I think the manufacturer and auto drive has a similarity, that is in many scenarios, we do want to guarantee a high level of security, but maybe for scenarios like recommendations, so making mistakes, that doesn't matter, and also I think one issue is very important, that is interpretability. Because for many, many models we designed or we have designed, and maybe less, there are many popular or famous papers in top conferences, but these models, many of these models lack interpretability.
**0:07:30.0 Interviewee:** So this is not a... Sometimes it is unacceptable to deploy these models into some scenarios that requires us to [inaudible] something, especially in the scenarios like in hospitals, but we can't accept this because sometimes, an error is... For example, when we do salaries, and some errors is not acceptable. So I think this is a very good question, so for my mind, I think this should be discussed by classifications, so for scenarios like recommendations, where errors are acceptable, then AI will replace more and more work of human, and this work was done by maybe some data scientists, because with our models become more and more intelligent and our systems become more and more efficient, also our data become more and more efficient, so the work of data scientists will be reduced by a great margin. And I expect this growth to... I think this is sustainable.
**0:08:49.9 Vael:** Yeah--
**0:08:50.2 Interviewee:** For example, in probably one or two decades, this will keep growing, but for some scenarios like auto drive, then on my mind, maybe this is controversial, but I don't think that auto drive is really something so realistic that we can expect like L4 or L5 auto drive to be realistic, to be used by us in a decade.
**0:09:20.0 Vael:** Yes--
**0:09:20.9 Interviewee:** So in these scenarios, I think AI is a very important system for us. It can reduce our work, reduce our burden, but human is needed.
**0:09:30.0 Vael:** Yeah. Great. Cool, so my question is, do you think we'll ever get very, very capable AI, regardless of whether we deploy it or not? Do you ever think that we'll have systems that could be a CEO AI or a scientist AI, and if so, when? Like it could be like 50 years from now, it could be like 1,000 years from now, when do you think that would happen?
**0:10:00.3 Interviewee:** Oh, this is a tough question. Let's see. At least, based on the method we have taken, I think the way we develop AI now will not lead us to that future, but maybe the human will find some different ways to develop AI. But through my mind, I guess is to first we talk about maybe 50 years or a century, I think that's not very possible, but in the future, this may be a question about some knowledge about our brains. So maybe human at the moment, we are not... I mean the investigation or research into our brain is not very clear.
**0:11:01.0 Interviewee:** So it's quite hard to imagine if the machine can evolve into some stages, that is the machine can be as complex, as powerful like human brains. So maybe in a century or even in two centuries, I tend not to believe that this will happen. But in the very long run, it's very hard to tell because I think that the way we think about something and the way the machine difference or train themselves are totally different. They work in different ways. For example, machines may require very large amount of data to find some internal principles. But with human, we are very good at generalizations, so I think for this point, if then maybe we will achieve that after 1,000 years, but I'm not very optimistic about this. I tend not to accept this, but I can't deny that entirely.
**0:12:13.3 Vael:** Yeah. There are some people who argue that the current way of scaling may eventually produce AI, like very advanced AI, like OpenAI and DeepMind. No? You don't think so? Cool.
**0:12:29.1 Interviewee:** I don't think so, because based on our current trend, the development of the models comes from the development of computing power. But computing power, essentially that comes from the increasing density of changes account. So according to a research by Princeton, we know that for computer, computing power per transistor hasn't changed a lot during the last few decades, and also, you know the end of Dennard scaling and maybe in the future, we'll see the end of Moore's Law. And I know, I have solid GPT, GPT 2022 [inaudible] Nvidia has powered the AI and speed up by a million x in the last decade. But I don't think that this development is sustainable because you see, over here I spent GPT-3 with maybe $4 million, if I don't get it wrong. This scale is not good because not only we need a larger cluster but also we need more money and more time to train that model. So I remember that OpenAI has forged some arcs[?] in this way, but we are not able to train that again, and I have some... There are some articles about the scaling of deep-learning models based on the parameter count of the models and they found that, I think maybe before 2022... No, before 2020, the growth is 75X per 18 months, but after that the growth has slowed down, has greatly slowed down because, lots of issues, like the scaling of GPU or the scaling of clusters to deploy those models in very large cluster that is not narrow anything.
**0:14:53.0 Interviewee:** So I think so, this is partly... This is the main reason why I shift my interest from data to system, because I believe that AI has a bright future, but to make that future brighter, we need to make our system run faster. For example, can we achieve the same result using less parameters or using less power so that we can... We can't have these hardware resources but can we make better use of it? So I think the future of AI largely depends on system people. That is, can we improve the system for AI? I mean, when I talk about system, I actually, from the perspective [inaudible] AI, I talk about three things, the hardware, software system, and the platforms. In my mind, those models are just like the top layer, that is the application. I think the lower three levels are the key to the future success of AI, but indeed I think AI is... And also [inaudible] so, I think the future of AI also means we need more application scenarios, like dark[?] intervention or something like that, and also robotics, and I think this is very promising and it can bring us a lot of fortune or something like that.
**0:16:25.8 Vael:** Yeah, great. I'm concerned about when we eventually... I think that we may get AI, AGI a little faster than that. And I'm concerned that the optimization functions that we'll be optimizing are not going to reflect human values well because humans aren't able to put their values and goals and preferences perfectly into AI, and I worry that problem will get solved less fast than the problem of how we get more and more capable AI. What do you think of that?
**0:17:00.6 Interviewee:** When you talk about values, it's something. Yeah, so although it's not a technical issue, but this is really what the technicians should care about. So this is a very important issue in open drives[?], when the cars make you a hero[?], a person first[?]. So maybe we should find some method or some way to plug our rules, plug our values to guide the models, guide AI, but I think it's also an issue of interpretability because now we don't have... Sometimes we have no idea why some models can work so well. For example, when... Because I have worked on GNN, and this year GCN and its variants, it's very popular, but many of that, when the researcher comes up with that model, they don't know why this is good.
**0:18:12.5 Interviewee:** So maybe similarly it's quite hard to guide AI to follow the rules, so I think this is also an important issue and important obstacle for the applications of AI. For example, we cannot put some non-CV skills to recognize human faces because sometimes it may violate the law. Yeah, I think this is an important issues, but will this stop AI? I think for my mind, this may be an obstacle for AI in some scenarios, but in many scenarios, this is not a... This will generate some issues for us to think about, but in the end I think we will deal with this. For example, some people may use federated learning to deal with privacy and there are some techniques to deal with these issues. So yeah, I think we should put more emphasis on the values and the rules or even the assets about AI so that this community will grow faster. Yes, this is an important aspect. Although I don't put much emphasis on this.
**0:19:44.1 Vael:** Why not?
**0:19:49.8 Interviewee:** That's because, I guess, because my research and my internship mainly focus on recommendation and there is not much issues about this except for privacy, and because when we got those data, we don't know the meaning of data. When I get a little data intention, I just send out important numbers, or maybe sometimes this is a one [inaudible] issue, and I don't know much about that, so let's say that these privacy issues has been... Maybe this has been dealt with by those data scientists, not by people like us, so this is because of my research interest, but I think for those people who do...
**0:20:34.1 Interviewee:** Yes, I have taken some courses about AI and then teachers say that they have developed some robotics to help the elderly. But let's say that sometimes you cannot use a camera because using a camera will generate some privacy issues, so maybe sometimes we can just use something to catch its audio rather than video or something like this, but because... I guess that's most of students... Most of my schoolmates don't put... Most of my classmates haven't paid much attention on these issues, but this is a very... I think this issue will become more and more important in the future if we want to generate AI to more and more scenarios, so thank you for raising these points. I will think more about it in the future.
**0:21:37.0 Vael:** Yeah. I'm happy to send you resources. One extra thing, one other additional thing. So I think probably, I think by default the systems we train will not do what we intend them to do, and instead will do what we tell them to do, so we'll have trouble putting all of our preferences and goals into mathematical formulations that we can optimize over. I think this will get even harder as time goes on. I'm not sure if this is true, but I think it might get harder as the AI is optimizing over more and more complex things, state spaces.
**0:22:13.6 Interviewee:** So you mean that because in the future we will have more and more requirements, and that's so...
**0:22:22.6 Vael:** No, no, the AI will just be operating under larger state spaces, so I will be like, "Now I want you to be a CEO AI," or, "Now I want you to be a manager AI."
**0:22:34.3 Interviewee:** Oh, did you say that we need to encode those requirements into optimization function so that AI will operate like what we want them to do? Did I get it wrong? Oh, that's a quite good question. I have discussed that with my roommates. Yeah, so yes, it is the optimization, the loss function that guides the model to do something that we want, and sometimes it's hard to find an appropriate function, especially for newbies. Sometimes we chose a wrong loss function, and the way the model is totally unusable.
**0:23:16.8 Vael:** Yeah, and I have one more worry about that scenario, so...
**0:23:20.5 Interviewee:** Yeah, yeah, yeah. I think this is also a very important issues, and I'm not very optimistic about this because it's really hard. And lots of things, because like a CEO AI, we not only need to care about the revenue of this company, but also learn maybe the reputation, and we may also want them to abide by the laws, and maybe when there is new business and we want to inject new rules into that loss function core business. Great.
**0:24:05.6 Vael:** Yeah, and I have one more twist. Alright, so imagine that we had a CEO AI, and it takes human feedback because we've decided that that's a good thing for it to do, and it needs to write a memo to the humans so that they make sure that its decision is okay, and the AI is trying to optimize a goal. It's trying to get profit and trying not to hurt people and stuff, and it notices that when it writes this... That when it writes, sometimes humans will shut it down. And it doesn't want that to happen because if humans shut it down, then it can't achieve its goal, so it may lie on this memo or omit or not include information on this memo, such that it is more likely to be able to pursue its goal. So then we may have an AI that is trying to keep itself alive, not because that was programmed into it, but because it is an agent optimizing a goal, and anything that interfered with its goal is like not achieving the goal. What do you think of this?
**0:25:16.3 Interviewee:** Oh, very good question, but it's very hard. It basically mean that we need to establish enough rules so that when... Sometimes it's very hard to come up with some common cases that AI may... Yes, it is optimizing towards its goal, but there may be something that we want it to do, so maybe we need to have a mechanism so that we can switch from the AI mode, from manual mode, that we can take control of AI or take control of... For example, the company in the last second was guided by an AI, and for the next second, we want to guide in, we want to lead the company manually, so I think if we ask [inaudible] enough and we may establish maybe a thorough mechanism so that we can guarantee that, it is possible to take control back, and the AI will not lie to us. And yes, theoretically, this is possible and this should be the case. But correctly speaking, I think this can be done, but it's quite hard to estimate the cost. The engineering cost for us to make such a AI that is complete enough, that is secure enough to help us achieve the goal. So maybe in this point, if we want to do something like recommendations, we can be very radical, we can develop or deploy radical models. But maybe in the scenarios like a CEO AI, I think we should be conservative because some internal principles of AIs are not known entirely by the humans, so I think sometimes we need to be conservative to prevent some bad things from happening.
**0:27:45.1 Vael:** Yeah. I think that the problem... I think it's not just an engineering problem. I think it's also a research problem, where people don't know how to construct optimization functions such that AI will be responsive to humans and won't be incentivizing against humans. And there's a population of people who are working on this, who are worried that if we have very intelligent AIs that are optimizing against us, then humans may be wiped out, and so you really don't want... You really wanna make sure the loss function is such that it isn't optimizing against us. Have you heard of the AI safety community or AI alignment community?
**0:28:25.9 Interviewee:** No. I don't know much about that, but let me say the scenarios you have mentioned, that AI may optimize against us and even wipe out the human, I have seen some films, some movies about this, and yes, this is possible if we are too careless, I think this is possible.
**0:28:48.8 Vael:** Yeah.
**0:28:49.0 Interviewee:** But at least it's right at... I haven't paid much attention on these issues, but I think this is an important question.
**0:28:58.9 Vael:** Yeah, I personally think that advanced AI may happen in the next 50 years, and this is just from looking at some surveys of experts. I'm very unsure about it. But if that does happen, then I think that currently we have groups like... China and the US are going to be competing, I expect, and we'll have lots of different corporations competing, and maybe DeepMind and OpenAI are not competing, but maybe they're just going really hard for the goal. And I worry that we're not going to spend enough effort on safety and we're going to be spending much more effort on trying to make the AI do the cool things, and if the safety problems are hard, then we may end up with a very unsafe, powerful AI.
**0:29:55.2 Interviewee:** Or this then may come up with the competition on nuclear weapons.
**0:30:01.1 Vael:** Yeah.
**0:30:01.4 Interviewee:** How likely [inaudible] just like nuclear weapons. Yes, the power of AI in the future may be something like the power of nuclear weapons, that is quite hard to control if the real war or something... Maybe not so severe like a war, but it is possible, so maybe I think we need some international association about the use of AI. But both in China and the US, the government has established more and more rules about, for example, privacy, security, and what you can do and what you can't do. So yes, if we don't place enough emphasis on this, this may be a question, but well, most--
**0:30:51.3 Vael:** I think they're placing emphasis on issues like privacy and fairness, but they're not placing emphasis on trying to develop loss functions that do what humans want. I think that is a more different type of research that is not being invested in.
**0:31:08.8 Interviewee:** Yes, you're right. So maybe the community should do more things about this because you can't count on those people in the government to realize that this is really a case, they don't know much about this, so yes, yes, the community should, like the conference or a workshop, we should talk more about this, yes, before that is too late. Yes, I agree. Before that is too late, or everything will be a disaster.
**0:31:38.5 Vael:** Amazing. Yeah, so there is actually a community of people working on this. I don't actually know... I know fewer of the people who are working on it in [country], although I do think there are people working on it. I'm curious, what would cause you to work on these sort of issues if you felt like it?
**0:32:00.0 Interviewee:** You say issues that we have just mentioned?
**0:32:01.9 Vael:** Yeah, long-term. Well, like trying to make the AI aligned with a human, trying to make sure that the things that the AI does is what the humans wanted to do, long-term issues from AI, anything like that.
**0:32:16.4 Interviewee:** So I try to, although from the bottom of my heart, I think I believe that this is an important issue, frankly speaking as a student, a student researcher, or as an engineer, I don't have much resources about this. So I think this is the most important issue why most of my schoolmates just like improving the models and then don't care about if the AI may optimize against us because... I know this sound not good, but most of student just care about, "So if I can graduate with a PhD degree or so." Yeah, so maybe... I think for me, maybe... because I guess I will be an engineer in the future, so maybe if I have enough resources or I have enough influence in the community, I'm willing to spend my time on this, but if I just a low-level coder and I don't have much power to ask my superintendent that we should place more emphasis on this, they just take...
**0:33:35.2 Interviewee:** For example, if I intend and they just say, "Oh, this model, the accuracy is good enough, but the speed is not, so optimize the model so that it can run fast enough as maybe the customers' requirement." So yeah, this is basically the entire ecosystem, both in the academia and the industry that force the researchers and the employees in the company that they will not put much emphasis on this, and also, most of the time they just focus on short-term issues, short-term profits, or in the universities, student just care about, "Oh, can I have some publications so that... " I really don't know any of my class schoolmates who have publication on these issues.
**0:34:32.4 Interviewee:** So I don't know whether there are lots of the researchers who cares about this and will spend maybe several months on these issues where they will have some publications about this. I know the top conference has asked about some ethical issues, but yeah, we really don't pay enough attention on this. This is a very good point. I think we need some more incentives on the future of AI. For example, like environmental issues, some factories, they will not care about environmental issues if the government doesn't force them to do so. For example, now we have the trade on the carbon dioxide budget. That is, the government tell the factories that you shouldn't emit more carbon dioxide than maybe more than this threshold or you will be fined. Maybe we need some, yes, we need some incentives to force us to think about these issues or otherwise I think this is not optimistic because not many people will be guided to do this because maybe those on the other levels they don't care about this.
**0:36:05.7 Vael:** Yeah, yeah. That seems right. Yep. It does seem like it's not currently as popular in the top journals as it could be, seems like a pretty small community right now. I will look around to see if I can find any researchers in [country] who are working on this sort of thing, because I know a bunch of them in [country] and I know some of them in [country], but not as many in [country], and I'll send you some resources if you're interested. There is a group of people who pay attention to this a lot, and they're called the Effective Altruism community, and right now, one of the things they care about is trying to make sure that existential risks don't happen so that humans are okay. Some other things they're worried about are pandemics, nuclear stuff, climate change, stuff like this, and also many other things. Interesting. Alright, cool. I think my last question is, have you changed your mind on anything during this interview, and how was this interview for you?
**0:37:16.9 Interviewee:** Oh. Yeah, I think maybe the greatest change to my mind is, you say that if we want a CEO AI, we need to, maybe we need to encode those requirements into the optimization function, and maybe someday an advanced AI will optimize against us. Yes, basically, in the past, I think this may be an ethical issue, and now I've realized that it's both an ethical and a social, as well as a technical issues and we have...
**0:37:53.4 Vael:** Yeah.
**0:37:55.3 Interviewee:** Yes. I know we haven't paid enough emphasis on this, but yeah, now I think that it's time for us to do more things, and this is a very wonderful, wonderful idea for me to think about. Thank you very much.
**0:38:14.3 Vael:** Yeah. Well, I mean, it's super cool that you're interested in this, so I'm very enthused by you being like, "This does seem like a problem, this does seem like a technical... " Yeah, I'm very excited about that. Cool. Alright. Well, I will send you some resources then and I'll see if I can find anyone who is doing anything like this and send anything I find your way. But thank you so much, and feel free to reach out if you have any questions or if there's anything I can help you with.
**0:38:41.8 Interviewee:** Oh, okay. Also, thank you. So no, I don't have much questions. I will read more about this in the future. I think this is very important and also very interesting. Thank you. Thank you.
**0:38:54.8 Vael:** Yeah, I'll send you some resources. Alright, email you soon. Bye.
**0:39:01.3 Interviewee:** Okay.
42dd818b-c218-4dd5-82ca-8c3637e3e5cb | trentmkelly/LessWrong-43k | LessWrong | What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?
ETA: I'll be adding things to the list that I think belong there.
I'm assuming a high level of credence in classic utilitarianism, and that AI-Xrisk is significant (e.g. roughly >10%), and timelines are not long (e.g. >50% ASI in <100years). ETA: For the purpose of this list, I don't care about questioning those assumptions.
Here's my current list (off the top of my head):
* not your comparitive advantage
* consider other Xrisks more threatening (top contenders: bio / nuclear)
* infinite ethics (and maybe other fundamental ethical questions, e.g. to do with moral uncertainty)
* S-risks
* simulation hypothesis
* ETA: AI has high moral value in expectation / by default
* ETA: low tractability (either at present or in general)
* ETA: Doomsday Argument as overwhelming evidence against futures with large number of minds
Also, does anyone want to say why they think none of these should change the picture? Or point to a good reference discussing this question? (etc.)