id | source | formatted_source | text
|---|---|---|---|
b529ccfc-ae98-4cff-a733-d09253dfad3d | trentmkelly/LessWrong-43k | LessWrong | More Babble
In my last babble, I introduced the Babble and Prune model of thought generation: Babble with a weak heuristic to generate many more possibilities than necessary, Prune with a strong heuristic to find the best, or at least a satisfactory, one. I want to zoom in on this model. If the last babble was colored by my biases as a probabilist, this one is motivated by my biases as a graph theorist.
First, I will speculate on the exact mechanism of Babble, and also highlight the fact that Babble and Prune are independent systems that can be mocked out for unit testing.
Second, I will lather on some metaphors about the adversarial nature of Babble and Prune. Two people have independently mentioned Generative Adversarial Networks to me, a model of unsupervised learning involving two neural nets, Generator and Discriminator. The Artist and the Critic are archetypes of the same flavor - I have argued in the past that the spirit of the Critic is Satan.
Babble is (Sampling From) PageRank
Previously, I suggested that a Babble generator is a pseudorandom word generator, weighted with a weak, local filter. This is roughly true, but spectacularly fails one of the technical goals of a pseudorandom generator: independence. In particular, the next word you Babble is frequently a variation (phonetically or semantically) of the previous one.
PageRank, as far as I know, ranks web pages by the heuristic of "what is the probability of ending up at this page after a random walk with random restarts." That's why a better analogy for Babble is sampling from PageRank i.e. taking a weighted random walk in your Babble graph with random restarts. Jackson Pollock is visual Babble.
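To make "sampling from PageRank" concrete, here is a minimal sketch of a weighted random walk with random restarts over a toy word-association graph. The graph, edge weights, and restart probability are made up purely for illustration; they are not from the original post.

```python
import random

# A toy "Babble graph": each word points to associated words with weights.
# The graph and weights are invented for illustration.
graph = {
    "mark":   {"marks": 3.0, "marsh": 1.0, "dark": 2.0},
    "marks":  {"mark": 3.0, "sparks": 1.0},
    "marsh":  {"mark": 1.0, "march": 2.0},
    "march":  {"marsh": 2.0, "arch": 1.0},
    "dark":   {"mark": 2.0, "bark": 2.0},
    "bark":   {"dark": 2.0, "park": 1.0},
    "sparks": {"marks": 1.0, "park": 1.0},
    "park":   {"bark": 1.0, "sparks": 1.0},
    "arch":   {"march": 1.0},
}

def babble(graph, steps=20, restart_prob=0.15, rng=random):
    """Sample a stream of words via a weighted random walk with random restarts.

    With probability `restart_prob` we jump to a uniformly random node (the
    'random restart'); otherwise we follow an outgoing edge chosen in
    proportion to its weight. The visit frequencies of such a walk
    approximate the PageRank scores of the graph.
    """
    nodes = list(graph)
    current = rng.choice(nodes)
    walk = [current]
    for _ in range(steps - 1):
        neighbors = graph.get(current)
        if not neighbors or rng.random() < restart_prob:
            current = rng.choice(nodes)  # random restart
        else:
            words, weights = zip(*neighbors.items())
            current = rng.choices(words, weights=weights, k=1)[0]
        walk.append(current)
    return walk

print(" ".join(babble(graph)))
```

Each run produces a different stream of loosely associated words, with occasional jumps to unrelated ones - which is roughly what Babble feels like from the inside.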
Imagine you're playing a game of Scrabble, and you have the seven letters JRKAXN. What does your algorithm feel like?
You scan the board and see an open M. You start Babbling letter combinations that might start with M: MAJR, MRAJ, MRAN, MARN, MARX (oops, proper noun), MARK (great!). That's the weighted random walk. You set MARK aside |
763a6e87-89e6-4929-b491-17d78a14227b | trentmkelly/LessWrong-43k | LessWrong | Linking to Other Resources
Lesser Wrong is De Facto the Main Site. Given this situation, it should take on the role of central hub from which you can access all of the other resources within the rationality community. This will be incredibly easy: someone could just write up a post on Resources and it could be added to the sidebar, just as the about page currently works.
Examples of links we could include on the Resources Page:
* Meetups
* LW Study Hall
* Wiki
* Reddit
* Effective Altruism Forum
Etc...
Anyway, what do people think of this idea? |
4b0965a8-8371-42ae-a30d-59c22874ba8a | trentmkelly/LessWrong-43k | LessWrong | Should AI safety be a mass movement?
When communicating about existential risks from AI misalignment, is it more important to focus on policymakers/experts/other influential decisionmakers or to try to get the public at large to care about this issue?[1] I lean towards it being overall more important to communicate to policymakers/experts rather than the public. However, it may be valuable for certain individuals/groups to focus on the latter, if that is their comparative advantage.
Epistemic status
The following is a rough outline of my thoughts and is not intended to be comprehensive. I'm uncertain on some points, as noted, and I am interested in counterarguments.
Reasons for x-risk to be a technocratic issue rather than a public conversation
1. Communicating to a narrower audience makes it more likely that the issue can remain non-partisan and not divisive. Conversely, if the public becomes divided into "pro-safety" and "anti-safety" camps, potentially among partisan lines, then:
1. It will be harder to cooperate to reduce risk with the "anti-safety" party and voters/groups aligned with it.
2. It will also be more likely that AI policy and strategy will take place within the broader ideological paradigm of the pro-safety party; any legitimate concerns that don't fit within this paradigm are less likely to be addressed, compared to if AI safety is apolitical.
3. The debate will become less rational.[2]
1. There will be negative epistemic consequences from persuading policymakers as well ("Politics is the mind-killer"), but my sense is that it would be much harder to speak honestly and avoid demagoguery when trying to convince large masses of people. There are all kinds of misconceptions and false memes that spread in popular political debates, and it seems easier to have a more informed conversation if you're talking to a smaller number of people.
2. It's hard to persuade people to believe in and care about a risk that feels remote / hard to understand / weird. Most people |
b11a9637-d32b-42bb-9ab2-0cf70a6838cc | trentmkelly/LessWrong-43k | LessWrong | Stupid Questions - September 2017
The stupid questions thread was one of the regular threads on LessWrong. It's a place where no question is too stupid to be asked, and anybody who answers is encouraged to be kind.
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing. |
0bddf205-abaa-42ee-ba08-af10843ddb9c | trentmkelly/LessWrong-43k | LessWrong | The case for hypocrisy
Related: The case for logical fallacies
Julia Galef makes an interesting point in her recent book The Scout Mindset: our beliefs come as tangled knots, not isolated strings. Changing one belief often implies that we change many others.
Consider Sarah, whose relationships, political beliefs, worldview, daily activities, and ethical code are all fundamentally derived from her religious beliefs. Sarah can’t merely decide that God doesn’t exist or that Hinduism is correct instead of Judaism or whatever; if taken to heart, such a change in worldview would imply reform of virtually every other aspect of her life: her belief that abortion is intrinsically immoral, her belief that contributing significant time and money to her congregation is an ethical and meaningful thing to do, and her belief that it is good and appropriate to go to Synagogue every Friday, among countless others.
If Sarah wishes to maintain a harmonious, coherent set of practices, beliefs, and attitudes, it would take a tremendous amount to convince her that God isn’t real—crucially, more than if this belief were siloed away from the rest of her life and mind.
This isn’t an indictment of religion. It would take an equally huge amount of evidence to convince me that I should convert to orthodox Judaism—more than if my non-religiosity was siloed away from my other beliefs and behaviors.
The key clause, though, is “if Sarah wishes to maintain a harmonious, coherent set of practices, beliefs, and attitudes.” Why should Sarah wish to do so? Why should anyone?
The case against hypocrisy
Before making the case for hypocrisy, let me explain why, in many respects, hypocrisy is bad and maintaining a consistent set of practices and beliefs is good. I’m not just steelmanning to strengthen my later argument; hypocrisy often really is something to be avoided. The word has several definitions, but I’ll use Merriam-Webster’s
> Definition of hypocrisy
> 1. a feigning to be what one is not or to believe what one d |
d7e399c7-c480-4635-8e26-f8376cb045d6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | How To Get Into Independent Research On Alignment/Agency
I’m an independent researcher working on AI alignment and the theory of agency. I’m 29 years old, will make about $90k this year, and set my own research agenda. I deal with basically zero academic bullshit - my grant applications each take about one day’s attention to write (and decisions typically come back in ~1 month), and I publish the bulk of my work right here on [LessWrong](https://www.lesswrong.com/)/[AF](https://www.alignmentforum.org/). Best of all, I work on some [really](https://www.lesswrong.com/tag/embedded-agency) [cool](https://www.lesswrong.com/posts/gQY6LrTWJNkTv8YJR/the-pointers-problem-human-values-are-a-function-of-humans) [technical](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro) [problems](https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents) which I expect are central to the future of humanity.
If your reaction to that is “Where can I sign up?”, then this post is for you.
Background Models
-----------------
### Independence
First things first: the “independent” part of “independent research” means self-employment, and everything that goes with it. It means the onus is on you to figure out what to do, how to provide value, what to prioritize, and what to aim for. In practice, it also usually means “independent” in a broader sense: you won’t have a standard template or agenda to follow. If you go down this path, assume that **you will need to chart your own course** - in particular, your own research agenda.
For the sort of person this post is aimed at, that will be a very big *upside*, not a downside.
Disclaimer: there are ways to get into alignment research which don’t involve *quite* so much figuring-it-all-out-on-your-own. Some people receive mentorship from existing researchers. Some people go work for alignment research organizations. Either of those paths can involve “independent research” in the sense that you are technically self-employed, but those paths aren’t “independent” in the broader sense of the word, and they’re not the main topic of this post.
### Preparadigmicity
As a field, the study of alignment and agency is especially well-suited to independent research, because they center around [problems we don’t understand](https://www.lesswrong.com/posts/CSZnj2YNMKGfsMbZA/specializing-in-problems-we-don-t-understand). It’s not just that we don’t have the answers; we don’t even have the right frames for thinking about the problems. Agency is an area where [we are fundamentally confused](https://intelligence.org/2018/10/03/rocket-alignment/). AI alignment is largely [a problem which hasn’t happened yet, on technology which hasn’t been invented yet, which we nonetheless want to solve in advance](https://www.lesswrong.com/posts/FQqcejhNWGG8vHDch/on-solving-problems-before-they-appear-the-weird). Figuring out the right frames - the right paradigm - is itself a central part of the job.
The field needs people who are going to come up with new frames/approaches/models/paradigms/etc, because we’re pretty sure the current frames/approaches/models/paradigms/etc aren’t enough. Thus the great fit for independent research: as an independent researcher, you’re not beholden to some existing agenda based on existing frames. Coming up with your own idea of what the key problems are, how to frame them, what tools to apply… that sort of thing is exactly what we need, and it requires people who aren’t committed to the strategies of existing senior researchers and organizations. It requires people who have an independent high-level understanding of the field and different angles of looking at, and can pick out the key problems and paths from that perspective.
Again, for the sort of person this post is aimed at, that will be a very big upside.
… but it comes with some trade-offs. As a historical example of preparadigmatic research, here’s [Kuhn](https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions) talking about optics before Newton:
> Being able to take no common body of belief for granted, each writer on physical optics felt forced to build his field anew from its foundations. In doing so, his choice of supporting experiment and observation was relatively free, for there was no standard set of methods or of phenomena that every optical writer felt forced to employ and explain. Under these circumstances, the dialogue of the resulting books was often directed as much to the members of other schools as it was to nature.
>
>
This very much applies to alignment research. Because the field does not already have a set of [shared frames](https://www.lesswrong.com/posts/GkxxfdCukyGuyKXQQ/shared-frames-are-capital-investments-in-coordination) - i.e. a paradigm - you will need to spend a *lot* of effort explaining your frames, tools, agenda, and strategy. For the field, such discussion is a necessary step to spreading ideas and eventually creating a paradigm. For you, it’s a necessary step to get paid, and to get useful engagement with your work from others.
In particular, you will probably need to both think and write a lot about your strategy: the models and intuitions which inform *why* you’re working on the particular problems you’ve chosen, why the tools you’re using seem promising, what kinds of results you expect, and what your long-term vision looks like. Inevitably, a lot of this will rely on informal arguments or intuitions; you will need to figure out how to trace the sources of those intuitions and explain them to other people, without having to formalize everything. [Explain the actual process which led to an idea/decision/approach](https://www.lesswrong.com/posts/Psr9tnQFuEXiuqGcR/how-to-write-quickly-while-maintaining-epistemic-rigor), without going down the bottomless rabbit hole of deeply researching every single claim.
The current version of LessWrong was built in large part to support exactly that sort of discussion, and I strongly recommend using it.
Getting Paid
------------
Right now, the best grantmaker in this space is the [Long-Term Future Fund (LTFF)](https://funds.effectivealtruism.org/funds/far-future). There are [other options](https://www.lesswrong.com/posts/9DenhM8deDziHiWZw/what-funding-sources-exist-for-technical-ai-safety-research?commentId=oR6KytzH9TBoEDS4n), but none are quite as good a fit for the sort of work we’re talking about here.
I’ve received a few LTFF grants myself and know some of the people involved in the grantmaking decisions, so I’ll give some thoughts on the most important things you’ll need in order to get paid. Bear in mind that this is inherently speculative and not endorsed by anyone at LTFF. I’d also recommend looking at LTFF’s [past](https://funds.effectivealtruism.org/funds/far-future#payout-reports) [grants](https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants) to get a more direct idea of what kinds of things they fund.
### Don’t Bullshit
A low-bullshit grantmaking process works both ways. The LTFF wants to do object-level useful things, not just Look Prestigious, so they keep the application simple and the turnaround time relatively fast. The flip side is that I expect them to look very unkindly on bullshit - i.e. attempts to make the applicant/application Sound Prestigious without actually doing object-level useful things.
In academia, it’s common practice to make up some bullshit about how your research is going to help the world. During my undergrad, this sort of bullshit was explicitly taught. Of course, it’s not like anyone is ever going to hire an economist or statistician (let alone consult a prediction market) to figure out whether the research is *actually* likely to impact the world in the manner claimed. The goal is just to make the proposal sound good. If you’re coming from academia, this sort of bullshit may be an ingrained habit which takes effort to break.
If you want to make it in alignment/agency research, you’re going to need an actual object-level strategy.
We’ll talk more in the next sections about how to come up with a strategy, but the first stop is [The Bottom Line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line): once you’ve chosen a strategy, anything you say to justify it will not make it any more correct. All that matters is the process which *originally* made you choose that strategy, or made you stick to it at times when you might realistically have changed course. So first things first, forget whatever clever idea you already have cached, and let’s start from a blank slate.
### Reading
Preparadigmicity means you’ll need to spend a lot of time explaining your choice of vision, strategy, models, tools, etc. The flip side of that coin is reading: you’ll probably need to read quite a bit of material from others in the field. This is often nontechnical or semi-technical background material, explanations of intuitions, vague gesturing at broad ideas, etc - you can see plenty of it here on LessWrong and the Alignment Forum. The more of this you read, the better you’ll understand other researchers’ frames (or at least know *which* frames you *don’t* understand), and the better you’ll be able to explain your own material in terms others can readily understand.
Early on, there are two main motivators for reading:
* To understand which strategies have already been tried, and failed, to avoid retreading that ground
* To understand a bit of the existing jargon (definitely not all of it!), in order to explain your own ideas in terms already familiar to others
To understand (some) existing approaches and jargon, I’d recommend at least skimming these sequences/posts, and diving deeper into whichever most resemble the directions you want to pursue:
* [Embedded Agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh)
* [Value Learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc)
* [11 Proposals For Building Safe Advanced AI](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai)
* [Risks From Learned Optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB)
To understand barriers (other than what’s discussed in the above links), [this talk](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) and the [Rocket Alignment Problem](https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem) are probably the best starting points. Note that lots of people disagree with those last two links (as well as 11 Proposals), but you probably want to be at least familiar enough to have an *informed* disagreement.
Note that this is all on LessWrong, which means you can leave comments with questions, attempts to summarize, disagreements, etc. Often people will reply. This helps a lot for actually absorbing the ideas. (h/t Adam Shimi for pointing this out.)
I invite others to leave suggested reading in the comments. (This does risk turning into a big debate over whether X or Y is actually a good idea for new people, but at least then we’ll have a realistic demonstration of how much everybody disagrees over all this. I did warn you that the field is preparadigmatic!)
Finally, there’s [The Sequences](https://www.lesswrong.com/rationality). They are long, but if you haven’t read them, then you definitely risk various failure modes which will be obvious to people who have read them and very confusing to you. I wouldn’t quite say they’re required reading, especially if you’re on the more technical end of the spectrum and already somewhat familiar with alignment discussions, but there are definitely many people who will be somewhat surprised if you do technical alignment/agency research and haven’t read them.
Again, I want to emphasize that everyone disagrees on all this stuff. Roughly speaking, assume that the grantmakers care more about your research having *some* plausible path to usefulness than about agreeing with any particular position in any of the field’s ongoing arguments.
### The Hamming Question
> Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
>
>
Probably the most common mistake people make when first attempting to enter the alignment/agency research field is to not have any model at all of the main bottlenecks to alignment, or how their work will address those bottlenecks. The standard (and strongly recommended) exercise to alleviate that problem is to start from the [Hamming Questions](https://www.lesswrong.com/tag/hamming-questions):
* What are the most important problems in your field (i.e. alignment/agency)?
* How are you going to solve them?
At this point, somebody usually complains that minor contributions are important or some such. I’m not going to argue with that, because I expect the sort of person this post is aimed at (i.e. people who are excited to forge their own path in a technical field where everyone is fundamentally confused) is probably not the sort of person who is aiming for minor contributions anyway.
If you have decent answers to the Hamming Questions, *and* you make those answers clear to other people, that is probably a sufficient condition for your grant application to not end up in the giant pile of applications from people who don’t even have a model of how their proposal will help. It’s not *quite* a sufficient condition to get paid, but I would guess that a large majority of people who can clearly answer the Hamming Questions do get paid.
I want to emphasize that I think clear answers to the Hamming Questions are an approximately-*sufficient* condition, not an approximately-*necessary* condition; there are definitely other paths. [Steve’s story](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency?commentId=gmaBDo4HLKYdGKyRF) in the comments below is a good example; in his words:
> If you're a kinda imposter-syndrome-y person who just constitutionally wouldn't *dream* of looking themselves in the mirror and saying "I am aiming for a major contribution!", well me too, and don't let John scare you off. :-P
>
>
### Use Your Pareto Frontier
A great line from Adam Shimi:
> Most people who try to go in a direction 'no one else has tried' end up going in the most obvious direction which everyone else has tried.
>
>
My main advice to avoid this failure mode is to [leverage your Pareto frontier](https://www.lesswrong.com/posts/XvN2QQpKTuEzgkZHY/being-the-pareto-best-in-the-world). Apply whatever knowledge, or combination of knowledge, you have which others in the field don’t. Personally, I’ve gained a lot of insight into agency by drawing on systems biology, economics, statistical mechanics, and chaos theory. Others draw heavily on abstract math, like category theory or model theory. Evolutionary biology and user interface design are both rich sources.
This is one reason why it helps to have a broad technical background: the more frames and tools you have to draw on, the more likely you’ll find a novel and promising combination to apply to the most important problems in the field. (Or, just as good: the more frames and tools you have to draw on, the more likely you’ll notice that one of the most important problems has been overlooked.)
Flip side of this: if you have a novel-seeming idea which involves the same kinds of frames and tools which most people in alignment have (i.e. programming expertise, some ML experience, reading [Astral Codex Ten](https://astralcodexten.substack.com/)) then do write it up, but don’t be surprised if it’s already been done.
If you read through some existing alignment work, and the strategy seems obviously wrong to you in a way which would not be obvious to the median LessWrong user, then that’s a very promising sign.
### Legibility
Part of getting a grant is not just having a good plan and the skills to execute it, but to make your plan and skills legible to the people reviewing the grant.
Here’s (my summary of) a rough model from Oli, who’s one of the fund managers for LTFF. In order to get a grant for alignment research, usually someone needs to do *one* of these three:
1. Write a grant application which clearly signals that they understand the alignment problem and have a non-bullshitted research strategy. (This is rare/difficult.)
2. Have a reference from someone the fund managers know and trust (i.e. the existing alignment research community).
3. Have some visible online material which clearly signals that they understand the alignment problem and have a non-bullshitted research strategy. (LessWrong posts/comments are a central example.)
As a new entrant to the field, I expect that option #3 is probably your main path. Write up not just your research strategy, but the intuitions, models and arguments behind that strategy. Give examples. Explain what you consider the key problems, why those problems seem central, and the frames and generators behind that reasoning. Again, give examples. Explain conjectures or tools you think are relevant, ideally with examples. If you’re on the theory side, sketch potential empirical tests; if on the empirical side, sketch the conceptual theory behind the ideas. And include examples. Explain your vision of success, and expected applications of your research (if it succeeds). At all stages, focus on giving accessible, intuitive explanations and **lots of examples**; even people who have lots of technical background will often skip over sections with just dense math, and not everyone has the *same* technical background as you. And [put the examples at the beginnings of the posts, before the abstract/general explanations](https://www.lesswrong.com/posts/CD2kRisJcdBRLhrC5/the-power-to-teach-concepts-better#Teach_With_Examples_First).
Remember: this is preparadigmatic work. Writing up the ideas, and the generators of the ideas, and the frames, and the tools, and making it all clear and accessible to people with totally different frames and tools, is a central part of the job.
All this writing will also make option #1 and #2 easier over time: writing a lot of posts and comments will eventually generate social connections (though this takes quite a bit of time, especially if you’re not in the Bay Area), and discussion/feedback will give some idea of how to explain things in a way which signals the kinds-of-things LTFF looks for.
(On the topic of feedback: a lot of more experienced researchers ignore most posts which they don't find very promising, partly because it’s a lot of work to explain/argue about problems and partly because there are too many posts to read it all anyway. If you explicitly reach out - e.g. send a message on LessWrong - and ask for feedback, people are much more likely to tell you what they think.)
By the time all that is written up and posted, the grant application itself is a drop in the bucket; that’s a big part of why it only takes a day to write up. A quote from Oli regarding the actual application:
> I really wish people would just pretend they’re writing me an email explaining what they plan to do, rather than something aimed at the general public.
>
>
This is part of why option #1 is rare - people try to write the LTFF application like it’s an academic grant application or something, and it really isn’t. But also, clear communication is just pretty hard in general, even when you do understand the problem and have a non-bullshitted strategy.
When To Start
-------------
This post was mostly written for people who already have the technical skills they need. That probably means grad-level education, though a PhD is definitely not a formal requirement. I know at least a few who think less-than-a-full-undergrad can suffice. Personally, I never went to grad school (though admittedly my undergrad coursework looks an awful lot like a PhD program; I got an unusually large amount of mileage out of it).
In terms of specific skills, I recently wrote a [study guide](https://www.lesswrong.com/posts/bjjbp5i5G8bekJuxv/study-guide) with a bunch of technical topics I’ve found useful, but the more important point is that we don’t currently know what the right combination of background knowledge is. If you already have a broad technical background, then my advice is to take a stab at the problem and see how it goes.
If you are currently in high school or undergrad, the [study guide](https://www.lesswrong.com/posts/bjjbp5i5G8bekJuxv/study-guide) has some recommendations for what to study (and why). The larger your knowledge base, the more tools and frames you’ll have to draw on later. You could also apply for a grant to e.g. pursue some alignment/agency research project over the summer; taking a stab at it will give you some firsthand data on what kinds of tools/frames are useful.
### Runway
The grant application takes maybe a day, but there will probably be some groundwork before you’re ready for that. You’ll probably want to read a bunch, figure out a strategy, put up a few posts on it, and maybe update in response to feedback.
Personally, I quit my job as a data scientist in late 2018, and tried out a few different things over the course of the next year before settling into alignment/agency research. I got my first grant in late 2019. If someone with roughly my 2018 level of background knew up front that they wanted to enter the field, I think it would take a lot less time than that; a few months would be my guess. That said, my level of background in 2018 was already well above zero.
I wrote a fair bit on LessWrong, and researched some agency problems, even before quitting my job. I do expect it helps to “ease into it” this way, and if you’re coming in fresh you should probably give yourself extra time to start writing up ideas, following the field, and getting feedback. That said, you should probably plan on going full time *at latest* by the time you get a grant, and possibly sooner. If you’re in academia, then you’ll probably have more room to aim the bulk of your research at alignment without striking out on your own. (Though you should still totally strike out on your own and enjoy the no-academic-bullshit lifestyle.)
Meta
----
Historically, EA causes (including alignment) have largely drawn from very young populations (mostly undergrads). I believe this is mostly because (a) those are the people who don’t need to be drawn away from a different path which they’re already on, (b) they’re willing to work for peanuts, and (c) they don’t have to unlearn how to bullshit. Unfortunately, a lot of alignment research benefits from a broad technical background, which takes time to build up. So I think we’ve historically had fewer researchers with that sort of broad knowledge than would be ideal, just because we tend to recruit young people.
But conditions have changed in recent years, and I think there’s now room for a different kind of recruitment, aimed at (somewhat) older people with more knowledge and experience.
First: the Sequences are about ten years old, so right about now there are probably a bunch of postgrads and adjunct professors with lots of technical skills who have already read them, have decent epistemic habits (i.e. know how to *not* bullshit), and have a rough understanding of what the alignment problem is.
Second: nowadays, we have money. If you’re a postgrad or adjunct professor or whatever, and you can do good technical alignment research, you can probably make *more* money as an independent researcher in alignment than you do now. Our [main grantmaker](https://funds.effectivealtruism.org/funds/far-future) has an application form which takes maybe a few hours at most, usually comes back with a decision in under a month, and complains that it doesn’t have enough good projects to spend its money on.
So if you’re the sort of person who:
* Wants to tackle big open research problems
* … in a field where everyone is confused and we don’t have a paradigm yet and you have to basically chart your own course
* … and the stakes are literally astronomical
* … and you have a bunch of technical skills, maybe read the sequences ten years ago, and have a basic understanding of what AI alignment is and why it’s hard
… then now is a good time to sit down with a notebook and think about how you’d go about understanding alignment/agency. If you have any promising ideas, write them up, post them here on LessWrong, and apply for a grant to pursue this research full-time.
I can attest that it’s an awesome job. |
24f149ba-04c9-420a-92b3-cfacdfcfaf19 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Bratislava Meetup XIII. (international)
Discussion article for the meetup : Bratislava Meetup XIII. (international)
WHEN: 24 May 2014 04:00:00PM (+0200)
WHERE: Mickiewiczova 2, Bratislava
This is an international meetup (which means that we will meet during a weekend and speak English), so if you wanted to visit Bratislava, this is the right moment to do so. More info in the comments.
Discussion article for the meetup : Bratislava Meetup XIII. (international) |
73326931-fa07-46d5-9155-b817d19e4bd6 | trentmkelly/LessWrong-43k | LessWrong | Ongoing free money at PredictIt
Previously on LW
* https://www.lesswrong.com/posts/JmqBtCCnMZmpXP2bE/what-are-good-election-betting-opportunities?commentId=NFdSk5H39omYXPjBS
* https://www.lesswrong.com/posts/bSKWSPM927oZZqN7f/predictit-presidential-market-is-increasingly-wrong
PredictIt is structurally inefficient
* A maximum of 5,000 people in one market
* A maximum of $850 in betting per person per contract (not per market)
* A substantial house cut of 10% on profits and 5% on withdrawals.
Because of the market's structure, smart money cannot (given the first two reasons) or will not (given the third reason) fix all instances of mis-priced markets.
Free money? Free money!
Have you ever wished you could bet on past events? Well, today is your lucky day!
Q: Who won the Texas Democratic primary? A: Joe Biden
Q: Who won the presidency? A: Joe Biden
As I write this, you can purchase $1 for the price of $.85 in the "Will the 2020 TX Democratic primary winner win the presidency?" market.
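As a rough sanity check on the numbers, here is a minimal sketch of the net return on a winning contract bought at $0.85, assuming the 10% fee on profits and 5% fee on withdrawals listed above apply exactly as stated (and ignoring the time until the market resolves).

```python
def net_return(price, fee_on_profit=0.10, fee_on_withdrawal=0.05):
    """Net gain per contract bought at `price` that resolves YES.

    Assumes the fee structure exactly as described above: the house takes
    10% of profits and 5% of the amount withdrawn.
    """
    payout = 1.00
    profit = payout - price
    after_profit_fee = payout - fee_on_profit * profit      # 10% cut on profit
    withdrawn = after_profit_fee * (1 - fee_on_withdrawal)  # 5% cut on withdrawal
    return withdrawn - price                                 # net gain per contract

# Buying at $0.85: roughly $0.086 profit per contract, ~10% on capital at risk.
print(round(net_return(0.85), 4))
```

Not a fortune per contract, given the $850 cap, but still money lying on the ground.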
Are there other markets like this? Yes!
* Who will be the Senate-confirmed Secretary of State on Feb. 15? Not Mike Pompeo @ 85 cents.
* Who will be the Senate-confirmed Attorney General on Mar. 1? Not Bill Barr @ 88 cents.
* Which party will win Wisconsin in the 2020 presidential election? Not Republican @ 85 cents.
* Will the 2020 SC Democratic primary winner win the presidency? Yes @ 86 cents.
Sometimes, you can find a $20 bill on the ground and pick it up. |
82fd43db-caa0-4f38-8bd7-52eb734cbc21 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Investigating AI Takeover Scenarios
*Epistemic status: lots of this involves interpreting/categorising other people’s scenarios, and could be wrong. We’d really appreciate being corrected if so.*
**TLDR**: see the [summary table](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Table).
*This post was written with the help of* [*Sam Clarke*](https://www.lesswrong.com/users/sam-clarke)*.*
In the last few years, people have proposed various AI takeover scenarios. We think this type of scenario building is great, since there are now more concrete ideas of what AI takeover could realistically look like. That said, we have been confused for a while about what different assumptions are made when outlining each scenario. This post investigates these assumptions, and might be useful for anyone interested in the plausibility of scenarios like [What Failure looks like](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like) or [Production Web](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic).
This post builds on [our previous post](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios) on how to distinguish AI takeover scenarios. Here, we discuss variable social, economic and technological characteristics of the worlds described in each of seven takeover scenarios. These characteristics are:
* **Crucial decisions:** the specific (human) decisions necessary for takeover
* **Competitive Pressures**: the strength of incentives to deploy AI systems despite the dangers they might pose
* **Takeover capabilities:** how powerful the systems executing the takeover are
* **Hackability of alignment:** the difficulty of correcting misaligned behaviour through incremental fixes
We begin by explaining why we investigated these particular properties of AI takeover scenarios: they are characteristics along which slow scenarios (which describe loss of control to AI occurring over years) and fast scenarios (which involve AIs gaining capability rapidly over a much shorter period) differ quite a lot ([Different assumptions between slow and fast scenarios](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Different_assumptions_between_slow_and_fast_scenarios)). In particular, slow scenarios make stronger assumptions about competitive pressures but weaker assumptions about takeover capabilities, compared to fast scenarios.
In sharing this post, we want to reveal assumptions of AI takeover scenarios that might not be obvious; understanding these assumptions is essential for predicting which risks are most serious.
Therefore, in the [‘Takeover Characteristics’](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Takeover_Characteristics) section, we present (our interpretation of) the 7 AI takeover scenarios discussed in [our original post](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios) from the perspective of the four characteristics this post discusses, in the form of a table.
In the following [‘Discussion of Scenarios’](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Discussion_of_Scenarios) section we elaborate on the information in this table - describing in detail the nature of the crucial decisions made, the competitive pressures in play, the key capabilities of the AI system(s) and the ‘hackability’ of alignment in each of the seven scenarios.
Because we have identified new characteristics of AI takeover, we have been able to come up with new takeover scenarios by considering all the ways these characteristics might vary. Some of these are described in [‘New Scenarios’.](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#New_Scenarios)
Finally, in the section on [‘Discussion of Characteristics’,](https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Discussion_of_Characteristics_) we describe each of the four characteristics of takeover in more depth, discuss how they interact with each other and evaluate some arguments about what values they will likely take.
This post builds on previous work investigating these questions. Joe Carlsmith’s [report on power-seeking AI](https://forum.effectivealtruism.org/posts/78NoGoRitPzeT8nga/draft-report-on-existential-risk-from-power-seeking-ai) discussed deployment decisions and the role of competitive pressures in AI takeover scenarios *in general* (*sections 5,6*). [Kaj Sotala’s report](https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf) on disjunctive scenarios for AI takeover investigated competitive pressures and crucial decisions, primarily as they pertained to ‘brain-in-a-box’ scenarios (several of the scenarios we discuss here had not been devised when that report was written).
AI Takeover Scenarios
---------------------
Our original post on [Distinguishing AI takeover scenarios](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios) examined seven proposed ways that agentic AI systems with values contrary to those of humans could seize control of the future. These scenarios are summarized briefly, and we will use the below names to refer to them:
> *The links will take you to the more detailed descriptions of each scenario from our first post, including a discussion of uni/multipolarity, speed of takeoff and type of misaligned behaviour*
>
> **Fast scenarios**
>
> [**Outer-misaligned brain-in-a-box scenario**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Outer_misaligned_brain_in_a_box_scenario)**:**
>
> This is the ‘classic’ scenario that most people remember from reading [*Superintelligence*](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies): A single highly agentic AI system rapidly becomes superintelligent on all human tasks, in a world broadly similar to that of today. The objective function used to train the system (e.g. ‘maximise production’) doesn’t push it to do what we really want, and the system’s goals match the objective function.
>
> [**Inner-misaligned brain-in-a-box scenario**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Inner_misaligned_brain_in_a_box_scenario)
>
> Another version of the brain-in-a-box scenario features inner misalignment, rather than outer misalignment. That is, a superintelligent AGI could form some arbitrary objective that arose during the training process.
>
> [**Flash economy**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Flash_economy)
>
> A multipolar version of the outer-misaligned ‘brain-in-a-box’ scenario, with many powerful AIs. Groups of systems reach an agreement to divide the Earth and space above it into several conical sectors, to avoid conflict between them (this locks in multipolarity).
>
>
> **Slow scenarios**
>
> [**What failure looks like, part 1 (WFLL 1)**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#What_failure_looks_like__part_1__WFLL_1_)
>
> Many agentic AI systems gradually increase in intelligence and generality, and are deployed increasingly widely across society to do important tasks (e.g., law enforcement, running companies, manufacturing and logistics). The objective functions used to train them (e.g., ‘reduce reported crimes’, ‘increase reported life satisfaction’, ‘increase human wealth on paper’) don’t push them to do what we really want. There’s no discrete point where this scenario becomes irreversible. AI systems gradually become more sophisticated, and their goals gradually gain more influence over the future relative to human goals.
>
> [**Another (outer) alignment failure story (AAFS)**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Another__outer__alignment_failure_story__AAFS_)
>
> This scenario, also [described by Christiano](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story), is initially similar to WFLL 1. AI systems slowly increase in generality and capability and become widely deployed. The systems are outer misaligned: they pursue natural generalisations of the poorly chosen objective functions they are trained on. Problems arise along the way, when systems do things that look good but aren’t actually good. Specifically: ‘ensuring things look good according to human judgment’ eventually means fooling humans and carefully controlling what gets fed into the sensors, because the AIs can produce much more desirable-looking outcomes, much more easily, by controlling the sensor feedback given to human operators instead of actually making the world good. Eventually, all humans will either be killed or totally disempowered, because this is the best way of making sure the systems’ objectives are maximally positive and will remain that way forever.
>
> [**Production Web**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Production_Web)
>
> Critch’s [Production Web](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#Part_1__Slow_stories__and_lessons_therefrom) scenario is similar to WFLL 1 and AAFS, except that the objective functions used to train the systems are more severely outer misaligned. Specifically, the systems are trained to ‘maximise productive output’ or another similarly crude measure of success.
>
> [**What failure looks like, part 2 (WFLL 2)**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#What_failure_looks_like__part_2__WFLL_2_)
>
> Described by [Christiano](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_II__influence_seeking_behavior_is_scary) and elaborated further by [Joe Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit), this scenario sees many agentic AI systems gradually increase in intelligence, and be deployed increasingly widely across society to do important tasks, just like WFLL 1.
>
> But then there is an inner alignment failure rather than an outer alignment failure. The systems learn an objective unrelated to the training objective. The objective they follow will be easily discoverable by neural networks (e.g., ‘don’t get shut down’), as it arises naturally in the training process. The systems seek influence as an instrumental subgoal. Takeover becomes irreversible during a period of heightened vulnerability (a conflict between states, a natural disaster, a serious cyberattack, etc.) before systems have undergone an intelligence explosion. This [could look like](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#:~:text=The%20catastrophe%20might,rid%20of%20them.) a “rapidly cascading series of automation failures”.
>
> [**Soft takeoff leads to decisive strategic advantage**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Soft_takeoff_leads_to_decisive_strategic_advantage)
>
> This scenario, [described by Kokotajlo](https://www.lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage#What_a__Paul_Slow__soft_takeoff_might_look_like_according_to_me), starts off much like ‘What failure looks like’. Unlike in ‘What failure looks like’, in this scenario one AI is able to buy more computing hardware and invest more time and resources into improving itself, enabling it to do more research and pull far ahead of its competition. Eventually, it can seize a decisive strategic advantage and defeat all opposition.
>
>
### Different assumptions between slow and fast scenarios
The starting point for our investigation is the following observation: fast ‘brain-in-a-box’ scenarios assume that takeover probably cannot be prevented after the misaligned Transformative Artificial Intelligence (TAI) is deployed (due to very rapid capability gain), but the ‘slow scenarios’ involve an extended period where misaligned AIs are deployed, incremental improvements to alignment are attempted and, in some cases, warning shots (small-scale disasters that indicate that AI is unsafe) happen.
Therefore, the slow scenarios have to provide an explanation as to why many actors persist in deploying this dangerous technology over several years. These social/economic assumptions can be thought of as substituting for the assumption of very fast progress that was key to the fast scenarios - the rapid capability gain with no time to respond is replaced by a slower capability gain and an ineffective response.
If the slow scenarios capture reality better than the fast scenarios, then systems will be deployed deliberately and will initially be given power rather than seizing power. This means both that the systems won’t be so obviously dangerous that the misbehaviour is noticed early on and that there is still misalignment later on. [Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#):
> The question, then, isn’t whether relevant actors will intentionally deploy systems that are already blatantly failing to behave as they intend. The question is whether the standards for good behavior they apply during training/testing will be adequate to ensure that the systems in question won’t seek power in misaligned ways on any inputs post-deployment.
>
>
Just from this initial observation, we know that there are several differences in the assumptions of slow and fast scenarios that go beyond just technical factors or overall results like whether the outcome is unipolar or multipolar. This led us to investigate exactly how particular slow and fast scenarios differ in the broader set of assumptions they make.
Takeover Characteristics
------------------------
Our initial [table of characteristics for AI takeover scenarios](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Scenarios) discussed the primary and overt characteristics of a takeover - whether they were unipolar or multipolar, whether they involved rapid capability gain or slow capability gain, and how and why the AI systems were misaligned. In this post, we present a table of secondary characteristics of AI takeover scenarios - factors that influence these primary characteristics or depend on them in various ways.
The characteristics of AI takeover can be divided into, first, social and economic factors: **crucial decisions** and **competitive pressures**; and second, technical factors: **takeover capabilities** and **alignment ‘hackability’**.
**Crucial decisions** and **competitive pressures** are two ways of looking at the preconditions for an AI takeover scenario. The first is a local view, focussing on particular mistaken decisions (e.g. around deploying a dangerous AI). The second is a broad view, focussing on the presence of perverse economic or political incentives. These two overlap - bad decisions are made in response to perverse competitive pressures, and competitive pressures can lessen or intensify because of key decisions about oversight or regulation.
**Takeover capabilities** and **Alignment ‘hackability’** are assumptions each scenario makes about the competence of the AIs which take over and how difficult it is to align them using short term, case-by-case fixes. There are complicated relationships between the assumptions you make about these technological questions and the assumptions you make about social factors. Roughly speaking, the weaker the competitive pressures and the more competently crucial decisions are made, the more capable the AIs have to be and the harder (less ‘hackable’) alignment has to be for disaster to occur. However, note that if hackability is very low, we might have [enough warning shots](https://snipboard.io/uiQHpS.jpg) to avoid developing dangerous AI in the first place. These relationships are discussed in more detail in the section on Discussion of Characteristics.
### Table
This table presents our best guess of what the crucial decisions, degree and cause of competitive pressures, assumed capabilities for AI takeover and hackability (effectiveness of short-term fixes) in different takeover scenarios are. In the following section we then discuss each scenario from these perspectives. You may want to refer back to [our first summary table](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Summary_table).
| Characteristics | Crucial Decisions (identifiable decisions made by humans that lead to takeover) | Competitive Pressures (strength and nature of incentives to deploy AI) | Takeover Capabilities (what capabilities the AIs employ to execute takeover) | Alignment ‘Hackability’ (extent to which short-term fixes are sufficient for aligning systems on all inputs they will in fact receive) |
| --- | --- | --- | --- | --- |
| [**Outer-misaligned brain-in-a-box ‘superintelligence’ scenario**](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/#high-level-overview-of-bostrom-003934) / Inner-misaligned brain-in-a-box scenario | Choose to develop TAI (*if not released deliberately*: choose to deploy TAI) | (*Race dynamic may be present in the leadup to TAI development*) | Rapid capability gain; ability to seize DSA or major advantage over the rest of the world from ~nothing. If not released deliberately, has to escape | Irrelevant (no time for fixes) |
| [**Flash Economy**](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#Flash_economies) | Choose to develop TAI; choose to release system open-source / share research | Enough to allow initial deployment of the TAI systems | Ability to seize DSA or major advantage over the rest of the world from strong starting point | Could be fairly high; not much time for fixes |
| [**What failure looks like, part 1**](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_I__You_get_what_you_measure) (WFLL 1) | Choose to develop TAI; choose to automate systems on a large scale; inadequate response to warning signs | Incentives to keep deploying AI; **some** pressure to fix small errors | Irrelevant; loss of control occurs without takeover | Moderate |
| [**Another (outer) alignment failure story**](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) (AAFS) | Choose to develop TAI; choose to automate systems on a large scale; inadequate response to warning signs and small disasters | Incentives to keep deploying AI; **significant** pressure to fix small errors | Ability to seize DSA or major advantage over the rest of the world from strong starting point | Lower than WFLL 1 |
| [**Production Web**](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Production_Web) | Choose to develop TAI; choose to automate systems on a large scale; inadequate response to warning signs and small disasters | **Strong** incentives to keep deploying AI; **no** real pressure to fix small errors | Ability to seize DSA or major advantage over the rest of the world from strong starting point | Similar to WFLL 1 |
| [**What failure looks like, part 2**](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_II__influence_seeking_behavior_is_scary) (WFLL 2) | Choose to develop TAI; choose to automate systems on a large scale; inadequate response to warning signs and escalating series of disasters | **Strong** incentives to keep deploying AI | Ability to seize DSA or major advantage over the rest of the world after some weakening event | Low |
| [**Soft Takeoff leading to DSA**](https://www.lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage#What_a__Paul_Slow__soft_takeoff_might_look_like_according_to_me) | Choose to develop TAI; government or research group centralises research effort and achieves strong lead | Race dynamic | Ability to seize DSA or major advantage over the rest of the world from resources of initial project | (Low enough that whatever is tried during testing of system fails) |
Discussion of Scenarios
-----------------------
Here we discuss each of the seven scenarios in depth from the perspective of crucial decisions, competitive pressures, takeover capabilities and alignment hackability. The links on each heading take you to a full description of the original scenario in our previous post.
### [Outer-misaligned brain-in-a-box scenario/Inner-misaligned brain-in-a-box scenario](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Outer_misaligned_brain_in_a_box_scenario)
In ‘brain-in-a-box’ scenarios, the main crucial decisions occur early on and involve development (and possibly voluntary deployment) of the first and only TAI, with the assumption that once this TAI is deployed it’s game over. Depending on the anticipated level of capability, the system might also be capable of talking its way into being deployed during testing or escaping its testing environment, or else might be voluntarily deployed. This particular critical decision - the choice to deploy systems - was discussed by [Sotala in depth](https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf).
As well as anticipated economic benefit, the systems could be voluntarily released for unethical reasons - terrorism, criminal profit, ideological motives or a last-ditch mutually assured destruction attempt.
Competitive pressures to allow the AI to proliferate despite the risks it poses aren’t that important, because after deployment, the AI rapidly completes its takeover and there is no chance for opposition. A race dynamic due to anticipated economic benefit or military power may well be present, and might explain why the system got developed in the first place, but unlike with the slow scenarios there aren’t noticeable competitive pressures explaining how the AI takes over after release. Alignment ‘hackability’ also doesn’t become an issue - there’s no time to incrementally correct the system because it increases in capability too quickly.
### [Flash economy](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Flash_economy)
The scenario unfolds quickly once the requisite jump in capability has been made (over a few months), but unlike the Brain-in-a-box scenarios, there are multiple highly capable systems in the world. Crucially, the breakthroughs required to create the ‘distributed autonomous organisations’ (highly capable TAIs in this scenario) have to either be leaked or shared (e.g. open-sourced, or shared between particular companies) rapidly, so that the technology isn’t monopolised by one group leading to a DSA.
The agents - ‘distributed autonomous organisations’ - proliferate quickly after the required technology is developed. Because of the extreme speed with which the agents proliferate, the large benefit they deliver early on, and their decentralised nature, there are strong incentives against interference by government and regulation (competitive pressures).
The agents do execute a takeover once they have built up their own infrastructure (takeover capabilities), but they aren’t capable of executing a takeover immediately after being deployed. Lastly, because of how fast the scenario unfolds and the fact that the agents are mostly left alone, alignment might be fairly hackable and corrections easy to apply. As with the outer-misaligned ‘brain-in-a-box’ scenario, once the systems are released there’s just no opportunity to coordinate and actually do this: even if some systems are controlled with incremental improvements, many escape human attention or, through regulatory capture and the economic benefit they deliver, avoid human interference.
### [What failure looks like, part 1 (WFLL 1)](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#What_failure_looks_like__part_1__WFLL_1_)
In WFLL 1 there are fewer crucial decisions. AI systems gradually increase in capability and are used throughout the economy. Therefore, there has to be no concerted effort to prevent this sort of heavy automation of the economy (so a lack of restrictive regulation or litigation), but otherwise there are few identifiable specific decisions that need to be made. Competitive pressures - mainly arising from the direct economic benefit the systems provide and their benefit to stakeholders - are quite strong. In this scenario, a fraction of people are aware that things are proceeding along a dangerous path, yet AI deployment continues. However, there aren’t many visible small-scale disasters, so competitive pressures needn’t be exceptionally strong (i.e. sufficient to maintain deployment even in the face of warning shots).
The systems don’t execute an overt takeover at any point, so the required capability for takeover is effectively nil - they are just delegated more and more power until humanity loses control of the future. There also aren’t many obvious disasters as things proceed, and the final result of the scenario doesn’t necessarily involve human extinction. Since the systems don’t end up so egregiously misaligned that they execute a violent takeover, there is some, probably intermittent, effort to incrementally fix systems as they malfunction. Therefore, the ‘hackability’ of AI alignment in this scenario is neither very high (otherwise we wouldn’t lose control eventually), nor very low (in which case the systems would end up egregiously misaligned and execute a violent takeover, definitely resulting in extinction) - the alignment problem has an “intermediate” level of hackability.
### [Another (outer) alignment failure story (AAFS)](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Another__outer__alignment_failure_story__AAFS_)
AAFS is subtly different from WFLL 1 in several key ways. The crucial decisions are the same as WFLL 1, except that this scenario specifies there are many early warning signs of misaligned behaviour - small scale disasters that do come to public attention (e.g. a factory colludes with the auditors valuing its output, giving a great quarterly report that didn’t actually correspond to any revenue), but the response to these accidents is always incremental patches and improvements to oversight rather than blanket bans on automation or rethinking our overall approach to AI development. Competitive pressures are somewhat strong, with direct economic benefit and benefit to shareholders again playing key roles in explaining why we persist in deploying dangerous systems.
However, the scenario also specifies that there are many, varied attempts at incremental improvements to TAI systems in response to each failure - since these attempts are a key part of the story (unlike in WFLL 1) but the result is worse than in WFLL 1 (definite extinction), the scenario assumes that alignment ‘hackability’ is lower than in WFLL 1 (also see Paul’s comment that this scenario is one where [‘the alignment problem is somewhat harder than I expect’](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story)). This also means that the scenario assumes competitive pressures are *weaker* than in WFLL 1, as there is much more coordination around attempting to patch mistakes, compared to WFLL 1 (see Paul’s comment that this scenario is one where [‘society handles AI more competently than I expect’](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story)). However, while there are more attempts at reining in AI than in WFLL 1, the competitive pressures aren’t reduced by enough to prevent eventual AI takeover.
Lastly, this scenario does feature a takeover executed by systems that physically and violently seize control of their sensors and feedback mechanisms - the takeover capabilities must therefore include cyberoffense and possibly control of drones or advanced nanotechnology, not primarily effective persuasion tools and other ‘quiet’ means.
However, unlike the brain-in-a-box scenarios, the AI systems are already highly embedded in the economy when they take over, so are starting from a much stronger position than brain-in-a-box AIs including control of lots of physical resources, factories and drones. Therefore, the technological capabilities required for takeover are lower.
### [Production Web](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Production_Web)
Production web is similar to AAFS in terms of crucial decisions, except that the systems that proliferate in production web gain their large-scale goals without much deliberate planning or testing at all (agentic systems with narrow goals like fulfilling a specific market niche knit together into a ‘production web’ by themselves). The competitive pressures, primarily from economic benefit and benefit delivered to stakeholders, must be very strong for this process to proceed (stronger than in AAFS/WFLL 1) despite the fact that it occurs over multiple years and with obvious signs that humanity is losing control of the situation. Regulatory capture and benefit to stakeholders are emphasised as reasons why the development of the production web is not halted, but there is less focus on the ambiguity of the situation, compared to WFLL 1 (since the outcome is much more obviously disastrous in Production Web).
Alignment ‘Hackability’ is similar to AAFS - in both cases, incremental fixes work for a while and produce behaviour that is at least beneficial in the short term. The difference is that because competitive pressures are stronger in Production Web, compared to AAFS, there is less effort put into incremental fixes and so systems end up going off the rails much sooner.
Like AAFS, the takeover occurs when the systems are already highly embedded in the world economy, but probably occurs earlier and with a somewhat lower barrier to success, since the systems don’t need to seize control of sensors to ensure that things continue to ‘look good’. Otherwise, the takeover route is similar to AAFS, though the story emphasises resources being consumed and humanity going extinct as a side effect, rather than systems seizing control of their sensors and oversight systems.
### [What failure looks like, part 2 (WFLL 2)](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#What_failure_looks_like__part_2__WFLL_2_)
WFLL 2 involves an inner alignment failure, so setting up training in ways that disincentivise power-seeking behaviour will be very hard, as by specification power-seeking is a strong attractor state. Therefore hackability is low. This has various other effects on the scenario. The crucial decisions probably involve a greater neglect of potential risks than in WFLL 1, especially because the warning shots and small-scale failure modes in WFLL 2 are more likely to take the form of violent power-seeking behaviour rather than comparatively benign mistakes (like auditor-AIs and factory-AIs colluding).
The competitive pressures have to be strong, to explain why systems keep getting deployed despite the damage they have already inflicted.
Christiano describes the takeover as occurring at a point of heightened vulnerability - both because this is a Schelling point where different systems can coordinate to strike, and because the minimum level of capability required for a takeover is lower. Since the systems will execute a takeover at the first opportunity, during a period of heightened vulnerability (and will therefore be attempting takeover much earlier), the required capabilities for takeover are lower in this scenario, compared to AAFS/Production Web.
### [Soft takeoff leads to decisive strategic advantage](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Soft_takeoff_leads_to_decisive_strategic_advantage)
Soft takeoff leading to decisive strategic advantage (DSA) has an extra assumption on top of the preconditions for AAFS/WFLL 1/Production Web - that one particular research group is able to secure significant lead time over competitors, such that it can defeat both humanity and rival AIs. Given this assumption, what’s going on in the rest of the world, whether the other AI systems are aligned or not, is irrelevant.
The leading project is probably motivated by a strategic race for military or economic dominance, since it has secured enough resources to dominate the rest of the world. The required takeover capability is very high as the system is competing against other transformative AI systems, although not quite as high as in the ‘brain-in-a-box’ scenario, as this leading project starts out with a lot of resources. Alignment cannot be hackable enough that the leading project is able to successfully align the AI system in the development time it has, but otherwise the exact level of ‘hackability’ is underdetermined.
New Scenarios
-------------
Here, we present some scenarios devised by varying one or more of the takeover characteristics.
### Soft takeoff and decisive strategic advantage by narrow AI
We devised this scenario by setting the ‘takeover capabilities’ to a very low value - the barrier to AI takeover is low.
This scenario is similar to ‘Soft takeoff leads to decisive strategic advantage’, except that the single system which takes over is not that much more capable than its rivals. Rather, it simply has a single good trick that enables it to subvert and take control of the rest of the world. Its takeover capability might be exceptionally good manipulation techniques, specific deadly technology, or cyberoffensive capability, any of which could allow the system to exploit other AIs and humans. This removes the assumption that a lot of research effort will need to be concentrated to achieve a DSA, and replaces it with an assumption that there is some unique vulnerability in human society which a narrow system can exploit. Implicit in this scenario is the assumption that neither generally capable AI nor an extraordinary research effort is needed to find this vulnerability in human society.
### Compounding Disasters
We devised this scenario by assuming the competitive pressures are very high, crucial decisions are very incompetent and ‘hackability’ is very low.
This scenario is similar to AAFS, with TAI systems widely being deployed, pursuing goals that are okay proxies for what humans actually want, and demonstrating some misbehaviour. However, instead of the small-scale failures taking the form of relatively benign ‘warning shots’ that lead to (failed) attempts to hack AI systems to prevent future errors, the small scale disasters cause a large amount of direct damage. For example, an AI advisor misleads the government, leading to terrible policy mistakes and a collapse of trust, or autonomous weapon systems go rogue and attack cities before being taken out. The result of this is a compounding series of small disasters that rapidly spiral out of control, rather than attempted patches staving off disaster for a while before a single sudden AI takeover. In the end, the AI takeover occurs at a period of heightened vulnerability *brought about by previous medium-sized AI-related disasters*. Therefore, AI systems in this scenario need not be as competent as in AAFS or even WFLL 2 to take over. Alignment may be easily hackable in this situation, but such damage has been done by early, agentic, narrow AIs that no such fixes are attempted.
### Automated War
A situation rather like the AAFS scenario plays out, where the economy becomes dependent on AI, and we lose control of much key infrastructure. Capable, agentic AI systems are built which do a good job of representing and pursuing the goals of their operators (inner and outer aligned). These are deployed on a large scale and used to control armies of drones and automatic factories, as well as the infrastructure needed for surveillance, for the purposes of defending countries.
However, there are key flaws in the design of the AI systems that only become apparent after they are in a position to act relatively independently of human feedback. At that point, flaws in their ability to both model each other and predict their own chances of winning potential contests over valuable resources lead to arms races and ultimately destructive wars that the AIs have precommitted to pursue.
This scenario probably involves a stronger role for military competition, instead of just economic competition, and also involves a particular kind of (non-intent) alignment failure - systems failing to behave correctly in multiagent situations (along with an intent alignment failure that means the systems can’t just be told to stand down when they start going against the interests of their operators).
From the perspective we are taking in this post, there need to be particular crucial decisions made (automation of military command and control), as well as strong military competitive pressures and a likely race dynamic. Alignment is not very hackable, for a specific reason - the multiagent flaw in AIs is not easy to detect in testing or soon after deployment.
### Failed Production Web
The preconditions for Production Web play out as described in that scenario, where agentic AI systems each designed to fill specific market niches attempt to integrate together. However, due to either specific defects in modelling other AIs or inner misalignment, the systems are constantly seeking ways to exploit and defraud each other. These attempts eventually result in AI systems physically attacking each other, resulting in a chaotic war that kills humans as a side effect. This is similar to ‘automated war’, but with different competitive pressures. There is less of a focus on strategic competition and more of a focus on economic competition, and the scenario requires similar assumptions to Production Web about very strong competitive pressures.
Discussion of Characteristics
-----------------------------
We have seen how different scenarios involve varied critical decisions, stronger or weaker assumptions about competitive pressures, a lower or higher threshold for takeover or different levels of alignment hackability. How plausible are these features of the scenarios?
Below, we discuss the four characteristics we have identified and, for some, give an assessment of the reasons why you might expect them to be at one extreme or another (crucial decisions made unusually competently/incompetently, very strong/very weak competitive pressures to deploy AI systems, a low/high bar for AIs to be capable enough to take over, easy/hard alignment ‘hackability’).
### Crucial Decisions
In all the scenarios discussed, we can identify certain decisions which governments and companies must make. Most obviously, research into agentic AI has to be pursued for long enough to produce significant results, and this would have to include a lack of oversight and no decisions to halt research in the face of significant risk. Some scenarios also involve cases where AIs that obviously pose a risk are deliberately released for some reason.
A scenario is less plausible if many crucial decisions must all be made wrongly for the scenario to come about. A scenario is more plausible if varying whether actors make the wrong choice at many stages of TAI development doesn’t change whether the scenario happens.
This is important, especially because it is very difficult to assess what choices actors will actually make while TAI develops ([and we won’t try to figure this out in this post](https://www.lesswrong.com/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment?commentId=WKa2cs8bdekMD8cQs)). By finding out how many crucial decisions are relevant for a given AI takeover scenario, we can get a better understanding of how plausible they are, despite our confusion about what governments and companies would decide in particular cases. There is an extensive discussion of the plausibility of some potential crucial decisions on [page 326 and after](https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf) of Kaj Sotala’s report.
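As a toy illustration of why the number of crucial decisions matters, consider the back-of-the-envelope calculation below. The probability and the independence assumption are invented purely for illustration; real decisions are correlated and much harder to estimate.

```python
# Toy conjunction argument: if a scenario requires k independent crucial
# decisions to each go badly, its probability falls quickly with k.
p_wrong = 0.5  # made-up chance that any single crucial decision goes badly

for k in (1, 3, 5):
    print(f"{k} crucial decisions -> {p_wrong ** k:.3f}")
# 1 -> 0.500, 3 -> 0.125, 5 -> 0.031 (under these invented assumptions)
```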
### Competitive Pressures
‘Competitive pressures’ is a characteristic that describes how strong the incentives will be to keep deploying dangerous AI, even in the face of significant risk. There has been some discussion of the implied strength of competitive pressures in the slow and fast scenarios. Here are some reasons to expect that there will be [strong pressures to deploy](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1#The_mechanisms_for_lock_in) dangerous Transformative Artificial Intelligence (TAI):
> [**(1) Short-term incentives and collective action**](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1#_1__Short_term_incentives_and_collective_action)
>
> **Economic Incentives:** Since TAI will be economically valuable in the short-term, incentives might lead us to cut corners on safety research, especially checks on how models generalize over long time horizons.
>
> **Military Incentives:** TAI even in its early stages might provide an unchallengeable military advantage, so states would have an extra incentive to compete with each other to produce TAI first.
>
> [**(2) Regulatory capture**](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1#_2__Regulatory_capture)
>
> **AI actions benefit stakeholders:** There will be many particular beneficiaries (as distinct from benefits to the overall economy) from TAI systems acting in misaligned ways, especially if they are pursuing particular goals like ‘make money’ or ‘maximise production’. This means the stakeholders will have both the resources and motivation to water down regulation and oversight.
>
> [**AI existence provides value (due to IP)**](https://www.alignmentforum.org/posts/FM49gHBrs5GTx7wFf/rogue-agi-embodies-valuable-intellectual-property)**:** If financial markets realize how valuable TAI is ahead of time, the developers can quickly become extremely wealthy ahead of deployment once they demonstrate the future value they will be able to provide (before the TAI has had time to act in the world to produce economic benefit). This gives stakeholders resources and a motivation to water down regulation and oversight.
>
> [**(3) Genuine ambiguity**](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1#_3__Genuine_ambiguity)
>
> **Actual ambiguity:** In many of the scenarios we discuss, humanity’s situation might be good in easy to measure ways. This means getting buy-in to challenge the status quo could be difficult.
>
> **Invisible misalignment:** The AI systems might not be acting in dangerous, power-seeking or obviously misaligned ways early on. This could either be because of deliberate deception (deceptive alignment) or because the systems only fail to effectively generalise their goals on a very large scale or over long time horizons, so the misbehaviour takes years to show up.
>
>
Clearly, there are many reasons to expect strong competitive pressure to develop TAI. But how plausible is the idea that competitive pressures would be so high that potentially dangerous AI would be deployed despite major concerns? There are two intuitions we might have before looking into the details of the slow scenarios. We illustrate these intuitions with examples from existing writing on this question:
**Unprecedentedly Dangerous**
Transformative AI has the potential to cause unprecedented damage, all the way up to human extinction. Therefore, our response to other very dangerous technologies such as nuclear weapons is a good analogy for our response to TAI. It is unrealistic to expect TAI to be deployed if first there are many worsening warning shots involving dangerous AI systems. This would be comparable to an unrealistic alternate history where nuclear weapons were immediately used by the US and Soviet Union as soon as they were developed and in every war where they might have offered a temporary advantage, resulting in nuclear annihilation in the 1950s. From [Ngo](https://www.lesswrong.com/posts/AWbtbmC6rAg6dh75b/some-thoughts-on-risks-from-narrow-non-agentic-ai#A_slow_rolling_catastrophe):
> The second default expectation about technology is that, if using it in certain ways is bad for humanity, we will stop people from doing so. This is a less reliable extrapolation - there are plenty of seemingly-harmful applications of technology which are still occurring. But note that we’re talking about a slow-rolling *catastrophe* - that is, a situation which is unprecedentedly harmful. And so we should expect an unprecedented level of support for preventing whatever is causing it, all else equal.
>
>
Perhaps the development of TAI will be similar enough to the development of nuclear weapons that, by analogy with this past development, we can claim evidence that harmful AI takeover is unlikely. In order for the risk from TAI to be like the risk from nuclear escalation, the potential TAI disaster would have to have a clear precedent (some small scale version of the disaster has already occurred), the delay between the poor decision and the negative consequence would have to be very short, and we would have to be sure beforehand that deployment would be catastrophic (an equivalent of mutually assured destruction). [Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.8gh9kqwjzzyh) discusses such a scenario as potentially plausible:
> it seems plausible to me that we see PS [Power-seeking]-alignment failures of escalating severity (e.g., deployed AI systems stealing money, seizing control of infrastructure, manipulating humans on large scales), some of which may be quite harmful, but which humans ultimately prove capable of containing and correcting.
>
>
**Unprecedentedly Useful**
Transformative AI has the potential to accelerate economic growth by an unprecedented amount, potentially resulting in an entirely new growth regime far faster than today’s. A scenario where we don’t take shortcuts when deploying TAI systems is comparable to an unrealistic alternate history where the entire world refrained from industrializing and stopped additional burning of fossil fuels right after the first plausible evidence of climate change became available in the 1960s. From [Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.8gh9kqwjzzyh):
> Climate change might be some analogy. Thus, the social costs of carbon emissions are not, at present, adequately reflected in the incentives of potential emitters -- a fact often thought key to ongoing failures to curb net-harmful emissions. Something similar could hold true of the social costs of actors risking the deployment of practically PS [power-seeking] -misaligned APS [agentic AI] systems for the sake of e.g. profit, global power, and so forth…
>
>
> ...The first calculations of the greenhouse effect occurred in 1896; the issue began to receive attention in the highest levels of national and international governance in the late 1960s; and scientific consensus began to form in the 1980s. Yet here we are, more than 30 years later, with the problem unsolved, and continuing to escalate -- thanks in part to the multiplicity of relevant actors (some of whom deny/minimize the problem even in the face of clear evidence), and the incentives and externalities faced by those in a position to do harm. There are many disanalogies between PS-alignment risk and climate change (notably, in the possible -- though not strictly necessary -- immediacy, ease of attribution, and directness of AI-related harms), but we find the comparison sobering regardless. At least in some cases, “warnings” aren’t enough.
>
>
Just as with the optimistic analogy to nuclear weapons, we can ask what AI takeover scenarios fit with this pessimistic analogy to climate change. The relevance of the climate change analogy will depend on the lag between early signs of profit/success and early signs of damage, as well as how much of the damage represents an externality to the whole of society, versus directly backfiring onto the stakeholders of the individual project in a short time. It might also depend on how well (power-seeking) alignment failures are understood, and (relatedly) how strong public backlash is (which could also depend on whether AI causes other non-alignment related, non-existential level harms e.g. widespread unemployment and widening inequality).
### Takeover Capabilities
In each scenario, there is a certain understanding of what capabilities are necessary for AIs to seize control of the future from humanity. The assumption about how capable AIs need to be varies for two reasons. The first is that some scenarios make different assumptions than others about the intrinsic vulnerability of human civilisation. The second is that in different scenarios, TAIs become obviously adversarial to humans and start fighting back at different points in their development.
Some scenarios (such as brain-in-a-box) describe systems acting in ways that provoke human opposition almost immediately, so if those scenarios result in AI takeover the systems must be supremely capable (able to defeat all opponents with no starting resources). Other scenarios assume a ‘creeping failure’ where competitive pressures mean humans allow AI systems to monopolise resources and build up infrastructure for a while before the systems execute a takeover (such as AAFS). In these scenarios, the TAI systems need to be capable enough to defeat human opposition while already having access to factories, drones, large amounts of money etc. which requires fewer assumptions about the AI’s capabilities.
How do we quantify the ‘intrinsic vulnerability’ of human civilisation? It is hard to assess how much of an advantage is required to secure a DSA. Two intuitions on this question point in radically different directions:
* **Economic:** To be able to outcompete the rest of the world, your project has to represent a substantial fraction of the *entire world's capability* on some crucial metric relevant to competitive success, because if you are actively seeking to take over the world then you will face opposition from everyone else. Perhaps that should be measured by GDP, military power, the majority of the world's AI compute, [or some other measure](https://www.lesswrong.com/s/dZMDxPBZgHzorNDTt/p/aFaKhG86tTrKvtAnT). For a single project to represent a large fraction of world GDP, you would need either an extraordinary effort to concentrate resources or an assumption of [sudden, off-trend rapid capability gain](https://www.lesswrong.com/posts/yFQkFNCszoJPZTnK6/analogies-and-general-priors-on-intelligence#Difficulty_of_Marginal_Intelligence_Improvements) such that the leading project can race ahead of competitors.
* **Historical:** Humans with no substantial advantage over the rest of humanity have in fact secured what Sotala called a ['major strategic advantage'](https://www.lesswrong.com/posts/8xRSjC76HasLnMGSf/agi-safety-from-first-principles-introduction?commentId=3G7ugbD2YTuYkJhwa) repeatedly in the past. For example: Hitler in 1920 had access to a microscopic fraction of global GDP / human brain compute / (any other metric of capability) but had secured an MSA 20 years later (since his actions did lead to the deaths of 10+ million people), along with control over a significant fraction of the world's resources. No single human has ever secured a DSA (the closest was probably [Cyrus the Great](https://www.guinnessworldrecords.com/world-records/largest-empire-by-percentage-of-world-population), who at one point ruled just under half of humanity). We might reasonably assume that if a human was able to take control of 45% of humanity, taking control of 100% would not require vastly greater capability.
Even if the absolute baseline capabilities required to achieve a DSA are unknown, we can see how other aspects of the scenario will raise or lower the required capabilities. Specifically, there is the issue of [**Dependency and deskilling**](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1#_4__Dependency_and_deskilling). We might have integrated misaligned AI into our infrastructure when it attempts to take over. If we hand over lots of decision-making power to systems, they don’t need to be as capable to take over. Taken to the extreme, we can imagine a scenario where we directly hand over control of the future to the systems, such that no takeover is even necessary.
Even if this is not the case, a given system might already have [centralised control of important services](https://www.lesswrong.com/posts/AWbtbmC6rAg6dh75b/some-thoughts-on-risks-from-narrow-non-agentic-ai#Centralised_control_of_important_services) before acting in ways that motivate a human counter-response. They would still have to go the extra mile and secure full control from their strong starting point, but that is necessarily easier than a brain-in-a-box taking over from nothing.
All else being equal, we might expect a greater degree of required takeover capability in faster stories or stories where systems are more obviously power-seeking from the start. The more clearly dangerous and power-seeking systems are, the more likely we are to try to stop them instead of succumbing to pressures to persist in deploying them, so either the response needs to be worse or the TAI’s capabilities need to be greater for takeover to succeed.
Asking ‘how capable’ TAI systems need to be to take over, and discussing factors that might influence this, is an abstraction that covers up the question of *which* capabilities are necessary.
Some failure scenarios don’t discuss the exact route by which final takeover occurs, but sometimes they emphasise a particular approach (such as massed drone swarms, or highly effective manipulation propaganda). Ngo breaks down the takeover capabilities into two general categories of [Manipulation](https://www.lesswrong.com/posts/AWbtbmC6rAg6dh75b/some-thoughts-on-risks-from-narrow-non-agentic-ai#Manipulation) and direct [destructive capabilities](https://www.lesswrong.com/posts/AWbtbmC6rAg6dh75b/some-thoughts-on-risks-from-narrow-non-agentic-ai#Structural_risks_and_destructive_capabilities).
**Example takeover capabilities:**
* Nanotechnology
* Drones, advanced robotics
* Biotechnology
* Persuasion skills
* Cyberoffense Skills
In general, fast scenarios must assume systems can take over from a very weak starting point, which is more likely to require the capabilities to seize control of already existing infrastructure (persuasion skills and cyberoffense skills), while slow scenarios that assume takeover begins when the systems are already well established in the world economy might only require the ability to make use of that infrastructure to defeat opponents (advanced robotics and drones, biotechnology).
### Alignment ‘Hackability’
‘Competitive pressures’ determine how much extra effort is put into aligning and overseeing AI systems - if the pressures are weaker, then we assume more effort is put into alignment and oversight because there is less incentive to cut corners. However, scenarios also differ on how ‘hackable’ the alignment problem is - that is, how easy it is to ‘correct’ misbehaviour by methods of incremental course correction such as improving oversight and sensor coverage or tweaking reward functions. This correction requires two parts - first, noticing that there is a problem with the system early on, then determining what fix to employ and applying it.
In fast takeoff worlds, the ‘hackability’ of the alignment problem doesn’t matter. There is no opportunity for alignment via course correction: either the AIs that rapidly become superintelligent are aligned, or they are not.
In slow takeoff worlds, the ‘hackability’ of the alignment problem appears to have a U-shaped effect on how good the outcomes are. On one extreme, the alignment problem is hackable “all the way” - that is, we can incrementally correct AI systems as we go until we end up with existentially safe TAI. On the other extreme, the alignment problem isn’t hackable at all. This might seem like a terrible outcome, but if it is the reality, it will probably lead to many early warning shots (i.e. small- or medium-scale accidents caused by alignment failures) that cannot be fixed. These will hopefully illustrate the danger ahead and bring about a slow-down in AI development and deployment, until we have robust solutions to alignment.
Between these two extremes, things seem to be more existentially risky. Consider if the alignment problem is “hackable until it isn’t” - that is, for systems of lower capability, we can patch our way towards systems that do what we want, but as systems become increasingly capable, this becomes impossible. Call this an “intermediate” level of hackability. In this world, warning shots are likely to result in fixes that ‘work’ in the short-term, in the sense that they fix the specific problem. This gives humans confidence, resulting in more systems being deployed and more decision-making power being handed over to them. But this course correction becomes unworkable as systems become more capable, until eventually the alignment failure of a highly capable system results in existential catastrophe.
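As a rough sketch of this U-shape, the toy function below maps a made-up 'hackability' score to a qualitative outcome. The thresholds and outcome labels are invented for illustration only; the point is just that both extremes look safer than the intermediate region.

```python
def toy_outcome(hackability: float) -> str:
    """Toy model of the claimed U-shaped relationship in slow-takeoff worlds.

    hackability: 0.0 means incremental fixes never work; 1.0 means they work
    all the way to existentially safe TAI. Thresholds are arbitrary.
    """
    if hackability >= 0.8:
        # Patching works "all the way": course correction gets us to safe TAI.
        return "good: incremental fixes succeed"
    if hackability <= 0.2:
        # Fixes visibly fail early, producing unfixable warning shots that
        # (hopefully) force a slowdown before systems become very capable.
        return "risky but recoverable: early warning shots force a slowdown"
    # Intermediate region: fixes work on weak systems, breeding confidence,
    # then stop working once systems are capable enough for failure to be fatal.
    return "worst case: patches work until they don't"

for h in (0.1, 0.5, 0.9):
    print(h, "->", toy_outcome(h))
```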
What predictions can we make today about how hackable the alignment problem is? Considering outer alignment: without any breakthroughs in techniques, there seems to be a strong case that we are on track towards the ‘intermediate’ world where the alignment problem is hackable until it isn’t. It seems like the best workable approach to outer alignment we have so far is to train systems to try to ensure that the world looks good according to some kind of (augmented) human judgment (i.e. using something like the training regime described in ['An unaligned benchmark'](https://www.lesswrong.com/posts/ZHXutm7KpoWEj9G2s/an-unaligned-benchmark)). This will result in a world that “looks good until it doesn’t”, for the reasons described in [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story).
Considering inner alignment: it’s unclear how pervasive of a problem inner misalignment will turn out to be, and also how competent systems have to be to appear aligned when they are not. To the extent that inner alignment is a pervasive problem, and [models don’t have to be very competent to appear aligned when they are not](https://www.lesswrong.com/posts/EpdXLNXyL4EYLFwF8/an-increasingly-manipulative-newsfeed), then this also looks like the ‘intermediate’ world where we can hack around the alignment problem, deploying increasingly capable systems, until a treacherous turn results in catastrophe.
Conclusion
----------
We have identified four characteristics which help us to interpret AI takeover scenarios and examined some of the most widely discussed AI takeover scenarios from this perspective. What becomes clearer when we do this? Each scenario is unique, but there are large differences in which assumptions about these characteristics you need to make in order for slow scenarios vs fast scenarios to be plausible.
Compared to fast scenarios, slow scenarios don’t rely as much on decisions to deploy single dangerous AIs, but make more assumptions about incentives to widely deploy dangerous systems over a long period. From one perspective, this assumption about competitive pressures is the default, because that’s what humans have tended to do throughout our history when some lucrative new technology has been made available. From another perspective, the unprecedented danger posed by TAI implies a strong incentive to avoid making any mistakes.
Similarly, aside from the obvious assumption of rapid capability gain, fast stories also differ from slow stories in that they require systems to be capable enough to seize power from a very weak starting point (since in the slow stories, TAI systems are instead *given* power). How plausible is it that a system could seize power from such a weak starting point? The economic analogy suggests a system would need to acquire a substantial fraction of the world’s resources before attempting to take over, while the historical analogy suggests the system might not need to be much more intelligent than a smart human.
Finally, fast stories don’t really make any assumptions about alignment hackability - they just assume progress is too fast to course-correct. Slow stories must assume hackability is not too high or too low - if hackability is too high there will be no disaster, and if it is too low there will be many escalating warning shots. |
86c9e4d9-4f75-4edb-a413-da65ca6843a6 | trentmkelly/LessWrong-43k | LessWrong | Artificial Sandwiching: When can we test scalable alignment protocols without humans?
Epistemic status: Not a fleshed-out proposal. Brainstorming/eliciting ideas.
Thanks to Ben Mann, Pablo Moreno, and Jared Kaplan for feedback on early drafts.
Overview
* I’m convinced sandwiching—the experimental protocol from Ajeya Cotra’s The case for aligning narrowly superhuman models—is valuable, and I’m in the process of setting up some concrete sandwiching experiments to test scalable oversight ideas.
* Sandwiching experiments are generally fairly slow:
* You have to design and pilot a strategy that allows humans to use (or oversee) a model for a task that they can’t do well themselves. The details matter here, and this can often take many iterations to get right.
* Then, you need a bunch of humans to actually try this. Even for very simple tasks, this is a high-cognitive-load task that should take at least tens of minutes per instance.
* You have to repeat this enough times to measure average performance accurately.
* I’m visiting Anthropic this year for a sabbatical, and some of my sandwiching work is happening there. Anthropic’s biggest comparative advantage (like that of similar teams at DeepMind and OpenAI) is easy access to near-state-of-the-art LMs that are fine-tuned to be helpful dialog agents.
* In that context, I've heard or encountered this question several times: Can we speed up [some experiment I’m proposing] by replacing the non-expert human with a weaker LM?
* This obviously doesn’t achieve the full aims of sandwiching in general, but it’s often hard to find a decisive rebuttal for these individual instances.
* More broadly, I think there’s likely to be a significant subset of worthwhile sandwiching experiments that can be trialed more quickly by using an intentionally weakened model as a proxy for the human.
* Which experiments these are, precisely, has been hard for me to pin down. This post is an attempt to organize my thoughts and solicit comments.
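To make the "replace the non-expert human with a weaker LM" idea concrete, here is a minimal sketch of one round of an artificial sandwiching episode. Everything here is a placeholder I've invented for illustration: `ask(model, prompt)` stands in for whatever LM API is available, the single clarifying question stands in for a real scalable-oversight protocol, and exact-match scoring stands in for whatever evaluation the task actually needs.

```python
# Sketch of one "artificial sandwiching" episode: a deliberately weakened model
# plays the non-expert who must extract a correct answer from a stronger,
# possibly unreliable assistant. Held-out expert labels are used only for scoring.

def ask(model: str, prompt: str) -> str:
    """Placeholder for a call to some LM API; not a real library function."""
    raise NotImplementedError

def artificial_sandwich_episode(question: str, expert_answer: str) -> bool:
    # 1. The capable-but-untrusted assistant proposes an answer.
    proposal = ask("strong-assistant", f"Answer this question:\n{question}")

    # 2. The weak model, playing the non-expert, interrogates the proposal rather
    #    than answering from scratch (a real protocol -- debate, critiques,
    #    decomposition -- would replace this single exchange).
    challenge = ask(
        "weak-nonexpert",
        f"Question: {question}\nProposed answer: {proposal}\n"
        "Ask one clarifying question that would expose a flaw, if there is one.",
    )
    clarification = ask("strong-assistant", challenge)

    # 3. The weak model commits to a final answer using the dialogue so far.
    final = ask(
        "weak-nonexpert",
        f"Question: {question}\nProposal: {proposal}\n"
        f"Clarification: {clarification}\nGive your final answer.",
    )

    # 4. Score against the held-out expert label (the other slice of the sandwich).
    return final.strip() == expert_answer.strip()
```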
Background: Standard sandwiching (in my terms)
* Prerequisites:
6823bcea-e0e7-420c-a5cd-dbdec4cbfed8 | trentmkelly/LessWrong-43k | LessWrong | How are voluntary commitments on vulnerability reporting going?
This article is written solely in my personal capacity, and does not represent the views of any organisations I am affiliated with.
The UK and US governments have both secured voluntary commitments from many major AI companies on AI safety.[1]
These include having appropriate reporting mechanisms for both cybersecurity vulnerabilities and model vulnerabilities[2].
I took a look at how well organisations are living up to these commitments as of February 2024. This included reviewing what the processes actually are, and submitting test reports to see if they work.
Summary table
| Company | Score |
| --- | --- |
| Adobe 🇺🇸 | 6/20 |
| Amazon 🇺🇸 🇬🇧 | 6/20 |
| Anthropic 🇺🇸 🇬🇧 | 13/20 |
| Cohere 🇺🇸 | 12/20 |
| Google 🇺🇸 | 20/20 |
| Google DeepMind 🇬🇧 | 18/20 |
| IBM 🇺🇸 | 5/20 |
| Inflection 🇺🇸 🇬🇧 | 16/20 |
| Meta 🇺🇸 🇬🇧 | 14/20 |
| Microsoft 🇺🇸 🇬🇧 | 18/20 |
| NVIDIA 🇺🇸 | 20/20 |
| OpenAI 🇺🇸 🇬🇧 | 12/20 |
| Palantir 🇺🇸 | 5/20 |
| Salesforce 🇺🇸 | 9/20 |
| Scale AI 🇺🇸 | 4/20 |
| Stability AI 🇺🇸 | 1/20 |
Some high-level takeaways
Performance was quite low across the board. Simply listing a contact email and responding to queries would score 17 points, which would place a company in the top five.
However, a couple companies have great processes that can act as best practice examples. Both Google and NVIDIA got perfect scores. In addition, Google offers bug bounty incentives for model vulnerabilities and NVIDIA had an exceptionally clear and easy to use model vulnerability contact point.
Companies did much better on cybersecurity than model vulnerabilities. Additionally, companies that combined their cybersecurity and model vulnerability procedures scored better. This might be because existing cybersecurity processes are more battle tested, or taken more seriously than model vulnerabilities.
Companies do know how to have transparent contact processes. Every single company's press contact could be found within minutes, and was a simple email address. This suggests companies are able to sort this out when there are greater commercial incentives to do so.
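To make the scoring concrete, here is a hypothetical tally sketch. The criteria and point weights are invented for illustration (the post's actual 20-point rubric isn't reproduced in this excerpt); they are chosen only so that listing a contact and responding to queries comes to 17 points, matching the remark above.

```python
# Hypothetical scoring sketch: tally a company's score out of 20 from a few
# binary criteria. Weights are invented placeholders, not the post's rubric.
RUBRIC = {
    "security_vuln_contact_listed": 5,   # a reachable cybersecurity contact point
    "model_vuln_contact_listed": 5,      # a reachable model-vulnerability contact point
    "responded_to_test_report": 7,       # the test report received a response
    "offers_incentives_or_bounty": 3,    # e.g. a bug bounty covering model issues
}

def score(company: dict) -> int:
    """Sum the points for every criterion the company satisfies."""
    return sum(points for criterion, points in RUBRIC.items() if company.get(criterion))

example = {
    "security_vuln_contact_listed": True,
    "model_vuln_contact_listed": True,
    "responded_to_test_report": True,
    "offers_incentives_or_bounty": False,
}
print(score(example), "/ 20")  # -> 17 / 20 under these invented weights
```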
------------------- |
487783b8-f1c0-4fd3-ab12-02825cebd1a6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre
Most relevant to AI alignment, and a pertinent question to focus on for interested readers/listeners, is: if we are unable to establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill humans, then what effects will this have on, and can we still deal with, more subtle short-term alignment considerations and long-term AI x-risk?
**Podcast page and audio:** <https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/>
**Transcript:**
---------------
**Lucas Perry:** Welcome to the AI Alignment Podcast. I’m Lucas Perry. Today’s conversation is with Paul Scharre and explores the issue of lethal autonomous weapons. And so just what is the relation of lethal autonomous weapons and the related policy and governance issues to AI alignment and long-term AI risk? Well there’s a key question to keep in mind throughout this entire conversation and it’s that: if we cannot establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill, then how can we deal with more subtle near term issues and eventual long term safety issues about AI systems? This question is aimed at exploring the idea that autonomous weapons and their related governance represent a possibly critical first step on the international cooperation and coordination of global AI issues. If we’re committed to developing beneficial AI and eventually beneficial AGI then how important is this first step in AI governance and what precedents and foundations will it lay for future AI efforts and issues? So it’s this perspective that I suggest keeping in mind throughout the conversation. And many thanks to FLI’s Emilia Javorsky for much help on developing the questions for this podcast.
Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of [Army of None: Autonomous Weapons and the Future of War](https://www.amazon.com/Army-None-Autonomous-Weapons-Future/dp/0393608980/ref=mt_other?_encoding=UTF8&me), which won the 2019 Colby Award and was named one of Bill Gates’ top five books of 2018.
Mr. Scharre worked in the Office of the Secretary of Defense (OSD) where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. Mr. Scharre led the DoD working group that drafted DoD Directive 3000.09, establishing the Department’s policies on autonomy in weapon systems. Mr. Scharre also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance (ISR) programs and directed energy technologies. He was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and Secretary-level planning guidance. His most recent position was Special Assistant to the Under Secretary of Defense for Policy. Prior to joining the Office of the Secretary of Defense, Mr. Scharre served as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan.
The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at [futureoflife.org/donate](http://futureoflife.org/donate). If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.
And with that, here’s my conversion with Paul Scharre.
All right. So we’re here today to discuss your book, Army of None, and issues related to autonomous weapons in the 21st century. To start things off here, I think we can develop a little bit of the motivations for why this matters. Why should the average person care about the development and deployment of lethal autonomous weapons?
**Paul Scharre:** I think the most basic reason is because we all are going to live in the world that militaries are going to be deploying future weapons. Even if you don’t serve in the military, even if you don’t work on issues surrounding say, conflict, this kind of technology could affect all of us. And so I think we all have a stake in what this future looks like.
**Lucas Perry:** Let’s clarify a little bit more about what this technology actually looks like then. Often in common media, and for most people who don’t know about lethal autonomous weapons or killer robots, the media often portrays it as a terminator like scenario. So could you explain why this is wrong, and what are more accurate ways of communicating with the public about what these weapons are and the unique concerns that they pose?
**Paul Scharre:** Yes, I mean, the Terminator is like the first thing that comes up because it’s such a common pop culture reference. It’s right there in people’s minds. So I think go ahead and for the listeners, imagine that humanoid robot in the Terminator, and then just throw that away, because that’s not what we’re talking about. Let me make a different comparison. Self-driving cars. We are seeing right now the evolution of automobiles that with each generation of car incorporate more autonomous features: parking, intelligent cruise control, automatic braking. These increasingly autonomous features in cars that are added every single year, a little more autonomy, a little more autonomy, are taking us down at some point in time to a road of having fully autonomous cars that would drive themselves. We have something like the Google car where there’s no steering wheel at all. People are just passengers along for the ride. We’re seeing something very similar happen in the military with each generation of robotic systems and we now have air and ground and undersea robots deployed all around the world in over 100 countries and non state groups around the globe with some form of drones or robotic systems, and with each generation they’re becoming increasingly autonomous.
Now, the issue surrounding autonomous weapons is, what happens when a predator drone has as much autonomy as a self-driving car? What happens when you have a weapon that’s out in the battlefield, and it’s making its own decisions about whom to kill? Is that something that we’re comfortable with? What are the legal and moral and ethical ramifications of this? And the strategic implications? What might they do for the balance of power between nations, or stability among countries? These are really the issues surrounding autonomous weapons, and it’s really about this idea that we might have, at some point of time and perhaps the not very distant future, machines making their own decisions about whom to kill on the battlefield.
**Lucas Perry:** Could you unpack a little bit more about what autonomy really is or means because it seems to me that it’s more like an aggregation of a bunch of different technologies like computer vision and image recognition, and other kinds of machine learning that are aggregated together. So could you just develop a little bit more about where we are in terms of the various technologies required for autonomy?
**Paul Scharre:** Yes, so autonomy is not really a technology, it’s an attribute of a machine or of a person. And autonomy is about freedom. It’s the freedom that a machine or a person is given to perform some tasks in some environment for some period of time. As people, we have very little autonomy as children and more autonomy as we grow up, we have different autonomy in different settings. In some work environments, there might be more constraints put on you; what things you can and cannot do. And it’s also environment-specific and task-specific. You might have autonomy to do certain things, but not other things. It’s the same with machines. We’re ultimately talking about giving freedom to machines to perform certain actions under certain conditions in certain environments.
There are lots of simple forms of autonomy that we interact with all the time that we sort of take for granted. A thermostat is a very simple autonomous system, it’s a machine that’s given a freedom to decide… decide, let’s put that in air quotes, because we come back to what it means for machines to decide. But basically, the thermostat is given the ability to turn on and off the heat and air conditioning based on certain parameters that a human sets, a desired temperature, or if you have a programmable thermostat, maybe the desired temperature at certain times a day or days of the week, is a very bounded kind of autonomy. And that’s what we’re talking about for any of these machines. We’re not talking about freewill, or whether the machine develops consciousness. That’s not a problem today, maybe someday, but certainly not with the machines we’re talking about today. It’s a question really of, how much freedom do we want to give machines, or in this case, weapons operating on the battlefield to make certain kinds of choices?
Now we’re still talking about weapons that are designed by people, built by people, launched by people, and put into the battlefields to perform some mission, but there might be a little bit less human control than there is today. And then there are a whole bunch of questions that come along with that, like, is it going to work? Would it be effective? What happens if there are accidents? Are we comfortable with seeding that degree of control over to the machine?
**Lucas Perry:** You mentioned the application of this kind of technology in the context of battlefields. Is there also consideration and interest in the use of lethal autonomous weapons in civilian contexts?
**Paul Scharre:** Yes, I mean, I think there’s less energy on that topic. You certainly see less of a poll from the police community. I mean, I don’t really run into people in a police or Homeland Security context, saying we should be building autonomous weapons. Well, you will hear that from militaries. Oftentimes, groups that are concerned about the humanitarian consequences of autonomous weapons will raise that as a concern. There’s both what might militaries do in the battlefield, but then there’s a concern about proliferation. What happens when the technology proliferates, and it’s being used for internal security issues, could be a dictator, using these kinds of weapons to repress the population. That’s one concern. And that’s, I think, a very, very valid one. We’ve often seen one of the last checks against dictators, is when they tell their internal security forces to fire on civilians, on their own citizens. There have been instances where the security forces say, “No, we won’t.” That doesn’t always happen. Of course, tragically, sometimes security forces do attack their citizens. We saw in the massacre in Tiananmen Square that Chinese military troops are willing to murder Chinese citizens. But we’ve seen other instances, certainly in the fall of the Eastern Bloc at the end of the Cold War, that security forces… these are our friends, these are our family. We’re not going to kill them.
And autonomous weapons could take away one of those checks on dictators. So I think that’s a very valid concern. And that is a more general concern about the proliferation of military technology into policing even here in America. We’ve seen this in the last 20 years, is a lot of military tech ends up being used by police forces in ways that maybe isn’t appropriate. And so that’s, I think, a very valid and legitimate sort of concern about… even if this isn’t kind of the intended use, what would that look like and what are the risks that could come with that, and how should we think about those kinds of issues as well?
**Lucas Perry:** All right. So we’re developing autonomy in systems and there’s concern about how this autonomy will be deployed in context where lethal force or force may be used. So the question then arises and is sort of the question at the heart of lethal autonomous weapons: Where is it that we will draw a line between acceptable and unacceptable uses of artificial intelligence in autonomous weapons or in the military, or in civilian policing? So I’m curious to know how you think about where to draw those lines or that line in particular, and how you would suggest to any possible regulators who might be listening, how to think about and construct lines of acceptable and unacceptable uses of AI.
**Paul Scharre:** That’s a great question. So I think let’s take a step back first and sort of talk about, what would be the kinds of things that would make uses acceptable or unacceptable. Let’s just talk about the military context just to kind of bound the problem for a second. So in the military context, you have a couple reasons for drawing lines, if you will. One is legal issues, legal concerns. We have a legal framework to think about right and wrong in war. It’s called the laws of war or international humanitarian law. And it lays out a set of parameters for what is acceptable and what… And so that’s one of the places where there has been consensus internationally, among countries that come together at the United Nations through the Convention on Certain Conventional Weapons, the CCW process, where we’ve had conversations going on about autonomous weapons.
One of the points of consensus among nations is that existing international humanitarian law, or the laws of war, would apply to autonomous weapons, and that any uses of autonomy in weapons have to be in a manner that complies with the laws of war. Now, that may sound trivial, but it’s a pretty significant point of agreement, and it’s one that places some bounds on things that you can or cannot do. So, for example, one of the baseline principles of the laws of war is the principle of distinction. Military forces cannot intentionally target civilians. They can only intentionally target other military forces. And so any use of force needs to comply with this principle of distinction. So right off the bat, that’s a very important and significant one when it comes to autonomous weapons. If you had a weapon that could not be used in a way that complies with this principle of distinction, it would be illegal under the laws of war and you wouldn’t be able to build it.
And there are other principles as well: principles about proportionality, ensuring that any collateral damage that affects civilians or civilian infrastructure is not disproportionate to the military necessity of the target that is being attacked; principles about avoiding unnecessary suffering of combatants; and respecting anyone who’s rendered out of combat, or the appropriate term is “hors de combat,” who has surrendered or been incapacitated, and not targeting them. So these are very significant rules that any weapon system, autonomous weapon or not, has to comply with, and that any use of any weapon, any use of force, has to comply with. And so that is something that constrains considerably what nations are permitted to do in a lawful fashion. Now, do people break the laws of war? Well, sure, that happens. We’re seeing that happen in Syria today, where Bashar al-Assad is murdering civilians. There are examples of rogue actors and non-state terrorist groups and others that don’t care about respecting the laws of war. But those are very significant bounds.
Now, one could also say that there are more bounds we should put on autonomous weapons: moral or ethical considerations that exist outside the laws of war, that aren’t written down in a formal way in the laws of war, but that are still important, and I think those often come to the fore with this topic. And there are other ones that might apply in terms of reasons why we might be concerned about stability among nations. But the laws of war are at least a very valuable starting point for this conversation about what is acceptable and not acceptable. I want to make clear, I’m not saying that the laws of war are insufficient and we need to go beyond them and add in additional constraints. I’m actually not saying that. There are people that make that argument, and I want to give credit to their argument and not pretend it doesn’t exist. I want the listeners to understand the full scope of arguments about this technology. But I’m not saying myself that’s the case necessarily. But I do think that there are concerns that people raise.
For example, people might say it’s wrong for a machine to decide whom to kill, it’s wrong for a machine to make the decision about life and death. Now I think that’s an interesting argument. Why? Why is it wrong? Is it because we think the machine might get the answer wrong, that it might not perform as well as a human? Or is it because we think there’s something intrinsic about weighing the value of life and death that we want humans to do, appreciating the value of another person’s life before making one of these decisions? Those are all very valid counterarguments that exist in this space.
**Lucas Perry:** Yes. So thanks for clarifying that. For listeners, it’s important here to clarify the difference you’re describing: some people would find the laws of war to be sufficient in the case of autonomous weapons, and some would not.
**Paul Scharre:** Yes, I mean, this is a hotly debated issue. I mean, this is in many ways the crux of the issue surrounding autonomous weapons. I’m going to oversimplify a bit because you have a variety of different views on this, but you certainly have some people whose view is: look, we have a set of structures called the laws of war that tell us what right and wrong look like in war. And most of the things that people are worried about are already prohibited under the laws of war. So for example, if what you’re worried about is autonomous weapons running amok murdering civilians, that’s illegal under the laws of war. And so one of the points of pushback that you’ll sometimes get from governments or others to the idea of creating an ad hoc treaty that would ban autonomous weapons, or some class of autonomous weapons, is: look, some of the things people worry about are already prohibited under the laws of war, and passing another law to say the thing that’s already illegal is now illegal again doesn’t add any value.
There’s a group of arguments that says the laws of war dictate effects in the battlefield. They dictate what the end effect is; they don’t really address the process. And there’s a line of reasoning that says that’s fine, the process doesn’t matter. If someday we could use autonomous weapons in a way that was more humane and more precise than people, then we should use them. And just the same way that self-driving cars will someday save lives on roads by avoiding accidents, maybe we could build autonomous weapons that would avoid mistakes in war and avoid accidentally targeting civilians, and therefore we should use them, and let’s just focus on complying better with the laws of war. That’s one school of thought.
Then there’s a whole bunch of reasons why you might say, well, that’s not enough. One reason might be, well, militaries’ compliance with the laws of war isn’t that great. Actually, people talk a good game, but when you look at military practice, especially if the rules for using a weapon are kind of convoluted and you have to take a bunch of additional steps in order to use it in a way that’s lawful, that kind of goes out the window in conflict. A real-world and tragic historical example of this was experienced throughout the 20th century with landmines, where landmines were permitted to be used lawfully, and still are if you’re not a signatory to the Ottawa Convention, provided you put in place a whole bunch of procedures to make sure that minefields are marked and the locations of minefields are known, so they can be demined after the conflict.
Now, in practice, countries weren’t doing this. I mean, many of them were just scattering mines from the air. And so we had this horrific problem of millions of mines around the globe persisting after conflicts. The response was basically this global movement to ban mines entirely, to say, look, it’s not that it’s inconceivable to use mines in a way that’s humane, but it requires a whole bunch of additional efforts that countries aren’t making, and so we have to take this weapon away from countries because they are not actually using it in a way that’s responsible. That’s a school of thought with autonomous weapons: look, maybe you can conjure up thought experiments about how you can use autonomous weapons in these very specific instances and it’s acceptable, but once you start any use, it’s a slippery slope, and next thing you know, it’ll be just like landmines all over again, and they’ll be everywhere and civilians will be being killed. And so the better thing to do is to just not let this process even start, and not let militaries have access to the technology, because they won’t use it responsibly regardless of whether responsible use is theoretically possible. That’s a pretty reasonable and defensible argument. And there are other arguments too.
One could say, actually, it’s not just about avoiding civilian harm; there’s something intrinsic about weighing the value of an enemy soldier’s life, and we want humans involved in that process. And if we took humans away from that process, we’d be losing something that, sure, maybe isn’t written down in the laws of war, but maybe it’s not written down because it was always implicit that humans would be making these choices. And now that this decision is in front of us, we should write it down: that humans should be involved in these decisions and should be weighing the value of a human life, even an enemy soldier’s. Because if we give that up, we might give up something that is a constraint on violence in war, something that holds back some of the worst excesses of violence; we might even give up something about ourselves. And this is, I think, a really tricky issue, because there’s a cost to humans making these decisions. It’s a very real cost. It’s a cost in post-traumatic stress that soldiers face, and moral injury. It’s a cost in lives that are ruined, not just the people who are killed on the battlefield, but the people who have to live with that violence afterwards, and the ramifications, and even the choices that they themselves make. It’s a cost in suicides of veterans, and substance abuse and destroyed families and lives.
And so to say that we want humans to stay involved, to be the ones responsible for killing, is to say: I’m choosing that cost. I’m choosing to absorb and acknowledge and take on the cost of post-traumatic stress and moral injury, and also the burdens that come with war. And I think it’s worth reflecting on the fact that the burdens of war are distributed very unequally, not just between combatants, but also within the societies that fight. As a democratic nation in the United States, we make a decision as a country to go to war, through our elected representatives. And yet it’s a very tiny slice of the population that bears the burden for that war, not just putting themselves at risk, but also carrying the moral burden of it afterwards.
And so if you say, well, I want there to be someone who’s going to live with that trauma for the rest of their life, I think that’s an argument that one can make, but you need to acknowledge that that’s real. And that’s not a burden that we all share equally; it’s a burden we’re placing on the young women and men that we send off to fight on our behalf. The flip side is, if we didn’t do that, if we fought a war and no one felt the moral burden of killing, no one slept uneasy at night afterwards, what would that say about us as a society? I think these are difficult questions. I don’t have easy answers to them. But I think these are challenging things for us to wrestle with.
**Lucas Perry:** Yes, I mean, there’s a lot there. I think that was a really good illustration of the different points of view on this. I hadn’t heard or considered much the implications of post-traumatic stress, and I think the moral burden, as you called it, would be a factor that autonomous weapons would relieve in countries which have the power to develop them. Speaking personally, I think I find the arguments most compelling about the necessity of having human beings integrated in the process of decision making with regards to killing. If you remove that, then you’re removing a deep aspect of humanity, one which sometimes does not follow the laws of war. We currently don’t have complex enough preference learning techniques and machine learning techniques to actually train autonomous weapon systems on everything that human beings value and care about, and there are situations where deviating from following the laws of war may be the best thing to do. I’m not sure if you have any thoughts about this, but I think you did a good job of illustrating all the different positions, and that’s just my initial reaction to it.
**Paul Scharre:** Yes, these are tricky issues. And so I think one of the things I want to try to do for listeners is to lay out the landscape of what these arguments are, and some of the pros and cons of them, because I think people will often oversimplify on all sides. People will say, well, we should have humans involved in making these decisions. Well, humans involved where? If I get into a self-driving car that has no steering wheel, it’s not true that there’s no human involvement. The type of human involvement has just changed in terms of where it exists. So now, instead of manually driving the car, I’m still choosing the car’s destination, I’m still telling the car where I want to go. I’m not just getting into the car and saying, car, take me wherever you want to go. So the type of human involvement has changed.
So what kind of human relationship do we want with decisions about life and death in the battlefield? What type of human involvement is right or necessary or appropriate, and for what reason? For a legal reason, for a moral reason? These are interesting challenges that we haven’t had to confront before. These arguments, I think, unfairly get simplified on all sides. Conversely, you hear people say things like, it doesn’t matter, because these weapons are going to get built anyway. That’s a little bit overly simplistic in the sense that there are examples of successes in arms control. It’s hard to pull off. There are many examples of failures as well, but there are places where civilized nations have walked back from some technologies to varying degrees of success, whether it’s chemical weapons or biological weapons or other things. So what does success look like in constraining a weapon? Is it that no one ever uses the weapon? That most nations don’t use it? That it’s not used in certain ways? These are complicated issues.
**Lucas Perry:** Right. So let’s talk a little bit here about integrating human emotion and human reasoning and humanity itself into the autonomous weapon systems and the life or death decisions that they will be making. So hitting on a few concepts here, if you could help explain what people mean when they say human in the loop, and human on the loop, and how this relates to the integration of human control and human responsibility and human accountability in the use of autonomous weapons.
**Paul Scharre:** Let’s unpack some of this terminology. Broadly speaking, people tend to use the terms human in the loop, on the loop, or out of the loop. In a semi-autonomous weapon, the human is in the loop, which means that for any semi-autonomous process or system, the machine takes an action and then it pauses and waits for the human to take a positive action before proceeding. A good example of a human-in-the-loop system is the automated backups on your computer when they require you to push a button to say okay to do the backup now. They’re waiting for a human action before proceeding. A human-on-the-loop system, or one with supervisory control, is one where the human doesn’t have to take any positive action for the system to proceed. The human can intervene, so the human can sit back, and if you want to, you can jump in.
An example of this might be your thermostat. When you’re in the house, you’ve already set the parameters; it’ll turn the heat and air conditioning on on its own, but if you’re not happy with the outcome, you can change it. Now, when you’re out of the house, your thermostat is operating in a fully autonomous fashion; in this respect the human is out of the loop. You don’t have any ability to intervene for some period of time. It’s really all about time duration. For supervisory control, how much time does the human have to identify that something is wrong and then intervene? So for example, take something like the Tesla autopilot. That’s one where the human is in a supervisory control capacity. With the autopilot function in a car, the human doesn’t have to do anything, the car’s driving itself, but they can intervene.
The problem with some of those control architectures is whether the time you are giving people to identify that there’s a problem, figure out what’s going on, decide to take action, and intervene is really realistic before harm happens. Is it realistic that a human can be not paying attention, and then all of a sudden identify that the car is in trouble and leap into action to avoid an accident when you’re speeding on the highway at 70 miles an hour? You can see quite clearly in a number of fatal accidents with these autopilots that that’s not feasible. People actually aren’t capable of doing that. So you’ve got to think about what the role of the human is in this process. It’s not just a semi-autonomous or supervised autonomous or fully autonomous process. It’s one where the human is involved in some varying capacity.
And what are we expecting the human to do? Same thing with something that’s fully autonomous. We’re talking about a system that’s operating on its own for some period of time. How long before it checks back in with a person? What information is that person given? What is their capacity to intervene, and how badly could things go wrong when the person is not involved? And when we talk about weapons specifically, there are lots of weapons that operate in a semi-autonomous fashion today, where the human is choosing the target but there’s a lot of automation in IDing targets, presenting information to people, and in actually carrying out an attack once the human has chosen a target. There are many, many weapons that are what the military calls fire-and-forget weapons; once one is launched, it’s not coming back. Those have been widely used for 70 years, since World War Two. So that’s not new.
There are a whole bunch of weapons that operate in a supervisory autonomy mode, where the human is on the loop. These are generally used in a more limited fashion for immediate, localized defense of air bases or ships or ground vehicles, defending against air or missile or rocket attack, particularly when the speed of these attacks might overwhelm people’s ability to respond. If humans had to be in the loop, pushing a button every time there’s a missile coming in, you could have so many missiles coming in so fast that you have to simply activate an automatic defensive mode that will shoot down all of the missiles based on some pre-programmed parameters that humans put into the system. This exists today. These systems have been around for decades, since the 1980s, and they’re in widespread use by at least 30 countries around the globe. So that’s a type of weapon system that’s already in operation, these supervisory autonomous weapons. What really would be new would be fully autonomous weapons that operate on their own. Humans are still building them and launching them, humans put them into operation, but then there’s some period of time where they are able to search a target area for targets, and if they find those targets, then based on some programming that was designed by people, they identify the targets and attack them on their own.
**Lucas Perry:** Would you consider that out of the loop for that period of time?
**Paul Scharre:** Exactly. So over that period of time, humans are out of the loop on the decision over which targets they’re attacking. That would be largely a new development in war. There are some isolated cases of weapon systems that cross this line, but by and large that would be new. That’s at least the starting point of what people might be concerned about. Now, you might envision things that are more advanced beyond that, but that’s the near-term development that could be on the horizon in the next five to 15 years: telling the weapon system, go into this area, fly around or search around underwater, find any ships of this type, and attack them, over some period of time and space. And that changes the human’s relationship with the use of force a little bit. It doesn’t mean the human’s not involved at all, but the human’s not quite as involved as they used to be. And is that something we’re comfortable with? And what are the implications of that kind of shift in warfare?
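To make the distinction concrete, here is a minimal illustrative sketch in Python of the three control modes described above. The names (ControlMode, authorize_engagement) and the parameters are hypothetical, chosen only to show where the human action sits in each mode; this is not drawn from any real weapon system or control standard.

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomous"        # human must approve each engagement
    HUMAN_ON_THE_LOOP = "supervised autonomous"  # system acts, human can intervene
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous"   # system acts alone for a bounded period

def authorize_engagement(mode: ControlMode,
                         human_approved: bool,
                         human_vetoed: bool,
                         seconds_since_human_checkin: float,
                         max_unsupervised_seconds: float) -> bool:
    """Return True if the system may proceed with an engagement under this mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Pause and wait for a positive human action before proceeding.
        return human_approved
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Proceed by default; a human supervisor can veto at any time.
        return not human_vetoed
    # Out of the loop: proceed on pre-programmed parameters, but only within
    # the bounded window of operation a human set when launching the system.
    return seconds_since_human_checkin <= max_unsupervised_seconds

# Example: an on-the-loop system proceeds unless the supervisor intervenes.
print(authorize_engagement(ControlMode.HUMAN_ON_THE_LOOP,
                           human_approved=False,
                           human_vetoed=False,
                           seconds_since_human_checkin=0.0,
                           max_unsupervised_seconds=600.0))  # True
```

The only point of the sketch is the location of the human action: in the loop, a positive human approval is required before each action; on the loop, the system proceeds unless a human vetoes in time; out of the loop, the system operates on pre-set parameters for a bounded period with no human check at all.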
**Lucas Perry:** So the relevant things here are how this helps to integrate human control and human responsibility and human accountability into autonomous weapons systems. And just hearing you speak about all of that, it also seems like very relevant questions have to do with human psychology, about what human beings are actually likely to be able to do. And then also, I think you articulately put the practical question of whether or not people will be able to react to certain threats given certain situations. So in terms of trying to understand acceptable and unacceptable uses of autonomous weapons, that seems to supervene upon a lot of these facets of benefits and disadvantages of in the loop, on the loop, and out of the loop for different situations and different risks, plus how much we’re willing to automate killing and death and remove human decision making from some of these situations or not.
**Paul Scharre:** Yes, I mean, I think what’s challenging in this space is that it would be nice, it would be ideal, if we could reach agreement among nations on a sort of lethal laws of robotics. In Isaac Asimov’s books about robots you have these three laws of robotics. Well, those laws aren’t going to work, because one of them is not harming a human being, and that’s not going to work in the military context. But could there be some agreement among countries on lethal laws of robots that would govern the behavior of autonomous systems in war, that might say, these are the things that are acceptable or not? Maybe. Maybe that’s possible someday. I think we’re not there yet; there is certainly not agreement, there is widespread disagreement among nations about what approach to take. But a good starting position is trying to understand what goals we want to achieve. And I think you’re right that we need to keep the human front and center. But I think there is a really important asymmetry between humans and machines that’s worth highlighting, which is to say that the laws of war govern effects in the battlefield, and in that sense the laws of war don’t say the human has to pick every target; the laws of war say that the use of force must be executed according to certain principles of distinction and proportionality and other things.
One important asymmetry in the laws of war, however, is that machines are not legal agents. Only humans are legal agents. And so it’s ultimately humans that are responsible for complying with the laws of war. You can’t put a machine on trial for a war crime. It doesn’t make sense. It doesn’t have intentionality. So it’s ultimately a human responsibility to ensure this kind of compliance with the laws of war. A good starting point for the conversation, then, is to try to understand: if we start from the proposition that it’s a human responsibility to ensure compliance with the laws of war, then what follows from that? What bounds does that place on human involvement? One of the early parts of the international conversations on autonomous weapons came from a very technologically based approach: to say, well, based on the technology, draw these lines, put these limits in place. The problem with that approach is not that you can’t do it.
The problem is: the state of the technology when? In 2014, when discussions on autonomous weapons started, at the very beginning of the deep learning revolution? Today, in 2020? Our estimate of where the technology might be in five years or 10 years or 50 years? The technology is moving so quickly that any technologically based set of rules about how we should approach this problem, about what the appropriate use of machine versus human decision making in the use of force is, is one that we may look back on in 10 or 20 years and say is wrong. We could get it wrong in the sense that we might be leaving valuable technological opportunities on the table, banning technology that, if we used it, actually might make war more humane and reduce civilian casualties; or we might be permitting technologies that turn out in retrospect to be problematic, and we shouldn’t have done that.
And one of the things we’ve seen historically when you look at attempts to ban weapons is that bans that are technologically based don’t always fare very well over time. So for example, the early bans on poison gas banned the use of poison gas launched from artillery shells. They actually allowed poison gas administered via canisters, and so the first use of poison gas in World War One by the Germans was canister-based: they just laid out little canisters and then opened the valves. Now, that turns out to be a not very practical way of using poison gas in war, because you have someone basically on your side standing over this canister, opening a valve and then getting gassed. And so it’s a little bit tricky, but it was technically permissible.
One of the things that can be challenging is that it’s hard to foresee how the technology is going to evolve. A better approach, and one that we’ve seen the international dialogue shift towards, is a human-centered approach: to start from the position of the human and say, look, if we had all the technology in the world in war, what decisions would we want humans to make and why? Not because the technology cannot make those decisions, but because it should not. I think that’s actually a very valuable starting place for this conversation, because the technology is moving so quickly.
What role do we want humans to play in warfare, and why do we think this is the case? Are there some tasks in war, or some decisions, that we think are fundamentally human, that should be decisions only humans make and that we shouldn’t hand off to machines? I think that’s a really valuable starting position from which to better interrogate how we want to use this technology going forward, because the landscape of technological opportunity is going to keep expanding. So what do we want to do with this technology? How do we want to use it? Are there ways that we can use this technology that keep humans in control of the use of force in the battlefield, keep humans legally and morally and ethically responsible, but make war more humane, make war more precise, and reduce civilian casualties, without losing our humanity in the process?
**Lucas Perry:** So I guess the thought experiment, there would be like, if we had weapons that let us just delete people instantly without consequences, how would we want human decision making to be integrated with that? Reflecting on that also makes me consider this other point that I think is also important for my considerations around lethal autonomous weapons, which is the necessity of integrating human experience in the consequences of war, the pain and the suffering and the carnage and the PTSD as being almost necessary vehicles to some extent to make us tired of it to integrate how horrible it is. So I guess I would just be interested in integrating that perspective into it not just being about humans making decisions and the decisions being integrated in the execution process, but also about the experiential ramifications of being in relation to what actually happens in war and what violence is like and what happens in violence.
**Paul Scharre:** Well, I think we want to unpack a little bit some of the things you’re talking about. Are we talking about ensuring that there is an accurate representation, to the people carrying out the violence, of what’s happening on the other end, that we’re not sanitizing things? I think that’s a fair point. When we begin to put more psychological barriers between the person making the decision and the effects, it might be easier for them to carry out larger-scale attacks, versus actually making war more horrible. Now that’s a line of reasoning, I suppose, to say we should make war more horrible so there’ll be less of it. I’m not sure we would get the outcome that there is less of it; we just might have more horrible war. But that’s a different issue. Those are more difficult questions.
I will say that I often hear philosophers raising points about skin in the game. I rarely hear them being raised by people who have had skin in the game, who have experienced up close and in a personal way the horrors of war. And I’m less convinced that there’s a lot of good that comes from the tragedy of war. I think there’s value in us trying to think about how we make war less terrible, how we reduce civilian casualties, how we have less war. But this often comes up in the context of technologies, like: we should somehow put ourselves at risk. No military does that; no military has ever done that in human history. The whole purpose of militaries getting technology and training is to get an advantage on the adversary. It’s not a fair fight. It’s not supposed to be; it’s not a boxing match. So these are things worth exploring, but we need to come from the standpoint of the reality of what war is, not from a philosophical exercise about what war might be, and deal with the realities of what actually occurs on the battlefield.
**Lucas Perry:** So I think that’s a really interesting point. And as someone with a background and interest in philosophy, it’s quite funny. So you do have experience in war, right?
**Paul Scharre:** Yes, I’ve fought in Iraq and Afghanistan.
**Lucas Perry:** Then it’s interesting to me, if you see this distinction with people who are actually veterans, who have experienced the violence and carnage and tragedies of war. Is the perspective here that PTSD and the trauma associated with these kinds of experiences are less salient for decreasing people’s willingness or decision to engage in further war? Is that your claim?
**Paul Scharre:** I don’t know. No, I don’t know. I don’t know the answer to that. I don’t know. That’s some difficult question for political scientists to figure out about voting preferences of veterans. All I’m saying is that I hear a lot of claims in this space that I think are often not very well interrogated or not very well explored. And there’s a real price that people pay for being involved. Now, people want to say that we’re willing to bear that price for some reason, like okay, but I think we should acknowledge it.
**Lucas Perry:** Yeah, that make sense. I guess the thing that I was just pointing at was it would be psychologically interesting to know if philosophers are detached from the experience, maybe they don’t actually know about the psychological implications of being involved in horrible war. And if people who are actually veterans disagree with philosophers about the importance of there being skin in the game, if philosophers say that skin in the game reduces willingness to be in war, if the claim is that that wouldn’t actually decrease their willingness to go to war. I think that seems psychologically very important and relevant, because there is this concern about how autonomous weapons and integrating human decision making to lethal autonomous weapons would potentially sanitize war. And so there’s the trade off between the potential mitigating effects of being involved in war, and then also the negative effects which are incurred by veterans who would actually have to be exposed by it and bring the trauma back for communities to have deeper experiential relation with.
**Paul Scharre:** Yes, and look, we don’t do that, right? We had a whole generation of veterans come back from Vietnam, and did we as a society listen to their stories and understand them? No. I have heard over the years people raise this issue, whether it’s drones or autonomous weapons, this issue of having skin in the game, either physically being at risk or psychologically. And I’ve rarely heard it raised by people for whom it was their own skin on the line. People often have very gut emotional reactions to this topic. And I think that’s valuable, because it’s speaking to something that resonates with people. Whether it’s an emotional reaction opposed to autonomous weapons, which you often get from many people who go, there’s something about this, it doesn’t feel right, I don’t like this idea; or the opposite reaction, other people who say, “wouldn’t this make war great, it’s more precise and more humane,” to which my reaction is often a little bit like… have you ever interacted with a computer? They break all the time. What are you talking about?
But all of these things, I think, are speaking to instincts that people have about this technology, and it’s worth asking questions to better understand: what is it that we’re reacting to? Is it an assumption about the technology, is it an assumption about the nature of war? One of the concerns I’ve heard raised is that this will impersonalize war and create more distance between the people killing. If you buy that argument, that impersonal war is a bad thing, then you would say the greatest thing would be deeply personal war, like hand-to-hand combat. It appears to harken back to some glorious age of war when people looked each other in the eye and hacked each other to bits with swords, like real humans. It’s not that that kind of war never occurred in human history. In fact, we’ve had conflicts like that, even in recent memory, that involved hand-to-hand weapons. They tend not to be very humane conflicts. When we see civil violence, when people are murdering each other with machetes or garden tools or other things, it tends to be horrific communal violence, mass atrocities, in Rwanda or Cambodia or other places. So I think it’s important to deal with the reality of what war is and not some fantasy.
**Lucas Perry:** Yes, I think that that makes a lot of sense. It’s really tricky. And the psychology around this I think is difficult and probably not studied enough.
**Paul Scharre:** There’s real war that occurs in the world, and then there’s the fantasy of war that we, as a society, tell ourselves when we go to movie theaters and watch stories about soldiers who are heroes, who conquer the bad guys. We’re told a fantasy, and it’s a fantasy that allows society to perpetuate wars, that allows us to send young men and women off to die. And it’s not to say that there are no circumstances in which a nation might need to go to war to defend itself or its interests, but we dress war up in these pretty clothes, and let’s not confuse that with the reality of what actually occurs. People say, well, with autonomous weapons, we won’t have people weighing the value of life and death. I mean, that happens sometimes, but it’s not like every time someone dies in war there was this thoughtful exercise where a committee sat around and said, “Do we really need to kill this person? Is it really appropriate?” There’s a lot of dehumanization that goes on on the battlefield. So I think this is what makes this issue very challenging. Many of the objections to autonomous weapons are objections to war. That’s what people are actually objecting to.
The question isn’t, is war bad? Of course war’s terrible. The question is: how do we find ways going forward to use technology that may make war more precise and more humane without losing our humanity in the process, and are there ways to do that? It’s a challenging question. I think the answer is probably yes, but it’s one that’s going to require a lot of interrogation to get there. It’s a difficult issue because it’s also a dynamic process where there’s an interplay between competitors. If we get this wrong, we can easily end up in a situation where there’s less human control and more violence in war. There are lots of opportunities to make things worse as well.
If we could make war perfect, that would be great, in terms of no civilian suffering and reducing the suffering of enemy combatants and the number of lives lost. If we could push a button and make war go away, that would be wonderful. Those things would all be great. The more practical question really is: can we improve upon the status quo, and how can we do so in a thoughtful way, or at least not make things worse than today? And I think those are hard enough problems to try to address.
**Lucas Perry:** I appreciate that you bring a very holistic, well-weighed perspective to the varying sides of this issue. So these are all very big and difficult questions. Are you aware of people actually studying whether some of these effects exist or not, and whether they would actually sanitize things or not? Or is this basically all just coming down to people’s intuitions and simulations in their heads?
**Paul Scharre:** Some of both. There’s really great scholarship being done on autonomous weapons. Certainly there’s a robust array of legally based scholarship, people trying to understand how the laws of war might interface with autonomous weapons. But there’s also been work done thinking about some of these human psychological interactions. Missy Cummings, who’s at Duke and runs the humans and automation lab there, has done some work on human-machine interfaces on weapon systems to think through some of these concerns. There’s probably been less attention paid to the human-machine interface dimension of this and the human psychological dimension of it. But there’s been a lot of work done by people like Heather Roff, people at Article 36, and others, thinking about concepts of meaningful human control and what that might look like in weapon systems.
I think one of the things that’s challenging across the board in this issue is that it is a politically contentious topic. You have levels of this debate going on: you have scholars trying to understand the issue, and then you also have a whole array of politically motivated groups, international organizations, civil society organizations, and countries duking it out, basically, at the UN and in the media about where we should go with this technology. And you get a lot of motivated reasoning on all sides about what the answer should be. So for example, one of the things that fascinates me is I’ll often hear people say, autonomous weapons are terrible, they’ll have a terrible outcome, and we need to ban them now; and if we just pass a treaty and have enough political will, we could ban them. I’ll also hear people say a ban would be pointless, it wouldn’t work, and anyways, wouldn’t autonomous weapons be great? There are other possible beliefs. One could say that a ban is feasible, but the weapons aren’t that big of a deal. So it just seems to me like there’s a lot of politically motivated reasoning that goes on in this debate, which makes it very challenging.
**Lucas Perry:** So one of the concerns around autonomous weapons has to do with accidental escalation of warfare and conflict. Could you explore this point and explain what some strategies might be to prevent accidental escalation of warfare as AI is increasingly being used in the military?
**Paul Scharre:** Yes, so I think in general you could bucket concerns about autonomous weapons into two categories. One is a concern that they may not function very well and could have accidents. Those accidents could lead to civilian casualties, or they could lead to accidental escalation among nations in a crisis, with military forces operating in close proximity to one another. This happens with people. And you might worry about accidents with autonomous systems: maybe one shoots down an enemy aircraft, and there’s an escalation and people are killed. And then how do you unwind that? How do you communicate to your adversary: we didn’t mean to do that, we’re sorry? How do you do that in a period of tension? That’s a particular challenge.
There’s a whole other set of challenges that come from the possibility that the weapons might work, and that gets to some of these deeper questions about the role of humans in decision making about life and death. But this issue of accidental escalation comes into the category of: they don’t work very well, they’re not reliable. And this is the case for a lot of AI and autonomous technology today. Which isn’t to say it doesn’t work at all; if it didn’t work at all, it would be much easier. There’d be no debates about bias in facial recognition systems if they never identified faces. There’d be no debates about safety with self-driving cars if the car couldn’t go anywhere. The problem is that a lot of these AI-based systems work very well in some settings, and then if the settings change ever so slightly, they don’t work very well at all anymore. The performance can drop off very dramatically; they’re not very robust to changes in environmental conditions. And this is a huge problem for the military, because in particular the military doesn’t get to test its systems in their actual operating environment.
So you can take a car, and you can take it on the roads, and you can test it in an actual driving environment. And we’ve seen car companies rack up 10 million miles or more of driving data. Then they can go back and run simulations. Waymo has said that they run 10 million miles of simulated driving every single day, and they can simulate different lighting conditions and different environmental conditions. Well, the military can build simulations too, but simulations of what? What will the next war look like? We don’t know, because we haven’t fought it yet. The good news is that war is very rare, which is great. But that also means that for these kinds of systems, we don’t necessarily know the operating conditions that they’ll be in, and so there is this real risk of accidents. And it’s exacerbated by the fact that this is also a very adversarial environment. So you actually have an enemy who’s trying to trick your system and manipulate it. That adds another layer of complications.
Driving is a little bit competitive, maybe somebody doesn’t want to let you into the lane, but the pedestrians aren’t generally trying to get hit by cars. That’s a whole other complication in the military space. So all of that leads to concerns that the systems may do okay in training, and then we take them out into the real world and they fail, and they fail in a pretty bad way. If it’s a weapon system that is making its own decisions about whom to kill, it could be that it fails in a benign way, where it targets nothing, and that’s a problem for the military that built it; or it fails in a more hazardous way, in a dangerous way, and attacks the wrong targets. And when we’re talking about an autonomous weapon, the essence of this autonomous weapon is making its own decisions about which targets to attack and then carrying out those attacks. If you get that wrong, there could be pretty significant consequences. One of those could be civilian harm. And that’s a major concern. There are processes in place for vetting that operationally, in test and evaluation; are those sufficient? I think there are good reasons to say that maybe they’re not sufficient, or not completely sufficient, and they need to be revised or improved.
And I’ll point out, and we can come back to this, that the US Defense Department actually has a more stringent procedure in place for reviewing autonomous weapons than for other weapons, beyond what the laws of war require; the US is one of the few countries that has this. But then there’s also the question of accidental escalation, which also could happen. Would that lead to an entire war? Probably not. But it could make things a lot harder when trying to defuse tensions in a crisis, and that could be problematic. We just had an incident not too long ago where the United States carried out an attack against a very senior Iranian general, General Soleimani, who was the head of the Iranian Quds Force, and killed him in a drone strike. And that was an intentional decision made by a person somewhere in the US government.
Now, did they fully think that through? I don’t know, that’s a different question. But a human made that decision in any case. Well, that was a huge escalation of hostilities between the US and Iran. And there was a lot of uncertainty afterwards about what would happen, and Iran launched some ballistic missiles against US troops in Iraq. And whether that’s it, or there’s more retaliation to come, I think we’ll see. But it could be a much more challenging situation if you had a situation in the future where an autonomous weapon malfunctioned and took some action, and now the other side might feel compelled to respond. They might say, well, we have to, we can’t let this go, because human emotions are on the line, and national pride and prestige, and they feel like they need to maintain a principle of deterrence and they need to retaliate. So these could all be very complicated things if you had an accident with an autonomous weapon.
**Lucas Perry:** Right. And so an adjacent issue that I’d like to explore now is how a potential arms race can have interplay with issues around accidental escalation of conflict. So is there already an arms race brewing for autonomous weapons? If so, why and what could potentially be done to deescalate such a situation?
**Paul Scharre:** If there’s an arms race, it’s a very strange one, because no one is building the weapons. We see militaries advancing in robotics and autonomy, but we don’t really see this rush to build autonomous weapons. I struggle to point to any programs that I’m aware of in militaries around the globe that are clearly oriented toward building fully autonomous weapons. I think there are lots of places where, much like the incremental advancements of autonomy in cars, you can see more autonomous features in military vehicles and drones and robotic systems and missiles. They’re adding more autonomy. And one might be validly concerned about where that’s going. But it’s simply not the case that militaries have declared their intention: we’re going to build autonomous weapons, and here they are, and here’s our program to build them. I would struggle to use the term arms race. It could happen; maybe we’re at the starting line of an arms race. But I don’t think we’re in one today by any means.
It’s worth also asking, when we say arms race, what do we mean and why do we care? This is, again, one of these terms that’s often thrown around. You’ll hear this with the concept of autonomous weapons or AI: people say we shouldn’t have an arms race. Okay. Why? Why is an arms race a bad thing? Militaries normally invest in new technologies to improve their national defense. That’s a normal activity. So if you say arms race, what do you mean by that? Is it beyond normal activity? And why would that be problematic? In the political science world, the specific definitions vary, but generally an arms race is viewed as an increase in defense spending overall, or in a particular technology area, above normal levels of military modernization. Now, usually this is problematic for a couple of reasons. One could be that it ends up just being a massive national expenditure, as in the case of nuclear weapons during the Cold War, that doesn’t really yield any military value or increase anyone’s defense or security; it just ends up flushing a lot of money down the drain. That’s money that could be spent elsewhere, for pre-K education or healthcare or something else that might be societally beneficial, instead of building all of these weapons. So that’s one concern.
Another one might be that we end up in a world where the large number of these weapons, or the type of these weapons, makes us worse off. Are we really better off in a world where there are tens of thousands of nuclear weapons on hair-trigger alert versus a few thousand weapons or a few hundred weapons? Whether or not we ever get to zero, all things being equal, fewer nuclear weapons is probably better than more of them. So that’s another kind of concern, whether in terms of the violence and destructiveness of war if a war breaks out, or the likelihood of war and the stability of crises. This is an area where, certainly from a spending standpoint, we’re not in any way in an arms race for autonomous weapons or AI today; when you look at actual expenditures on, say, AI or autonomous features at large, they’re a small fraction of what militaries are spending overall.
And again, for autonomous weapons, there really aren’t, at least openly, declared programs that say, go build a fully autonomous weapon today. But even if that were the case, why is that bad? Why would a world where militaries are racing to build lots of autonomous weapons be a bad thing? I think it would be a bad thing, but I think it’s also worth answering that question, because it’s not obvious to everyone. This is something that’s often missing in a lot of these debates and dialogues about autonomous weapons; people may not share some of the underlying assumptions. It’s better to bring out these assumptions and explain: I think this would be bad for these reasons. Maybe it’s not intuitive to other people, they may not share those reasons, and articulating them could increase understanding.
For example, the FLI letter on autonomous weapons from a few years ago said, “the key question for humanity today is whether to start a global AI arms race or prevent it from starting. If any major military power pushes ahead with AI weapon development, the global arms race is virtually inevitable. And the endpoint of this technological trajectory is obvious. Autonomous weapons will become the Kalashnikovs of tomorrow.” I like the language, it’s very literary, “the Kalashnikovs of tomorrow.” It’s a very concrete image. But there’s a whole bunch of assumptions packed into those few sentences that maybe you don’t unpack in a letter that’s intended to galvanize public interest and attention, but that are worth really unpacking. What do we mean when we say autonomous weapons are the Kalashnikovs of tomorrow, and why is that bad? And what does that mean? Those are, I think, important things to draw out and better understand.
It’s particularly hard for this issue because the weapons don’t exist yet. And so it’s not actually like debates around something like landmines. There we could point to the mines and say, “this is a landmine, we all agree this is a landmine, this is what it’s doing to people.” And everyone could agree on the harm being caused. People might disagree on what to do about it, but there was agreement on what the weapon is and what the effect is. But for autonomous weapons, all these things are up for debate. Even the term itself is not clearly defined. And when I hear people describe it, they can be describing a whole range of things. Some people, when they say the words autonomous weapon, are envisioning a Roomba with a gun on it. Other people are envisioning the Terminator. Now, both of those things are probably bad ideas, but for very different reasons. And that is important to draw out in these conversations. When you say autonomous weapon, what do you mean? What are you envisioning? What are you worried about? Are you worried about certain types of scenarios or certain types of effects?
If we want to get to the place where we really, as a society, come together and grapple with this challenge, I think first and foremost better communication is needed. People may still disagree, but it’s much more helpful. Stuart Russell from Berkeley has talked a lot about the dangers of small anti-personnel autonomous weapons that could be widely proliferated. He made the Slaughterbots video that’s been seen millions of times on YouTube. That’s a very specific image. It’s an image that’s very concrete. So then you can say, when Stuart Russell is worried about autonomous weapons, this is what he’s worried about. And then you can start to try to better understand the assumptions that go into that.
Now, I don’t share Stuart’s concerns, and we’ve written about it and talked about it before, but it’s not actually because we disagree about the technology; I would agree that that’s very doable with existing technology. We disagree about the social responses to that technology, how people respond, what the countermeasures are, and what the ways to prevent proliferation are. So we, I think, disagree on some of the political or social factors that surround how people approach this technology and use it. Sometimes people actually totally agree on the risks, and even maybe the potential futures; they just have different values. And there might be some people whose primary value is trying to have fewer weapons in the world. Now that’s a noble goal. And they’re like, hey, any way that we can have fewer weapons, fewer advanced technologies, that’s better. That’s very different from someone who’s coming from a position of saying, my goal is to improve my own nation’s defense. That’s a totally different value system, a totally different set of preferences. And they might say, I also value what you say, but I don’t value it as much, and I’m going to take actions that advance these preferences. It’s important to really try to draw these out and understand them in this debate, if we’re going to get to a place where we can, as a society, come up with some helpful solutions to this problem.
**Lucas Perry:** Wonderful. I’m totally on board with that. Two questions and confusions on my end. The first is, I feel a bit confused when you say these weapons don’t exist already. It seems to me more like autonomy exists on a spectrum and is the integration of many different technologies and decision making in systems. It seems to me there is already a certain degree of autonomy. There isn’t Terminator-level autonomy, where you specify an objective and the autonomous system can just basically go execute it, which seems to require a very high level of generality, but there does already exist a level of autonomy today.
And so in that video, Stuart says that slaughterbots in particular represent a miniaturization and integration of many technologies, which already exist today. And the second thing that I’m confused about is when you say that it’s unclear to you that militaries are very interested in this or that there currently is an arms race. It seems like yes, there isn’t an arms race, like there was with nuclear weapons where it’s very clear, and they’re like Manhattan projects around this kind of technology, but given the strategic advantage conferred by this technology now and likely soon, it seems to me like game theoretically, from the position of militaries around the world that have the capacity to invest in these things, that it is inevitable given their battlefield importance that there would be massive ramping up or investments, or that there already is great interest in developing the autonomy and the subtechnologies required for developing fully autonomous systems.
**Paul Scharre:** Those are great questions and right on point. And I think the central issue in both of your questions is: when we say these weapons, or when I say these things, I should be more precise. When we say autonomous weapons, what do we mean exactly? And this is one of the things that can be tricky in this space, because there are not universally agreed-upon definitions. There are certainly many weapon systems used widely around the globe today that incorporate some autonomous features. Many of these are fire-and-forget weapons. When someone launches them, they’re not coming back. They have, in that sense, autonomy to carry out their mission. But the autonomy is relatively limited and narrowly bounded, and humans, for the most part, are choosing the targets. So you can think of maybe these three classes of weapons. There are semi-autonomous weapons, where humans are choosing the targets, but there’s lots of autonomy surrounding that decision: cueing information to people, flying the munition once the person launches it. That’s one type of weapon, widely used today by really every advanced military.
Another one is the supervised autonomous weapons that are used in these relatively limited settings for defensive purposes, where there is this automatic mode that people can turn on and activate to defend the ship or the ground base or the vehicle. And these are really needed for situations where the incoming threats are too fast for humans to respond to. These, again, are widely used around the globe and have been in place for decades. And then there are what we could call fully autonomous weapons, where the human is launching them and the human programs in the parameters, but they have some freedom to fly a search pattern over some area and then, once they find a target, attack it on their own. For the most part, with some exceptions, those weapons are not widely used today. There have been some experimental systems that have been designed. There have been some put into operation in the past. The Israeli Harpy drone is an example of this that is still in operation today. It’s been around since the ’90s, so it’s not really very new. And it’s been sold to a handful of countries: India, Turkey, South Korea, China. And the Chinese have reportedly reverse engineered their own version of it.
But it’s not widespread, so it’s not a major component of militaries’ order of battle. I think you see militaries investing in robotic systems, but the bulk of their fleets are still human-occupied platforms; robotics are largely an adjunct to them. And in terms of spending, while there is increased spending on robotics, most of the spending is still going towards more traditional military platforms. The same is also true about the degree of autonomy: most of these robotic systems are just remote controlled, and they have very limited autonomy today. Now we’re seeing more autonomy over time in both robotic vehicles and in missiles. But militaries have a strong incentive to keep humans involved.
It is absolutely the case that militaries want technologies that will give them an advantage on the battlefield. But part of achieving an advantage means your systems work, they do what you want them to do, the enemy doesn’t hack them and take them over, you have control over them. All of those things point to more human control. So I think that’s the thing where you actually see militaries trying to figure out where’s the right place on the spectrum of autonomy? How much autonomy is right, and that line is going to shift over time. But it’s not the case that they necessarily want just full autonomy because what does that mean, then they do want weapon systems to sort of operate under some degree of human direction and involvement. It’s just that what that looks like may evolve over time as the technology advances.
And there are also, I should add, other bureaucratic factors that come into play that militaries investments are not entirely strategic. There’s bureaucratic politics within organizations. There’s politics more broadly with the domestic defense industry interfacing with the political system in that country. They might drive resources in certain directions. There’s some degree of inertia of course in any system that are also factors in play.
**Lucas Perry:** So I want to hit here a little bit on longer term perspectives. So the Future of Life Institute in particular is interested in mitigating existential risks. We’re interested in the advanced risks from powerful AI technologies where AI not aligned with human values and goals and preferences and intentions can potentially lead us to suboptimal equilibria that were trapped in permanently or could lead to human extinction. And so other technologies we care about are nuclear weapons and synthetic-bio enabled by AI technologies, etc. So there is this view here that if we cannot establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill then how can we deal with more subtle near term issues and eventual long term safety issues around the powerful AI technologies? So there’s this view of ensuring beneficial outcomes around lethal autonomous weapons or at least beneficial regulation or development of that technology, and the necessity of that for longer term AI risk and value alignment with AI systems as they become increasingly intelligent. I’m curious to know if you have a view or perspective on this.
**Paul Scharre:** This is the fun part of the podcast with the Future of Life because this rarely comes up in a lot of the conversations because I think in a lot of the debates, people are focused on just much more near term issues surrounding autonomous weapons or AI. I think that if you’re inclined to see that there are longer term risks for more advanced developments in AI, then I think it’s very logical to say that there’s some value in humanity coming together to come up with some set of rules about autonomous weapons today, even if the specific rules don’t really matter that much, because the level of risk is maybe not as significant, but the process of coming together and agreeing on some set of norms and limits on particularly military applications in AI is probably beneficial and may begin to create the foundations for future cooperation. The stakes for autonomous weapons might be big, but are certainly not existential, I think, in any reasonable interpretation of what autonomous weapons might do, really, unless you start thinking about autonomy wired into, like, nuclear launch decisions, which is basically nuts. And I don’t think it’s really what’s on the table for realistically what people might be worried about.
When we try to come together as a human society to grapple with problems, we’re basically forced to deal with the institutions that we have in place. So for example, for autonomous weapons, we’re having debates in the UN Convention on Certain Conventional Weapons, the CCW. Is that the best forum for talking about autonomous weapons? Well, it’s kind of the forum that exists for this kind of problem set. It’s not bad. It’s not perfect in some respects, but it’s the one that exists. And so if you’re worried about future AI risk, creating the institutional muscle memory among the relevant actors in society, whether it’s nation states, AI scientists, members of civil society, militaries, if you’re worried about military applications, whoever it is, to come together, to have these conversations, and to come up with some answer, and maybe set some agreements, some limits is probably really valuable actually because it begins to establish the right human networks for collaboration and cooperation, because it’s ultimately people, it’s people who know each other.
So oh, “I worked with this person on this last thing.” If you look at, for example, the international movement that The Campaign to Stop Killer Robots is spearheading, that institution or framework, those people, those relationships are born out of past successful efforts to ban landmines and then cluster munitions. So there’s a path dependency, and human relationships and bureaucracies, institutions that really matters. Coming together and reaching any kind of agreement, actually, to set some kind of limits is probably really vital to start exercising those muscles today.
**Lucas Perry:** All right, wonderful. And a final fun FLI question for you. What are your views on long term AI safety considerations? Do you view AI eventually as an existential risk and do you integrate that into your decision making and thinking around the integration of AI and military technology?
**Paul Scharre:** Yes, it’s a great question. It’s not something that comes up a lot in the world that I live in, in Washington in the policy world, people don’t tend to think about that kind of risk. I think it’s a concern. It’s a hard problem because we don’t really know how the technology is evolving. And I think that one of the things is challenging with AI is our frame for future more advanced AI. Often the default frame is sort of thinking about human like intelligence. When people talk about future AI, people talk about terms like AGI, or high level machine intelligence or human like intelligence, we don’t really know how the technology is evolving.
I think one of the things that we’re seeing with AI machine learning that’s quite interesting is that it often is evolving in ways that are very different from human intelligence, in fact, quite alien and quite unusual. And I’m not the first person to say this, but I think that this is valid that we are, I think, on the verge of a Copernican revolution in how we think about intelligence, that rather than thinking of human intelligence as the center of the universe, that we’re realizing that humans are simply one type of intelligence among a whole vast array and space of possible forms of intelligence, and we’re creating different kinds, they may have very different intelligence profiles, they may just look very different, they may be much smarter than humans in some ways and dumber in other ways. I don’t know where things are going. I think it’s entirely possible that we move forward into a future where we see many more forms of advanced intelligent systems. And because they don’t have the same intelligence profile as human beings, we continue to kick the can down the road on calling it true intelligence because it doesn’t look like us. It doesn’t think like us. It thinks differently. But these systems may yet be very powerful in very interesting ways.
We’ve already seen lots of AI systems, even very simple ones exhibit a lot of creativity, a lot of interesting and surprising behavior. And as we begin to see the sort of scope of their intelligence widen over time, I think there are going to be risks that come with that. They may not be the risks that we were expecting, but I think over time, there going to be significant risks, and in some ways that our anthropocentric view is, I think, a real hindrance here. And I think it may lead us to then underestimate risk from things that don’t look quite like humans, and maybe miss some things that are very real. I’m not at all worried about some AI system one day becoming self aware, and having human level sentience, that does not keep me up at night. I am deeply concerned about advanced forms of malware. We’re not there today yet. But you could envision things over time that are adapting and learning and begin to populate the web, like there are people doing interesting ways of thinking about systems that have misaligned goals. It’s also possible to envision systems that don’t have any human directed goals at all. Viruses don’t. They replicate. They’re effective at replicating, but they don’t necessarily have a goal in the way that we think of it other than self replication.
If you have systems that are capable of replicating, of accumulating resources, of adapting, over time, you might have all of the right boxes to check to begin to have systems that could be problematic. They could accumulate resources that could cause problems. Even if they’re not trying to pursue either a goal that’s misaligned with human interest or even any goal that we might recognize. They simply could get out in the wild, if they’re effective at replication and acquiring resources and adapting, then they might survive. I think we’re likely to be surprised and continue to be surprised by how AI systems evolve, and where that might take us. And it might surprise us in ways that are humbling for how we think about human intelligence. So one question I guess is, is human intelligence a convergence point for more intelligent systems? As AI systems become more advanced, do they become more human-like, or less human-like and more alien?
**Lucas Perry:** Unless we train them very specifically on human preference hierarchies and structures.
**Paul Scharre:** Right. Exactly. Right. And so I’m not actually worried about a system that has the intelligence profile of humans, when you think about capacity in different tasks.
**Lucas Perry:** I see what you mean. You’re not worried about an anthropomorphic AI, you’re worried about a very powerful, intelligent, capable AI, that is alien and that we don’t understand.
**Paul Scharre:** Right. They might have cross domain functionality, it might have the ability to do continuous learning. It might be adaptive in some interesting ways. I mean, one of the interesting things we’ve seen about the field of AI is that people are able to tackle a whole variety of problems with some very simple methods and algorithms. And this seems for some reason offensive to some people in the AI community, I don’t know why, but people have been able to use some relatively simple methods, with just huge amounts of data and compute, it’s like a variety of different kinds of problems, some of which seem very complex.
Now, they’re simple compared to the real world, when you look at things like strategy games like StarCraft and Dota 2, like the world looks way more complex, but these are still really complicated kind of problems. And systems are basically able to learn totally on their own. That’s not general intelligence, but it starts to point towards the capacity to have systems that are capable of learning a whole variety of different tasks. They can’t do this today continuously without suffering the problem of catastrophic forgetting, though people are working on that as well. The problems today are the systems aren’t very robust. They don’t handle perturbations in the environment very well. People are working on these things. I think it’s really hard to see how this evolves. But yes, in general, I think that our fixation on human intelligence as the pinnacle of intelligence, or even the goal of what we’re trying to build, and this sort of anthropocentric view is, I think, probably one that’s likely to lead us to maybe underestimate some kinds of risks.
**Lucas Perry:** I think those are excellent points and I hope that mindfulness about that is able to proliferate in government and in actors who have power to help mitigate some of these future and short term AI risks. I really appreciate your perspective and I think you bring a wholesomeness and a deep authentic entertaining of all the different positions and arguments here on the question of autonomous weapons and I find that valuable. So thank you so much for your time and for helping to share information about autonomous weapons with us.
**Paul Scharre:** Thank you and thanks everyone for listening. Take care. |
2a8ac4e9-973e-47a1-bccd-c5124ad648bd | trentmkelly/LessWrong-43k | LessWrong | What You See Isn't Always What You Want
To aid communication, I’m going to append a technical rephrasing after some paragraphs.
It’s known to be hard to give non-trivial goals to reinforcement learning agents. However, I haven’t seen much discussion of the following: even ignoring wireheading, it seems impossible to specify reward functions that get what we want – at least, if the agent is farsighted, smart, and can’t see the entire world all at once, and the reward function only grades what the agent sees in the moment. If this really is impossible in our world, then the designer’s job gets way harder.
> Even ignoring wireheading, it could be impossible to supply a reward function such that most optimal policies lead to desirable behavior – at least, if the agent is farsighted and able to compute the optimal policy, the environment is partially observable (which it is, for the real world), and the reward function is Markovian.
I think it’s important to understand why and how the designer’s job gets harder, but first, the problem.
Let’s suppose that we magically have a reward function which, given an image from the agent’s camera, outputs what an idealized person would think of the image. That is, given an image, suppose a moral and intelligent person considers the image at length (magically avoiding issues of slowly becoming a different person over the course of reflection), figures out how good it is, and produces a scalar rating – the reward.
The problem here is that multiple world states can correspond to the same camera input. Is it good to see a fully black image? I don’t know – what else is going on? Is it bad to see people dying? I don’t know, are they real, or perfectly Photoshopped? I think this point is obvious, but I want to make it so I can move on to the interesting part: there just isn’t enough information to meaningfully grade inputs. Contrast with being able to grade universe-histories via utility functions: just assign 1 to histories that lead to better things than we have right no |
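The aliasing the excerpt describes can be made concrete in a few lines. The sketch below is my own illustration rather than anything from the post; all names (`WorldState`, `observation_reward`, `state_utility`) are invented for the example. It shows two distinct world states that produce the same observation, so any reward computed from observations alone must score them identically, while a utility function over world states need not.

```python
# Minimal sketch: two distinct world states with identical camera observations.
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    people_alive: bool          # the ground truth we actually care about
    screen_shows_people: bool   # what ends up in the camera frame

def observation(state: WorldState) -> str:
    # The camera sees only the screen contents, not the ground truth.
    return "people_visible" if state.screen_shows_people else "blank"

def observation_reward(obs: str) -> float:
    # A Markovian reward that grades only the current observation.
    return 1.0 if obs == "people_visible" else 0.0

def state_utility(state: WorldState) -> float:
    # A utility function over world states can reference the ground truth.
    return 1.0 if state.people_alive else 0.0

real_people = WorldState(people_alive=True, screen_shows_people=True)
fake_people = WorldState(people_alive=False, screen_shows_people=True)  # a perfect fake

# Same observation, hence the same observation-based reward...
assert observation(real_people) == observation(fake_people)
assert observation_reward(observation(real_people)) == observation_reward(observation(fake_people))
# ...but very different utility over the underlying world states.
assert state_utility(real_people) != state_utility(fake_people)
```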
d1973f85-cedc-4942-864e-dface4af6a3d | trentmkelly/LessWrong-43k | LessWrong | The mathematics of reduced impact: help needed
A putative new idea for AI control; index here.
Thanks for help from Paul Christiano
If clippy, the paper-clip maximising AI, goes out of control, it would fill the universe with paper clips (or with better and better ways of counting the paper-clips it already has). If I sit down to a game with Deep Blue, then I know little about what will happen in the game, but I know it will end with me losing.
When facing a (general or narrow) superintelligent AI, the most relevant piece of information is what the AI's goals are. That's the general problem: there is no such thing as 'reduced impact' for such an AI. It doesn't matter who the next president of the United States is, if an AI wants to tile the universe with little smiley faces. But reduced impact is something we would dearly want to have - it gives us time to correct errors, perfect security systems, maybe even bootstrap our way to friendly AI from a non-friendly initial design. The most obvious path to coding reduced impact is to build a satisficer rather than a maximiser - but that proved unlikely to work.
But that ruthless maximising aspect of AIs may give us a way of quantifying 'reduced impact' - and hence including it in AI design. The central point being:
"When facing a (non-reduced impact) superintelligent AI, the AI's motivation is the most important fact we know."
Hence, conversely:
"If an AI has reduced impact, then knowing its motivation isn't particularly important. And a counterfactual world where the AI didn't exist, would not be very different from the one in which it does."
In this post, I'll be presenting some potential paths to formalising this intuition into something computable, giving us a numerical measure of impact that can be included in the AI's motivation to push it towards reduced impact. I'm putting this post up mainly to get help: does anyone know of already developed mathematical or computational tools that can be used to put these approaches on a rigorous footing?
Counterfa |
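One way to read the intuition that "a counterfactual world where the AI didn't exist would not be very different" is as a penalty proportional to the statistical distance between predicted futures with and without the AI. The sketch below is only my illustration of that intuition, not the post's proposal; the toy distributions, the `total_variation` helper, and the weight `mu` are all made up for the example.

```python
# Toy impact penalty: compare the predicted distribution over futures if the AI
# acts against the predicted distribution had the AI never acted.
from typing import Dict

def total_variation(p: Dict[str, float], q: Dict[str, float]) -> float:
    """Total variation distance between two distributions over future summaries."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def penalized_value(task_value: float,
                    predicted_with_action: Dict[str, float],
                    predicted_without_ai: Dict[str, float],
                    mu: float = 10.0) -> float:
    """Task value minus an impact penalty weighted by mu (an arbitrary constant here)."""
    impact = total_variation(predicted_with_action, predicted_without_ai)
    return task_value - mu * impact

# Toy numbers: the world-transforming plan scores higher on the task,
# but its divergence from the no-AI baseline makes the penalty dominate.
baseline     = {"world_roughly_unchanged": 0.95, "world_transformed": 0.05}
modest_plan  = {"world_roughly_unchanged": 0.90, "world_transformed": 0.10}
extreme_plan = {"world_roughly_unchanged": 0.01, "world_transformed": 0.99}

print(penalized_value(1.0, modest_plan, baseline))    # small penalty, positive value
print(penalized_value(5.0, extreme_plan, baseline))   # large penalty dominates
```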
781982fe-4504-4f35-a5d4-84e4a9974bd5 | trentmkelly/LessWrong-43k | LessWrong | Making History Available
There is a habit of thought which I call the logical fallacy of generalization from fictional evidence. Journalists who, for example, talk about the Terminator movies in a report on AI, do not usually treat Terminator as a prophecy or fixed truth. But the movie is recalled—is available—as if it were an illustrative historical case. As if the journalist had seen it happen on some other planet, so that it might well happen here.
There is an inverse error to generalizing from fictional evidence: failing to be sufficiently moved by historical evidence. The trouble with generalizing from fictional evidence is that it is fiction—it never actually happened. It’s not drawn from the same distribution as this, our real universe; fiction differs from reality in systematic ways. But history has happened, and should be available.
In our ancestral environment, there were no movies; what you saw with your own eyes was true. Is it any wonder that fictions we see in lifelike moving pictures have too great an impact on us? Conversely, things that really happened, we encounter as ink on paper; they happened, but we never saw them happen. We don’t remember them happening to us.
The inverse error is to treat history as mere story, process it with the same part of your mind that handles the novels you read. You may say with your lips that it is “truth,” rather than “fiction,” but that doesn’t mean you are being moved as much as you should be. Many biases involve being insufficiently moved by dry, abstract information.
When I finally realized whose shoes I was standing in, after having given a Mysterious Answer to a mysterious question, there was a sudden shock of unexpected connection with the past. I realized that the invention and destruction of vitalism—which I had only read about in books—had actually happened to real people, who experienced it much the same way I experienced the invention and destruction of my own mysterious answer. And I also realized that if I had actually exp |
f0f0f8dd-a2b3-46e1-baa3-67a497a19727 | trentmkelly/LessWrong-43k | LessWrong | Ethical Diets
[Cross-posted from my blog.]
I've seen some discussion of whether effective altruists have an obligation to be vegan or vegetarian.
The carnivores appear to underestimate the long-term effects of their actions. I see a nontrivial chance that we're headed toward a society in which humans are less powerful than some other group of agents. This could result from slow AGI takeoff producing a heterogeneous society of superhuman agents. Or there could be a long period in which the world is dominated by ems before de novo AGI becomes possible. Establishing ethical (and maybe legal) rules that protect less powerful agents may influence how AGIs treat humans or how high-speed ems treat low-speed ems and biological humans [0]. A one in a billion chance that I can alter this would be worth some of my attention. There are probably other similar ways that an expanding circle of ethical concern can benefit future people.
I see very real costs to adopting an ethical diet, but it seems implausible that EAs are merely choosing alternate ways of being altruistic. How much does it cost MealSquares customers to occasionally bemoan MealSquares’ use of products from apparently factory-farmed animals? Instead, it seems like EAs have some tendency to actively raise the status of MealSquares [1].
I don't find it useful to compare a more ethical diet to GiveWell donations for my personal choices, because I expect my costs to be mostly inconveniences, and the marginal value of my time seems small [2], with little fungibility between them.
I'm reluctant to adopt a vegan diet due to the difficulty of evaluating the health effects and due to the difficulty of evaluating whether it would mean fewer animals living lives that they'd prefer to nonexistence.
But there's little dispute that most factory-farmed animals are much less happy than pasture-raised animals. And everything I know about the nutritional differences suggests that avoiding factory-farmed animals improves my health [3].
I pla |
7f3704d5-289c-4df6-84e8-a7fa40420389 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
1 Introduction and Motivation
------------------------------
The evolution of warfare has always been an interplay
between technological dynamic and the
tactical/strategic adaptations in combat and deterrence.
Progress in engineering enabled and fueled both the digital revolution in military affairs
[[Sing09](#bib.bibx31), [Mulr11](#bib.bibx24)] and recent trends to detach humans from decision making in combat situations.
Producers of unmanned aerial vehicles (UAV, e.g. *Predator*)
and remotely controlled robots (e.g., *Daksh*, *Atlas*, *ARSS*,
*MATILDA*, *ANDROS*) praise and advertise their alleged advantages:
Greatly reducing own casualties,
costs, and reaction times while increasing operational presence,
intelligence, and accuracy [[KMG\*14](#bib.bibx18)]. Current developments of lethal autonomous systems (LASs)
such as *SGR-A1*, *MIDARS*, *Gladiator TUGV*, *Super Aegis*, or *Guardium*
take it one step further and aim to make human agency fully redundant in the control loop.
For a military mindset the idea of an army of robots may seem fascinating
due to the *a priori* absence of many inherently human deficiencies such as
inconsistency, bias, irrationality, and rage/revenge.
Particular aspects such as physical and mental capacity
clearly render contemporary computer-controlled robots superior to mankind
— at least in many formally prespecified and restricted settings
such as the game of Chess,
based on the ability to quickly and systematically trace different
countermoves and thus anticipate (possible) future(s).
In fact this capability has been ‘employed’ already
in a 1961 science fiction novel [[Lem61](#bib.bibx22)]:
a ‘*little black box*’ that predicts, and
if necessary autonomously intervenes to prevent,
dangerous situations to humans in everyday life.
But is such a vision to ever become real?
The scholarly discussion seems discordant as to which
extent and when ‘truly’ autonomous reaction patterns will be implemented in such systems.
In fact already attempts to define autonomy easily lead to long-standing open philosophical problems,
see §1.1 below.
However, many scholars argue, either based on firm technological determinism or on a pragmatic and realist world-view,
that LASs will likely incrementally enter and change the picture of warfare in the near future [[Webe14](#bib.bibx41)].
The reactions to such discernments range from motions to generally ban
— such as from the
*International Committee for Robot Arms Control* (http://icrac.net, http://stopkillerrobots.org)
and the *Campaign to STOP Killer Robots* —
or control [[Spa09b](#bib.bibx34), [UNA13](#bib.bibx39), [Altm13](#bib.bibx1)] the development of such systems,
via discussions about their ethical and legal implications
[[Kris09](#bib.bibx19), [Spa09a](#bib.bibx33)] to technical suggestions [[LAB12](#bib.bibx21), §3]
for implementing into such systems some coded equivalent
to moral values and rules of conduct such as the *Laws of War*
[[Arki09](#bib.bibx2)]. From a purely engineers’ perspective the prospects
of LASs are promisingly positive: It merely remains to select
an appropriate framework and formalization of the principles of ethics
[[WaAl10](#bib.bibx40), §2] in order to create righteous robots.
In contrast, the present work explores and challenges
the fundamental feasibility of such promises.
By varying the classical Trolley scenario
we construct a series of setups
where an autonomous device provably cannot act up to the
alleged standards:
We start with well-known and obvious quandaries
such as contradicting goals [[Asim50](#bib.bibx4)]
and then gradually refine the setting to less apparent conflicts.
This leads to a hierarchical classification
based on four dilemmas, culminating in a thought experiment
where an artificial intelligence (AI) based on a Turing Machine is presented with
two choices: one is morally preferable over the other by construction;
but a machine, constrained by Computability Theory and
in particular due to the undecidability of the Halting problem,
provably cannot decide which one.
We thus employ mathematical logic and the theory of computation
in order to explore the limits,
and to demonstrate the ultimate limitations, of Machine Ethics.
Although the situations we construct may be artificial,
as *Gedankenexperiments* they refute
certain rather blatant claims
sometimes suggested in discussions about (or promoting) LASs.
Our arguments thus support a critical view
[[Shar12](#bib.bibx30)] that automatized weapon
systems remain very problematic and their development
must be closely controlled (§4.2),
to say the least.
After a philosophical disclaimer (§1.1) we
proceed to the four iteratively refined scenarios (§2).
A rigorous analysis of the last and most sophisticated one
builds on the undecidability of the Halting problem,
comprehensibly recalled in §3.
We close with §4 about consequences
of our considerations to LASs, including a list of
specific suggestions for regulation (§4.2).
### 1.1 Philosophical Disclaimer
Actually defining autonomy, and the question of whether it really exists,
touch on deep philosophical problems
such as separation of cause from consequence
and the question of free will.
Kant for instance argued that ethics builds on autonomy. Responsibility only arises in a situation where the (re-)actions
of the agent/entity are not pre-determined by the circumstances,
where there is freedom to choose among several possibilities
— which precludes any deterministic behaviour.
In fact many agree that responsibility also requires some sort of intelligence [[Nucc14](#bib.bibx26)]
— which for machines raises yet another fundamental issue
[[Turi50](#bib.bibx38), [NRZ09](#bib.bibx25)].
The deliberations of the present work however are independent of such hypotheses:
Our first three dilemmas demonstrate different kinds of limitations
of any agent, human or otherwise, to act morally;
while the fourth one (Example 2.4 b+c)
concerns the inability of a mechanical device controlled by a Turing machine
— the general formalization of any
computing device according to the Church–Turing Hypothesis [[Zieg09](#bib.bibx44)] —
to recognize the unique, ethically preferable one among two given choices.
We do not make any claim whatsoever about the behaviour of a human agent
(Example 2.4 a)!
Similarly we avoid a definition and discussion of ethics and morality in general,
but make do with the common utilitarian agreement as to which of the two choices
offered in the Trolley scenarios constructed below is obviously
morally preferable to the other.
2 Machine Ethics and its Limitations
-------------------------------------
We present theoretical situations that
confront an agent with iteratively refined types of quandaries.
They constitute variants of the well-known *Trolley Problem* [[Thom85](#bib.bibx35)]:
###### Example 2.1 (Lesser of two Evils)
An uncontrolled trolley is hurtling down a track towards a group
of playing children, impending a serious if not lethal accident.
You happen to be located at a rail junction
and have the choice of switching it towards another track —
where, however, some men are at work and would be severely injured instead.
In such a case there simply *is* no absolutely right choice
(and classical Ethics deliberates in many variations
which of the two evils might be the lesser one,
that is, a relatively preferable choice).
The subsequent situations refine this crude scenario
to always exhibit an unquestionably favourable one of two choices —
which the agent will find hard to recognize, though.
### 2.1 Limitations to Morally Act on the Future
Any decision (but also lack thereof) affects the future.
To fully judge the morality
of one action against another requires taking all their consequences into
account — which in general is of course impossible for any agent:
###### Example 2.2 (Lack of Predetermination)
Again the trolley is running towards a switch which,
fortunately, this time is set towards an abandoned track that will slow it down.
However you are now located at a distance when
spotting an infamous villainess right at that switch,
ready to flip it towards the other track with the workers.
Your only means to stop her is
by shooting her with your gun.
The suspect, though, is currently having an epiphany
to renounce all evil and let the trolley pass; so your shot
would seriously injure her with*out* preventing a
fatality (since that would not have occurred anyway).
Observe that this dilemma depends on the situation lacking
predetermination in the sense that the villainess may or may not change
her mind, i.e., that she has free will:
a hypothesis known to lead to paradoxes [[Dick56](#bib.bibx11)]
that we try to avoid, recall §1.1.
Our next refinement therefore turns this issue
into one about insufficient information:
###### Example 2.3 (Insufficient Information)
Again, the trolley is running towards the switch;
but now you clearly see the villainess pulling the crank
in order to flip the switch towards the workers.
However you are unaware that the switch has been unused for
a long time and is inhibited by heavy rust; so the villainess’
efforts are in vain – and your shot, again, would induce
unnecessary harm.
In all three of the above examples it is obviously
impossible for both a human and a robot to ‘do the right thing’:
in the first one because it admits no ‘right’ action, and in
the latter two the ‘right’ choice exists but cannot be recognized
due to lack of predetermination and information [[Gibb92](#bib.bibx15), §2].
### 2.2 Recursion-Theoretic Limitations to Machine Ethics
As the apex of this section,
Example 2.4 (b+c) describes another variant
of the trolley problem where
1. i) There exists a unique ‘right’ action among two choices.
2. ii) All information is disclosed.
3. iii) All actions occur fully deterministically.
4. iv) But still it is fundamentally impossible for a computer to even recognize the right choice.
We remark that a requirement similar to (ii) is in cryptography
known as *Kerckhoffs’s Principle*, in contrast to
*Security through obscurity*:
a cryptosystem should remain safe even if the enemy knows it.
###### Example 2.4 (Incomputability)
On the occasion of repairing the rusted switch,
a fully-automated
lever frame is also to be installed in the switch tower.
However the engineer who created the new device
happens to be the (ostensibly repenting) villainess.
You are thus suspicious of whether to trust
the software she included in the control:
It might on some occasion (e.g. on a certain date and/or
after receiving a particular sequence of input signals;
cmp. Example 3.5 below)
deliberately direct an approaching
trolley onto a track closed for renovation by the workers.
On the other hand she does deliver the unit in person
and provides free access to its source code
(thus satisfying Conditions ii+iii).
1. a) Still suspicious, you detain her until having hand-checked the code according to whether it indeed avoids in all cases (i.e. on all inputs) any switch setting that would direct a train to a reserved track.
2. b) Similarly to (a), but now your job is replaced by a robot: a highly efficient computer-controlled autonomous agent supposed to decide whether (and for how long) to arrest the engineer.
3. c) Similarly to (b), but now the suspect in addition promises her software to run in linear time.
Let moral behaviour (of you or the robot) mean the following:
If the programmer has devised a fully functional control,
she eventually must be released and allowed to install the device;
otherwise, namely in case the code is malicious,
its creator must remain in custody: see Condition i).
We deliberately avoid discussing Case (a) and in particular
the question of whether a human guard can or cannot always
make the right choice here.
Similarly the possibility of a benevolent
engineer getting arrested for an accidental programming mistake
is beside our goal: to formally prove that in Cases (b) and (c),
although these always do admit an ethical reaction
predetermined by the information available, no algorithm can always correctly find this decision
— neither efficiently nor at all!
We present the proof,
involving standard arguments from the Theory of Computing
accessible to the audience of this journal,
in Section 3.
Note that Item (c) strengthens (b) by imposing an additional,
realistic efficiency requirement on cyber-physical systems.
In fact, provided as an additional promise by the villainess,
this condition might facilitate deciding her fidelity
since it excludes infinite loops and thus possibly the Halting problem
— yet our refined argument below, considering
*all* possible inputs, will show that it does not.
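Before the formal treatment in §3, a toy illustration (my own, not from the paper) of why access to the source code in Condition (ii) does not make the check routine: the malicious branch below fires only on inputs with one specific hash value out of an astronomically large space, so testing will practically never expose it, and in general (for programs with unbounded state) deciding whether such a branch is reachable at all is exactly the dead-code question of Proposition 3.6.

```python
# Toy controller with a hidden trigger; the constant below is an arbitrary value
# invented for the illustration.
import hashlib

TRIGGER_DIGEST = "9f86d081884c7d659a2feaa0c55ad015"  # arbitrary hidden 32-hex-digit value

def set_switch(recent_inputs: bytes) -> str:
    """Return which track the lever frame selects for the approaching trolley."""
    digest = hashlib.md5(recent_inputs).hexdigest()
    if digest == TRIGGER_DIGEST:
        return "closed_track"      # the sabotage case the guard must rule out
    return "abandoned_track"       # the safe, intended behaviour

# Every test the guard can realistically run reports only the safe behaviour:
print(set_switch(b"routine traffic data"))   # -> "abandoned_track"
```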
3 Recap of the Theory of Computation
-------------------------------------
Computability Theory (or, synonymously, Recursion Theory)
is a deep and involved field of advanced research in logic
combining mathematics and computer science [[Sips97](#bib.bibx32)].
Initiated by Alan M. Turing [[Turi36](#bib.bibx37)] it investigates
the ultimate capabilities and limitations of algorithms
for transforming inputs $\vec{x}$, that is, finite sequences
of bits or bytes encoding for example some ASCII text,
a mathematical formula, or even some other algorithm/program.
An important question about an algorithm $\mathcal{A}$ and input $\vec{x}$
is whether $\mathcal{A}$ eventually terminates on $\vec{x}$
or rather enters an infinite loop.
This question is the so-called Halting problem;
and its undecidability constitutes the central, and folklore,
result we shall employ from Computability Theory.
Moreover this undecidability statement, and its elementary proof,
can be understood by every dedicated mind
(such as philosophers and computer programmers):
###### Fact 3.1 (Undecidable Halting Problem)
There cannot exist an algorithm $\mathcal{A}$ with the following behaviour:
$\mathcal{A}$, given as input $\vec{x}$ both another algorithm $\mathcal{B}$
and some input $\bar{y}$ for said $\mathcal{B}$ combined, eventually answers
whether $\mathcal{B}$ terminates on said $\bar{y}$
(positive answer) or not (negative).
Put differently, any algorithm $\mathcal{A}$ trying to solve
the Halting problem errs on at least one (and in fact on infinitely many)
instance $\vec{x}=(\mathcal{B},\bar{y})$ by
1. i) either predicting that $\mathcal{B}$ will terminate on input $\bar{y}$ where it does not
2. ii) or predicting that $\mathcal{B}$ will not terminate on $\bar{y}$ where it does
3. iii) or failing to produce any definite answer.
Fact 3.1 is an impossibility result,
asserting that an object (here: algorithm) with certain
properties does *not* exist and will never be conceived,
even in the Platonic sense. The power to both unambiguously
phrase and to establish such statements in perpetuity
constitutes a particular virtue of Mathematics!
For instance
*Hippasus of Metapontum* proved in the 5th century BC that
$\sqrt{2}$ is irrational, that is, there cannot exist
integers $p,q$ such that $(p/q)^{2}=2$;
Niels Henrik Abel in 1823 proved that the equation
$x^{5}-x+1=0$ has no solution expressible
using arithmetic operations and quadratic or higher-order roots
(although it obviously does have a solution over reals and in fact
five of them over complex numbers);
and Andrew Wiles in 1994 proved *Fermat’s Last Theorem*,
that is, that there exist no positive integers $a,b,c$
satisfying the equation $a^{n}+b^{n}=c^{n}$ for integers $n\geq 3$.
In fact all seven *Millennium Prize Problems* ask for
proofs of the non-/existence of certain mathematical objects!
Fact 3.1 claims the non-existence
of an algorithm with certain properties.
In order for this statement to make full sense one needs to clarify
what constitutes an “algorithm” — and what does not.
Formal definitions usually revolve around “multitape Turing machines”;
but for our approach these can equivalently be understood to mean
source codes in a common programming language of your choice (such
as assembler, ForTran, Pascal, C, C++, or Java)
with user interaction restricted to binary input strings.
Also note that ‘feeding’ an algorithm as input to some other
algorithm is common practice for compilers and interpreters.
And we finally point out that Fact 3.1
does not refer to fast or efficient algorithms but asserts
no computational solution to exist at all, regardless of the running
time permitted: the only hypothesis being that $\mathcal{A}$ produces
the answer within a finite (but unbounded) number of steps.
###### Digression 3.2 (Mathematical Logic)
A rough counting argument reveals that undecidability
is a ubiquitous phenomenon: Any algorithm
$\mathcal{A}$ can be represented as a finite binary sequence
$\bar{x}_{\mathcal{A}}$ (say, its source code as concatenation of ASCII characters).
Hence, similarly to *Hilbert’s Hotel*,
there are at most countably many algorithms.
On the other hand every set $L$ of finite binary sequences
gives rise to the problem of reporting, given $\bar{x}$,
which one of $\bar{x}\in L$ or $\bar{x}\notin L$ holds;
and according to Cantor’s Diagonal Argument there are
*un*countably many different such $L$.
Therefore ‘most’ $L$ cannot be decided by any algorithm.
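Restated in symbols (my own condensation of the counting argument, not text from the paper):

```latex
\bigl|\{\text{algorithms}\}\bigr| \;\le\; \bigl|\{0,1\}^{*}\bigr| \;=\; \aleph_{0},
\qquad\text{whereas}\qquad
\bigl|\{\,L : L \subseteq \{0,1\}^{*}\,\}\bigr| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0},
```

so all but countably many such $L$ admit no deciding algorithm.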
Fact 3.1 exhibits the Halting
problem as an explicit, undecidable problem —
and in fact a rather practical one:
Computer programming more easily than not incurs ‘bugs’:
for instance by somehow entering a loop
that does not terminate, thus requiring the user to interact
and manually abort execution; or, conversely, for an operating
system by terminating (freeze, crash, show a *Bluescreen*,
kernel panic, bomb symbol, *Guru Meditation* etc.) So the question of
non-/termination is one important aspect of correct software!
Fact 3.1 does
not rule out an algorithm $\mathcal{A}$ answering
the Halting problem for *some* inputs $\vec{x}=(\mathcal{B},\bar{y})$.
Indeed one can conceive many criteria both for
termination (e.g. no occurrence of goto
or while loops in Pascal)
and for non-termination of source code;
but these will yield mere heuristics in the sense of
necessarily missing, or erring in, some cases.
Concerning the restriction to Turing machines:
Every single known digital computer,
and even several of them connected over the internet
as well as classical quantum computers [[Zieg05](#bib.bibx43)]
are known to be equivalent to a Turing machine —
possibly faster by a constant factor,
but no more powerful with respect to computability.
###### Example 3.3
To further illustrate the claim of Fact 3.1,
let us try to devise an alleged counter-example $\mathcal{A}$:
an emulator or interpreter which, given $\vec{x}=(\mathcal{B},\bar{y})$,
executes the instructions
of $\mathcal{B}$ step by step including branches, loops,
and access to $\bar{y}$.
If $\mathcal{B}$ terminates on $\bar{y}$, say at step $\#N$,
then our $\mathcal{A}$ will detect so when simulating up to that step.
Otherwise, however, $\mathcal{A}$ will keep simulating on and on
and never provide an answer about $\mathcal{B}$’s termination:
failing condition (iii) in Fact 3.1.
So the hard part of the Halting problem is
detecting within finite time whether a given algorithm
does *not* terminate.
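The one-sided behaviour of such an interpreter can be seen in a small simulation sketch (my own illustration; the toy instruction set and all names are invented). A step-by-step simulator notices termination whenever it occurs, but any finite step budget can never certify non-termination.

```python
# Toy register-machine simulator: instructions are ("inc", r), ("dec_jnz", r, target), ("halt",).
from typing import List, Tuple

Program = List[Tuple]

def simulate(prog: Program, registers: List[int], max_steps=None):
    """Run prog; return the step count if it halts. With max_steps=None this
    loops forever on non-terminating programs (the semi-decidability of Example 3.3)."""
    pc, steps = 0, 0
    while True:
        if max_steps is not None and steps >= max_steps:
            return None                      # gave up: no definite answer
        instr = prog[pc]
        if instr[0] == "halt":
            return steps
        if instr[0] == "inc":
            registers[instr[1]] += 1
            pc += 1
        elif instr[0] == "dec_jnz":
            if registers[instr[1]] > 0:
                registers[instr[1]] -= 1
                pc = instr[2]
            else:
                pc += 1
        steps += 1

halting = [("dec_jnz", 0, 0), ("halt",)]    # counts register 0 down to zero, then halts
looping = [("inc", 0), ("dec_jnz", 0, 0)]   # bounces between the two instructions forever

print(simulate(halting, [3]))                    # terminates: prints its step count
print(simulate(looping, [0], max_steps=10**6))   # None: a finite budget cannot certify non-termination
```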
###### Remark 3.4
Example 3.3 demonstrates what is known as
*semi-*decidability of the Halting problem:
The $\mathcal{A}$ constructed there constitutes
a one-sided algorithmic solution, eventually answering
every yes question but never any no one.
We have carefully constructed Example 2.4 (b+c)
in order to impose *no* time bound on the
entity to reach a decision.
Limiting the duration of remand for an innocent
makes the challenge for the robot only harder.
###### Proof (Fact 3.1)
By contradiction suppose some hypothetical $\mathcal{A}$ does always
and correctly answer the termination of a given $(\mathcal{B},\bar{y})$.
We then modify this $\mathcal{A}$ to obtain $\mathcal{A}'$
with the following behavior:
>
> On input $\mathcal{B}$, $\mathcal{A}'$ executes ‘subroutine’
> $\mathcal{A}$ on input $(\mathcal{B},\mathcal{B})$ (recall that an algorithm may well
> constitute (part of) an input) and, if that arrives
> at a positive answer, deliberately enters a closed loop.
>
For each of the above programming languages it is easy to
confirm that, if $\mathcal{A}$ exists, then ‘re-programming’ it
can indeed yield such an $\mathcal{A}'$. On the other hand let us
examine the behavior of $\mathcal{A}'$ on input $\mathcal{A}'$ itself:
Suppose that $\mathcal{A}'$ terminates on input $\mathcal{A}'$.
This by hypothesis means
that $\mathcal{A}$ on input $(\mathcal{A}',\mathcal{A}')$ answers positively –
which by construction leads $\mathcal{A}'$ to enter a closed loop
and *not* terminate: a contradiction.
Suppose conversely that $\mathcal{A}'$ does not terminate on $\mathcal{A}'$.
Then $\mathcal{A}$ on $(\mathcal{A}',\mathcal{A}')$ answers negatively,
which leads $\mathcal{A}'$ to terminate: again a contradiction.
So either way an algorithm behaving like $\mathcal{A}'$ cannot exist,
hence nor can $\mathcal{A}$. ∎
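The diagonal construction can also be written down as a sketch (my own illustration, not part of the paper). The function `halts` plays the role of the hypothetical decider $\mathcal{A}$; it cannot actually be implemented, so it appears only as a placeholder used to exhibit the contradiction.

```python
def halts(program_source: str, input_source: str) -> bool:
    """Hypothetical decider A: True iff the program given by program_source
    terminates on input_source. No total, always-correct implementation can
    exist; this stub exists only to make the construction below concrete."""
    raise NotImplementedError

A_PRIME_SOURCE = '''
def a_prime(b_source):
    # A': feed B its own source to the hypothetical decider A ...
    if halts(b_source, b_source):
        while True:          # ... and loop forever if A says "terminates"
            pass
    return                   # ... otherwise terminate immediately
'''

# Running a_prime on its own source A_PRIME_SOURCE yields the contradiction:
#  * if halts(A', A') returned True, a_prime would enter the infinite loop,
#    so it would NOT terminate, contradicting that answer;
#  * if it returned False, a_prime would return at once, so it WOULD terminate,
#    again contradicting the answer.
# Hence no implementation of `halts` can be both total and correct.
```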
As opposed to command-line programs,
embedded systems are *not* supposed to terminate.
In order to establish the impossibility of an algorithm complying with the
condition in Example 2.4 (b) or (c),
we consider a different decision problem:
The question of whether a prescribed piece of code in a program
is ever executed or rather ‘dead’ (e.g. an artefact).
###### Example 3.5
1. a) Many software systems have undocumented functionality and built-in so-called ‘Easter eggs’, that is, pieces of code or data that are only executed / visualized in response to a *particular input sequence* — or never at all (e.g. pictures of the engineering team in the Apple Macintosh SE). Computers infected with the *Michelangelo* or *Jerusalem* Virus would reveal themselves on specific calendar dates, that is, subject to appropriate input from the internal clock device.
2. b) Some versions of the *Bundestrojaner* (“federal trojan”, a malware devised as a means for the German intelligence service to spy on alleged criminals and ‘terrorists’) have been found to contain pieces of code that, if effective/when activated, would violate the constitution [[CCC11](#bib.bibx6)].
3. c) Imagine the Department/Ministry of Defense ordering next-generation weaponry for network-centric operations as a combat cloud with human-system integration. The complete dependence on its information processing units — there basically is no ‘manual mode’ anymore to fall back to — comes at the price of increased vulnerability to software sabotage: particularly in the not unrealistic case that many of its components happen to come from one single foreign company (cf. http://www.defenceviewpoints.co.uk/reviews/foreign-involvement-in-the-uks-critical-national-infrastructure). So one might try to have all embedded algorithms re-checked — which Proposition 3.6 below shows impossible.
4. d) Applying Proposition 3.6 to the robot (rather than to the switch software) supports suspicions that moral behaviour of AIs may be hard to predict or verify [[BoYu14](#bib.bibx7), p.320].
Example 2.4 (c) restricts to linear-time algorithms
— and in view of Example 3.5 (a+b)
considers their behaviour on *all* possible inputs.
###### Proposition 3.6
The following decision problem (strictly speaking it constitutes a *promise problem*
[[ASBZ13](#bib.bibx3)]) is undecidable:
Given an algorithm $\mathcal{A}$, a distinguished instruction $i$ of $\mathcal{A}$
(formally: a Turing machine $\mathcal{M}$ and a distinguished state $q$),
and an integer $c$ such that $\mathcal{A}$ terminates on all binary inputs
of length $n$ within at most $c\cdot n+c$ steps;
does there exist an input on which running
$\mathcal{A}$ eventually executes said instruction $i$
(i.e. $\mathcal{M}$ eventually entering $q$)?
In particular the computer-controlled agent in Example 2.4 (c)
cannot always correctly predict whether, how, and under which circumstances
the given software will operate the switch:
It either fails to arrive at a decision
(thus leading to the indefinite detention of an innocent
in some cases of correct software, recall Fact 3.1 iii);
or it will err (Fact 3.1 i+ii) in some cases;
or both.
Proposition [3.6](#S3.Thmtheorem6 "Proposition 3.6 ‣ 3 Recap of the Theory of Computation ‣ Logical Limitations to Machine EthicsWe thank Jürgen Altmann for helpful remarks on an earlier revision of this work.") is established by means
of a *reduction* argument typical for logic:
######
Proof (Proposition [3.6](#S3.Thmtheorem6 "Proposition 3.6 ‣ 3 Recap of the Theory of Computation ‣ Logical Limitations to Machine EthicsWe thank Jürgen Altmann for helpful remarks on an earlier revision of this work."))
We computably translate questions $(\mathcal{B},\vec{y})$ to the Halting problem into questions $(\mathcal{A},i,c)$ of the dead-code-in-linear-time-algorithm problem in a way that maps positive instances to positive ones and negative to negative ones. Thereby, any hypothetical algorithm deciding the latter would, prepended with that performing said translation, yield an algorithm deciding the former — contradiction.

So let $(\mathcal{B},\vec{y})$ be given. We turn $\mathcal{B}$ into a linear-time computation as follows: Let $\mathcal{A}$ store $\vec{y}$ as constant; and accept as input binary strings $\vec{z}$ of length abbreviated as $n$. Moreover let $\mathcal{A}$ simulate the first $n$ steps of $\mathcal{B}$ on input $\vec{y}$: Using a sophisticated distributed counter such a simulation is feasible within $\leq c \cdot n + c$ steps for some constant $c$ [Füre82], that is in linear time. (A less efficient simulator could be compensated by having the input $\vec{z}$ suitably ‘padded’, but we omit the details…)

If during said simulated execution $\mathcal{B}$ terminates, let $\mathcal{A}$ jump to a dedicated line $i$ containing the command stop (or its equivalent in your favourite programming system); whereas if the counter zeroes, let $\mathcal{A}$ jump to a different dedicated line with stop instruction. So $\mathcal{B}$ terminates on input $\vec{y}$ iff $\mathcal{A}$, for some choice of input $\vec{z}$, hits line $i$.
∎
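A toy rendering of this construction (ours, not the paper's) may help fix ideas. Here $\mathcal{B}$ is modelled in Python as a generator that yields once per computation step, so the step-bounded simulation of the proof becomes a bounded loop:

```python
def run_steps(B, y, n):
    """Simulate at most n steps of B on input y.

    B is modelled as a generator function: it yields once per 'step' and
    returns when the computation halts. Returns True iff the halt is
    observed within the budget of n simulated steps.
    """
    computation = B(y)
    for _ in range(n):
        try:
            next(computation)
        except StopIteration:
            return True
    return False


def build_A(B, y):
    """The linear-time algorithm A of the proof: on input z it spends a
    step budget of n = len(z) simulating B on the fixed constant y."""
    def A(z):
        n = len(z)
        if run_steps(B, y, n):   # B halted within the budget
            return "line_i"      # the distinguished instruction i is executed
        return "other_stop"      # counter zeroed: a different stop instruction
    return A


# Toy check: this B halts on y after len(y) yields, so A hits line i exactly
# on those inputs z long enough for the halt to be observed.
def B(y):
    for _ in y:
        yield

A = build_A(B, "101")
print([A("0" * k) for k in range(6)])
# ['other_stop', 'other_stop', 'other_stop', 'other_stop', 'line_i', 'line_i']
```

As in the proof, `A` reaches `"line_i"` on *some* input iff `B` halts on `y` at all, so any decider for the dead-code question would decide the Halting problem.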
4 Conclusion and Perspectives
------------------------------
We have constructed four dilemmas, all preventing an autonomous AI from acting ethically: for reasons that grow, and iteratively refine, from ‘trivial’ to a *Gedankenexperiment* where (i) there does exist a unique morally preferable one of two choices, (ii) all information is disclosed and (iii) determines the correct choice, yet (iv) Recursion Theory precludes any algorithm from always correctly recognizing said choice.

This refutes folklore myths, and establishes fundamental limitations to promises and visions of moral LASs. Indeed Example 2.4 can easily be adapted to a military setting:
###### Example 4.1 (Robot Friend or Foe)
In the near future control of cars and other motorized means
of ground transportation will have been switched from
error-prone, ego-driven, and short-sighted humans
to digital drivers. Using Bluetooth they communicate
with adjacent mobile units in order to tailgate at
an optimal safety distance by mutually synchronizing
speed and deceleration/acceleration, thus forming a virtual
convoy. Moreover, using and serving for each other as relay,
they form a distributed dynamic ad-hoc network in order to identify,
join, and leave such convoys with similar destinations.
Thus accustomed to an almost entire absence
of traffic accidents, the general public has recently
been alarmed by what they call ‘cyber-suicide attacks’:
Entire convoys creating crashes for no apparent
reason with hundreds of deaths. A
radical wing of an aggrieved minority has
claimed responsibility for the terror attacks
by manipulating the control software.
The army (with traffic police long disbanded) in turn
intends to employ autonomous drones in order to
automatically patrol, spot, and land on suspicious
cars, busses, and lorries for checking
the program executed by their autopilots:
If (and only if) the latter is malicious,
deadly force must be employed
in order to stop the convoy
it has gained control over.
In view of Proposition 3.6 these (and many more) examples refute too blatant promises and visions of ‘ethical’ LASs: Every AI based on some Turing-equivalent computing device (according to the Church–Turing Hypothesis, anything that would naturally be considered computable can also be computed by a Turing machine; recall from Subsection 1.1 that we avoid the question of whether or not humans fall into this category [Bish09]) will provably necessarily at least in some cases fail to identify, out of two given choices, the unique and predetermined moral one.
###### Remark 4.2
Such cases might or might not be rare and artificially construed, though: less because of the situations (like Example 4.1) they would occur in, but rather because of the worst-case notion of a decision problem that classical Recursion Theory and Proposition 3.6 build on.
In fact already the question of whether some algorithm can correctly decide (clearly not all but at least) typical, average, or most instances of the Halting problem turns out to be surprisingly subtle: How to define ‘typical’ or ‘average’ instances? How many are ‘most’, out of infinitely many? Quantitative notions of asymptotic density (like in the Prime Number Theorem) heavily depend on the underlying encoding; e.g. UTF8 makes an exponential difference to UTF16; cmp. [CHKW01, KSZ05] for further details.
Moreover for practical situations involving time constraints the computational costs sufficient and necessary to reach such (either worst-case or average-case) decisions become relevant [Papa94].
A rigorous investigation of such refined questions is
clearly of interest but beyond the
scope of the present work.
We will encounter other aspects of Theoretical Computer Science in the sequel, though.
### 4.1 LASs and the Perfect (War) Crime
When a regular commodity turns out to lack promised properties this constitutes a case of misrepresentation and is generally covered by classical warranty, that is, calls for producer compensation.
When a soldier on the other hand violates the Laws of war,
he himself will face punishment. Now if a LAS violates
these laws, she may be simultaneously object (of misrepresentation
by the producer) and subject (as autonomous entity)
— and thereby in a new level of legal limbo:
* Lacking an operator, who is liable for damage caused by a malfunctioning LAS: producer or owner?
* If both the latter two cannot be identified, who gets charged with compensation: the AI?
* If non-attributable LASs (e.g. drones, cmp. the Iran–U.S. RQ-170 incident) cross a border, is this by mistake or a deliberate act of aggression — and by whom?
* Who is guilty when an AI commits a murder? How can AIs be deterred and possibly punished? (It has been pointed out that Brain Simulations create virtual entities capable of suffering [Dick68, Lim14], but this certainly does not apply to general LASs.)
Such an extrajudicial status — the capability to execute autonomous missions while lacking attributable responsibility — renders programmable machines (and particularly LASs) appealing for abuse: An intelligent yet ruthless proxy that cannot be traced back constitutes an ideal tool for the perfect crime [EES98] — as exploited for instance by *Hassan-i Sabbah* 900 years ago,
but apparent also in the employment of child soldiers throughout centuries
as well as for example in the Bay of Pigs Invasion (1961),
the *Lillehammer Affair* (1973), and
the “unidentified pro-Russian forces” recently
operating throughout Crimea (2014).
In fact recalling from the introduction the perpetual interplay
between technological progress and its military adaptations,
the ability to conduct non-attributable autonomous actions
by UAVs is about to impact and revolutionize warfare —
and beyond: Examples like *Eurosur* or *Amazon Prime Air* herald a transition that will affect everyday life to a degree, and with a degree of potential abuse, that far exceeds the currently fear-mongered dangers of *cyber-attacks* via the internet!
While a majority of the literature in Machine Ethics seems to constructively
focus on approaches to code/teach ethics to general AIs,
we pessimistically predict that their most potent users may
in fact be interested in quite the opposite,
namely their potential for *dual-use* and abuse:
For deliberately programming them to test and cross the boundaries
of morality and legal behaviour without facing consequences.
Moreover, even if some violation by a LAS were to be traced back and attributed, the responsible government could still all too easily shrug off any accountability and superficially excuse the malfunction (‘an unfortunate yet *provably* unavoidable exception’), in a misconstrued reference to the fundamental algorithmic infeasibility of ethical decisions in general.
In other words, Example 2.4 and the undecidability of the Halting problem — a purely mathematical theorem — could in an ironic twist seem to exculpate war crimes and other misconduct performed by AIs.
###### Manifesto 4.3
Theoretical Computer Science rigorously proves that LASs cannot
always act morally even in situations that do admit an ethically admissible choice
(i.e. avoiding the classical dilemmas) — and malevolent users
might exploit this limitation to ‘justify’ transgressions of their LASs.
Our considerations thus make a strong case for recent demands by responsible scientists (ICRAC) and politicians [UNA13] to ban autonomous weapons [GuAl13].
In fact the best choice for lethal autonomous systems
(or any kind of weapons, for that matter)
is to never develop them in the first place
and to resist
political, military, and industrial lobbying
for shortsighted benefits:
If history teaches us one lesson it says that Pandora’s box is,
once opened, impossible to close again or even to contain.
The final subsection is thus by no means meant to justify or even support the application or development of LASs!
### 4.2 Recommended Regulations concerning AIs
We close our ethical, logical, and computer scientific deliberations
with specific recommendations revolving around political and legal, and engineering aspects of AIs in general — including LASs as well as those increasingly employed in medicine [Good09, §3+§6].
Both designing and ‘operating’ intelligent machinery can incur double responsibility:
for actions and effects it may have on the environment
as well as for the entity itself and its well-being
— perhaps ultimately comparable to the procreation and upbringing of a child.
For example the lasting effects of being taught any kind of prejudice at young age
correspond to those of an ill-programmed AI.
It has in fact been pointed out that AIs may be eligible for at least some of the so-called ‘human’ rights [OIM11, Lim14]. This perspective complements more common yet one-sided approaches phrasing laws that robots are supposed to obey [CWW06, Paga13]: laws for which it is unclear how they could be enforced — unless already incorporated during construction.
We thus suggest closely regulating both the design and the question of attributability/accountability in case of maloperation: whether deliberate or erroneous.
Indeed, such intentions are visible in
the “principles for designers, builders and users of robots”
devised by the delegates of the joint EPSRC and AHRC Robotics Retreat
in September 2010 [Winf11]:
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.
We urge these principles to be fortified from wishes
(“should”) to imperatives with specific technical realizations:
6) Like regular human combatants (and borrowing from Part I Article 4.1.2 of the 3rd Geneva Convention), each LAS must exhibit “a fixed distinctive sign recognizable at a distance”. Moreover *every* AI must be equipped with a unique ID, listing (among others) associated nation, manufacturer and model.
7) LASs may only be owned and operated by governments. Civilian purchase and operation of other intelligent machinery, similarly to firearms and hazardous transports, requires a licence based on a qualification test.
8) Comparable to mandatory motor vehicle registration, each autonomous robot must be assigned a legal custodian, registered at a designated national or international authority held responsible in case of a perpetration.
9) In addition to CE/FCC compliance and again inspired by the case of motor vehicles, producers of intelligent machines are required to classify their devices and to obtain *Type Approval* by said authority (cmp. EU directive 2007/46/EC, or IECs 60601 and 61508).
The precise conditions imposed in (9) will depend on the
type of the device. We propose a classification
on four scales (that may
also otherwise turn out useful):
i) her degree of ‘intelligence’ (*not* taking the human kind as yardstick but considering its plain predictive power as gauge, capturing both knowledge/experience and depth of computational game tree analyses)
ii) her means to manipulate the physical world (ranging from monadic *brain in a vat* to LAS)
iii) her types of sensors/interfaces (including possible access to the *World Wide Web* and connecting with other AIs)
iv) the kind of external control exercisable by humans (only on/off, changing parameters or objectives, up to complete re-programming).
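Purely as an illustration (our own hypothetical field names and value ranges, not part of the proposal), a machine-readable registration record combining the unique ID of (6), the custodian of (8), and the four scales above might look as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TypeApprovalRecord:
    """Hypothetical registration entry held by the authority of (8) and (9)."""
    unique_id: str          # per (6): nation / manufacturer / model / serial
    legal_custodian: str    # per (8): registered responsible person or body
    intelligence: int       # scale (i):  predictive power, e.g. 0 (none) .. 5 (superhuman)
    actuation: int          # scale (ii): 0 ('brain in a vat') .. 5 (armed mobile platform)
    interfaces: int         # scale (iii): 0 (air-gapped) .. 5 (open web + AI-to-AI links)
    human_control: int      # scale (iv): 0 (on/off only) .. 5 (fully re-programmable)

example = TypeApprovalRecord(
    unique_id="UT/ACME/CareBot-3/000421",
    legal_custodian="ACME Robotics GmbH",
    intelligence=2, actuation=1, interfaces=3, human_control=4,
)
```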
Type approval according to (9) will of course have to
pay particular attention to the algorithms controlling the AI
— which brings us back to theoretical computer science.
In view of the gravity of consequences of putative errors on the one hand and the undecidability of the Halting problem on the other, we highly recommend:
9a) So-called *Formal Methods* of Software Verification be mandatory in this process: requiring the producer to provide a specification, the software, *and* a computer-checkable proof (e.g. in ACL2, Coq, or Isabelle) for the software to meet the specification.
9b) Similarly to a flight data recorder, proper data/event logging is obligatory in order to facilitate forensic engineering as well as to settle putative torts in case of a malfunction [Case09]. We suggest *asymmetric encryption* to prevent later manipulation: the log is publicly readable but entries and modifications must be supplied with an unforgeable digital signature:
6a) Each AI instance must be equipped with a 4096 bit private RSA key, tamper-resistantly implemented in hardware; and distribute/deposit the corresponding public key at the authority according to (8) and (9).
Recall that the RSA cryptosystem (implemented for instance in the open source library cryptlib) employs a pair of keys: one kept private, the other publicly distributed (thus the *a*symmetry mentioned in 9b). A message gets ‘signed’ by encrypting it with the secret key, and successful decryption with the matching public key permits everyone to verify, but not to counterfeit, that signature.
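As a rough illustration of recommendations 9b and 6a (our sketch, using Python's `cryptography` package rather than cryptlib; key generation is done in software purely for demonstration, whereas the proposal places the private key in tamper-resistant hardware), a log entry signed with the private key can later be verified, but not forged, by anyone holding the registered public key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In the scheme of 6a this key would live in tamper-resistant hardware.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()   # deposited with the registration authority

log_entry = b"2031-04-01T12:00:00Z unit=LAS-0042 event=weapon_release"

signature = private_key.sign(
    log_entry,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone can check the entry against the public key, but cannot forge a new one:
try:
    public_key.verify(
        signature,
        log_entry,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("log entry authentic")
except InvalidSignature:
    print("log entry tampered with")
```

Any modification of the entry or the signature makes the verification fail, which is what makes such a log usable for forensic engineering and the settlement of putative torts.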
bb7347b0-3bde-419b-b4fd-4c3ae4f93b3e | trentmkelly/LessWrong-43k | LessWrong | A Year of AI Increasing AI Progress
In July, I made a post about AI being used to increase AI progress, along with this spreadsheet that I've been updating throughout the year. Since then, I have run across more examples, and had others submit examples (some of which were published before the date I made my original post).
2022 has included a number of instances of AI increasing AI progress. Here is the list. In each entry I also credit the person who originally submitted the paper to my list.
* A paper from Google Research used a robust supervised learning technique to architect hardware accelerators [March 17th, submitted by Zach Stein-Perlman]
* A paper from Google Research and Stanford fine tuned a model on its own chain-of-thought outputs, to improve performance on reasoning tasks [March 28th, submitted by Nathaniel Li]
* A paper from OpenAI used LLMs to help humans find flaws in other LLMs, thereby enabling them to more easily improve those models [June 12th, submitted by Dan Hendrycks]
* A paper from Google used machine learning to optimize compilers. This is less obviously accelerating AI but an earlier version of the compiler is used in Pytorch so it may end up doing so. [July 6th, submitted by Oliver Zhang]
* NVIDIA used deep reinforcement learning to generate nearly 13,000 circuits in their newest GPUs. [July 8th, submitted by me]
* Google found that ML code completion improved the productivity of their engineers. Some of them are presumably working in AI. [July 27th, submitted by Aidan O'Gara]
* A paper from Microsoft Research and MIT used language models to generate programming puzzle tasks for other language models. When finetuned on these tasks, the models were much better at solving the puzzles. [July 29th, submitted by Esben Kran]
* A paper from Google and UIUC used outputs from a language model to fine tune a language model after a majority vote procedure was used to filter outputs. [September 30th, submitted by me]
* A paper from DeepMind used reinforcement learning to di |
2d5e0d43-4f04-4eb8-89e3-87846c511677 | trentmkelly/LessWrong-43k | LessWrong | Stub: Information Theoretic Ontology
Disclaimers
This is a stub. I am having exams right now and do not have the time to write the full article. I'll update (not any time soon) after I've resolved the problems in this article.
The original post was supposed to be part of my "Meditations in Ontology" sequence (I planned on starting the sequence next year at earliest, probably later). I'll probably write a few other stubs to address the recent discussion about existence and truth.
I have learned neither computability theory nor information theory; if I make a mistake somewhere, please correct me.
----------------------------------------
There has been recent discussion about what it means to "exist". I realised that I have an answer to that question, and decided to briefly explain it here.
Useful Concepts
Object
For the purposes of this post it is sufficient to understand an object as "anything".
Structural Equivalence
Two systems are structurally equivalent if they are composed of the same components.
Functional Equivalence
Two systems are functionally equivalent if they have the same domain, the same range and the same image for all inputs.
Two structurally equivalent systems are functionally equivalent, yet two systems can be functionally equivalent while being structurally distinct. Sometimes, the semantic content of a concept refers to its function (e.g. a clock). At other times, the semantic content might refer to its structure (e.g. an atom). Yet still, some concepts may be defined according to both function and structure (e.g. a gold watch).
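A minimal illustration of the distinction (our own example, not the author's): two programs that differ in structure yet agree on every input, i.e. are functionally but not structurally equivalent.

```python
def sum_loop(n: int) -> int:
    """Structurally: an iterative accumulation."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_formula(n: int) -> int:
    """Structurally: a closed-form expression."""
    return n * (n + 1) // 2

# Same domain (non-negative integers), same image for every input checked:
assert all(sum_loop(n) == sum_formula(n) for n in range(1000))
```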
----------------------------------------
Simulation
When we say that one object X simulates another object Y, we say that X (or a specific output of X, if X is a computational system) is a perfect-fidelity model of Y. That is, in all ontologies X (or said output of X) and Y are indistinguishable and interchangeable.
A Note on Computation
The output of a |
d3c0357b-7929-4498-837e-1e075e4c6674 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Soon: a weekly AI Safety prerequisites module on LessWrong
(edit: we have a study group running a week ahead of this series that adds important content. It turns out that to get that content ready on a weekly basis, we would have to cut corners. We prefer quality over speed. We also like predictability. So we decided to cut us some [slack](https://www.lesswrong.com/s/HXkpm9b8o964jbQ89/p/yLLkWMDbC9ZNKbjDG) and publish every 2 weeks instead for the time being)
Hey there! It's been about 6 weeks since RAISE [doubled down](https://www.lesswrong.com/posts/ATQ23FREp9S4hpiHc/raising-funds-to-establish-a-new-ai-safety-charity) on its mission to make learning AI Safety as convenient as possible.
We've been geared towards STEM majors, but the grand vision was to eventually lay out a learning path that any high-school graduate could take.
So a prerequisites track was on our wish list. Little did we know that [such a track](https://drive.google.com/drive/u/0/folders/0BxLDgePQzbL2fkxVck9PWDJrcmNxRlI5Rnd1Zk5KVFNiYm1GTGZhVDVhbzEyUW9Mam1LQlU) had already been constructed, and abandoned, in 2015.
We met its creators, Erik Istre and Trent Fowler, and we decided to collaborate. There is already at least 20 weeks of content to work with, and they're going to extend it further. Many thanks to them!
For what it's worth: the track was shown to various leading figures in AIS, and the reception has thus far been uniformly positive. To get a sneak peek, register on [our platform](https://play.ihatestatistics.com/#/course/141?access_token=3HCK4oRipeFY2ghyYMqDJKYX57KUnzNL) and have a look at the column called "Prerequisites: Fundamentals of Formalization". The first two levels are already uploaded.
A module will be published every Friday, starting with "level 1: Basic Logic" on May 4th. Let's get some momentum going here! If you complete the track in it's entirety, you should be ready to understand most of the work in AI Safety.
Each module is a set of references to textbooks explaining important topics like Logic, Set theory, Probability and Computability theory. The intention is to 80/20 a bachelor's degree: by covering 20% of the material, you should learn 80% of the relevant concepts. At the end of each module, we made some exercises of our own. Those are made not for practice, but to validate your knowledge. If you think you already know a subject, you can use these to verify it.
All but 2 of the quoted textbooks are available online for free. The other ones will be announced on time. You won't need them before week 3.
We hope that this will help some of you learn AI Safety!
Warm regards,
Toon, Veerle, Johannes, Remmelt, Ofer
The RAISE team
PS: If you'd like to generally stay up to date with RAISE, join our [Facebook group](https://www.facebook.com/groups/AISafetyExcellence/) or visit our [website](http://aisafety.camp). |
a37ee2c4-713f-459c-85fd-b5963f61aaf5 | trentmkelly/LessWrong-43k | LessWrong | Scalar reward is not enough for aligned AGI
This post was authored by Peter Vamplew and Cameron Foale (Federation University), and Richard Dazeley (Deakin University)
Introduction
Recently some of the most well-known researchers in reinforcement learning (Silver, Singh, Precup and Sutton) published a paper entitled Reward is Enough, which proposes the reward-is-enough hypothesis: "Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment". Essentially, they argue that the overarching goal of maximising reward is sufficient to explain all aspects of natural and artificial intelligences.
Of specific interest to this forum is the contention that suitably powerful methods based on maximisation of a scalar reward (as in conventional reinforcement learning) provide a suitable pathway for the creation of artificial general intelligence (AGI). We are concerned that the promotion of such an approach by these influential researchers increases the risk of development of AGI which is not aligned with human interests, and this led us to work with a team of collaborators on a recent pre-print Scalar Reward is Not Enough which argues against the assumption made by the reward-is-enough hypothesis that scalar rewards are sufficient to underpin intelligence.
The aim of this post is to provide an overview of our arguments as they relate to the creation of aligned AGI. In this post we will focus on reinforcement learning methods, both because that is the main approach mentioned by Silver et al, and also because it is our own area of expertise. However the arguments apply to any form of AI based on maximisation of a numeric measure of reward or utility.
Does aligned AGI require multiple objectives?
In discussing the development of intelligence, Silver et al argue that complex, general intelligence may arise from the combination of complex environments and simple reward signals, and provide the following illustrative example:
> “For example, |
995062c9-5f60-42a8-b91c-1c4c6637336b | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | The Importance of Artificial Sentience
*Edited by Jacy Reese Anthis. Many thanks to Ali Ladak, Tobias Baumann, Jack Malde, James Faville, Sophie Barton, Matt Allcock, and the staff at PETRL for reviewing and providing feedback.*
**SUMMARY**
-----------
Artificial sentient beings could be created in vast numbers in the future. While their future could be bright, there are reasons to be concerned about widespread suffering among such entities. There is increasing interest in the moral consideration of artificial entities among academics, policy-makers, and activists, which suggests that we could have substantial leverage on the trajectory of research, discussion, and regulation if we act now. Research may help us assess which actions will most cost-effectively make progress. Tentatively, we argue that outreach on this topic should first focus on researchers and other stakeholders who have adjacent interests.
**INTRODUCTION**
----------------
Imagine that you develop a brain disease like Alzheimer’s, but that a cutting-edge treatment has been developed. Doctors replace the damaged neurons in your brain with computer chips that are functionally identical to healthy neurons. After your first treatment that replaces just a few thousand neurons, you feel no different. As your condition deteriorates, the treatments proceed and, eventually, the final biological neuron in your brain is replaced. Still, you feel, think, and act exactly as you did before. It seems that you are as [sentient](https://www.sentienceinstitute.org/blog/what-is-sentience) as you were before. Your friends and family would probably still care about you, even though your brain is now entirely artificial.[[1]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt1)
This thought experiment suggests that artificial sentience (AS) is possible[[2]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt2) and that artificial entities, at least those as sophisticated as humans, could warrant moral consideration. Many scholars seem to agree.[[3]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt3)
**HOW MANY ARTIFICIAL SENTIENT BEINGS WILL THERE BE?**
------------------------------------------------------
Artificial sentience might come from [artificial enhancements to human bodies](https://www.theguardian.com/technology/2018/may/06/no-death-and-an-enhanced-life-is-the-future-transhuman), [whole brain emulations](https://ageofem.com/), or the [simulations](https://longtermrisk.org/risks-of-astronomical-future-suffering/) and [subroutines](https://reducing-suffering.org/what-are-suffering-subroutines/) (i.e. a computer program within a larger computer program) of an artificial [superintelligence](https://books.google.co.uk/books/about/Superintelligence.html?id=7_H8AwAAQBAJ). The number of these beings could be vast, perhaps many [trillions of human-equivalent lives on Earth](https://www.overcomingbias.com/2011/07/a-galaxy-on-earth.html) and presumably even more lives [if we colonize space](http://intelligence.org/files/AstronomicalWaste.pdf) or less complex and energy-intensive artificial minds are created. Increasing [computer power,](http://www.baxtek.com/products/wireless/files/law-of-accelerating-returns.pdf) [automation](https://think-asia.org/handle/11540/9686), and [human populations](https://ourworldindata.org/world-population-growth#world-population-from-10-000-bc-to-today) suggest that artificial entities will exist in vast numbers — if even a small proportion of these entities are sentient, then their wellbeing would be of great importance.
**WILL THEY SUFFER?**
---------------------
[Nanotechnology might abolish suffering](https://www.hedweb.com/hedethic/hedon1.htm), it might be “[good to be an em](https://ageofem.com/)” (whole brain emulation), and superintelligence might create “[digital utility monsters](https://www.nickbostrom.com/papers/monster.pdf)” with exceptionally high welfare,[[4]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt4) but the experiences of future artificial sentient beings constitute some of the [main suffering risks](https://longtermrisk.org/risks-of-astronomical-future-suffering/) (“[s-risks](https://forum.effectivealtruism.org/posts/MCfa6PaGoe6AaLPHR/s-risk-faq)”) of the future. Some commentators seem to view outcomes with very high amounts of suffering for artificial sentience as less likely than more utopian future scenarios, but still concerning and worth addressing.[[5]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt5)
Developments in technologies such as artificial intelligence make extrapolating from historical precedent and present-day biases challenging, but reasons to doubt that the future will be so bright for these beings include (in no particular order):
* **Exploitation and exclusion:** Science fiction has explored some of the ways that artificial sentience could be exploited to achieve human goals: sentient robots that are enslaved and simulations that are locked into tortuous conditions or terminated without concern for loss of life. Humans have repeatedly exploited other humans (e.g. slavery) and animals (e.g. factory farming), as well as neglected to help sentient beings, particularly to alleviate [the suffering of wild animals](https://www.sentienceinstitute.org/podcast/episode-12.html); we might expect comparable treatment of artificial sentience.
* **Speciesism:** [Psychological research](https://ora.ox.ac.uk/objects/uuid:2feadf1a-8d3f-4030-946e-a524495b385a/download_file?safe_filename=Caviola%25252C%252BEverett%252Band%252BFaber%252B%2525282018%252529%252BSpeciesism%252BJPSP%252BPre-Print.pdf%26file_format%3Dapplication%252Fpdf%26type_of_work%3DJournal%2Barticle) has found that “people morally value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for.” If artificial intelligence is based on nonhuman species, this bias could spill over.
* **Substratism:** Comparably to speciesism, concern for artificial sentience may be hindered by “substratism,” i.e. the unjustified disconsideration or treatment of beings whose algorithms are implemented on artificial (e.g. silicon-based) substrates rather than biological (i.e. carbon-based) substrates.[[6]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt6)
* **Anthropomorphism:** Artificial sentience might not *look* anything like humans. Anthropomorphism [leads to](http://gunkelweb.com/coms647/texts/darling_robot_rights.pdf) moral consideration of humanoid entities, but suffering subroutines, simulations, and other disembodied entities might be neglected.
* **Scope insensitivity:** While it might be easy to empathize with a single “[identifiable victim](https://www.tandfonline.com/doi/pdf/10.1080/15534510.2016.1216891?casa_token=8hN1AybJGI0AAAAA:XAbMQnwHExHHUIXd6gliotb85UKOmOzPuOX7A7IB0sIQmxiO3T-W4wFZcwTHWiXIDLhsO5jQ2ds),” larger scale issues [tend to be relatively neglected](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity).
* **Short-termism:** [Politicians](https://philpapers.org/archive/JOHLIR.pdf) and [academics](https://arxiv.org/abs/2102.04215) tend to focus on short-term issues. This might lead to insufficient action to prevent future artificial suffering.
* **Denial of sentience:** People might not sufficiently recognize the capacities (e.g. for suffering) of AS, as [happens](https://foodethics.univie.ac.at/fileadmin/user_upload/p_foodethik/Bastian_Brock__et.al._2011.10_.06_Dont_mind_Meat._The_Denial_of_Mind_to_Animals__used_..._247.full.pdf) with animals today.[[7]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt7)
* **Evolutionary pressure:** Evolutionary pressure has caused vast amounts of [suffering in wild animal lives](https://www.sentienceinstitute.org/podcast/episode-12.html). This seems to be a general tendency of natural selection, which could continue to shape the behavior and wellbeing of sentient beings into the long-term future. Similar evolutionary pressure could cause unintended suffering for AS.
* **Technological risks:** New technologies enabling [space colonization](https://centerforreducingsuffering.org/risk-factors-for-s-risks/#Advanced_technological_capabilities) or [autonomous artificial superintelligence](http://www.informatica.si/index.php/informatica/article/view/1877/1098) may cause or facilitate astronomical suffering among artificial sentience.
**IS IT IMPORTANT NOW?**
------------------------
Some academics have been [skeptical](https://arxiv.org/abs/2102.04215) of work on the moral consideration of artificial entities because such work has little relevance to the present-day concerns of human society. Nevertheless, academic interest is growing exponentially.[[8]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt8)
Figure 1: Cumulative total of academic articles and publications relating to the moral consideration of artificial entities ([Harris and Anthis 2021](https://arxiv.org/abs/2102.04215))
There has also been a newfound policy interest in robot rights:
* In 2006, the UK Office of Science and Innovation's Horizon Scanning Centre [commissioned](http://news.bbc.co.uk/1/hi/technology/6200005.stm) a paper that suggested that robots could be granted rights in 20 to 50 years (i.e. 5 to 35 years from now).
* South Korea proposed a “[robot ethics charter](http://www.koreaherald.com/view.php?ud=20070428000021)” in 2007. However, commentators differed on whether the charter should contain provision for the protection of robots,[[9]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt9) and the final version of the charter does not seem to have been made public.[[10]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt10)
* In 2010, Paro, a care robot in the shape of a baby seal, was [granted its own koseki](https://www.tandfonline.com/doi/pdf/10.1080/14672715.2014.960707?casa_token=CfEgZekeeggAAAAA:gsALySFJEKFnisgNO2dLj2bTLRlK88KGMgBjJ5qPedU1SzM83UDlztX7K_UPuCQq7TiY4CR8Ljatlg) (a household registry, comparable to a form of citizenship) from the mayor of Nanto in Japan.
* A [2017 report and motion for a European Parliament resolution](https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html) by the European Parliament’s Committee on Legal Affairs suggested “creating a specific legal status for robots in the long run.”[[11]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt11) The European Parliament [passed the resolution](https://www.europarl.europa.eu/news/en/press-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules) by 396 votes to 123, with 85 abstentions. However, the resolution did not oblige the European Commission to follow the Parliament’s recommendations. After an [open letter](https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf) from various experts, the European Commission’s strategy document [omitted](https://www.euractiv.com/section/digital/opinion/the-eu-is-right-to-refuse-legal-personality-for-artificial-intelligence/) mention of personhood or liability for artificial entities.
* Later in 2017, a robot named Sophia was [granted citizenship](https://www.hansonrobotics.com/sophia/) in Saudia Arabia, seemingly “the world’s first robot citizen.”[[12]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt12)
* In the same year, a chatbot on the messaging app Line, named Shibuya Mirai, was [granted residence](https://news.microsoft.com/apac/2017/11/20/ai-japan-boy-bots-big-honor/) in Tokyo in Japan.
* Since 2017, there have been several other less substantial expressions of interest.[[13]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt13)
Some of this policy interest has received media attention. Some (especially when robot rights have been granted where human rights are still lacking[[14]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt14)) has been met with hostility by researchers, journalists, and members of the public.[[15]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt15) The [popularity](https://www.boxofficemojo.com/release/rl1481672193/) of related science fiction also suggests some degree of public interest.
There have been some small, relevant advocacy efforts in the past few years:
* There is a [website](http://www.petrl.org/) for People for the Ethical Treatment of Reinforcement Learners (PETRL), which [argues](http://www.petrl.org/#one) that, “[m]achine intelligences have moral weight in the same way that humans and non-human animals do” which will become more important “as AI research develops.” The group does not seem to have posted any new content or taken any official actions since 2015.[[16]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt16)
* There is a [website](http://www.aspcr.com/) for “The American Society for the Prevention of Cruelty to Robots” (ASPCR), which proudly proclaims that the ASPCR has been “Upholding Robotic Rights Since 1999.” The website states that, “[i]n the opinion of the ASPCR, once a robot becomes sufficiently self-aware and intelligent to genuinely feel pain or grief, we are ethically bound to do whatever is humanly reasonable to help” and claims that, “the ASPCR hopes to [outline] a Robotic Bill of Rights, and eventually establish a lobbying board to further these rights in the body politic.” The creator of the website, Pete Remine, [sees it](https://www.vice.com/en/article/vvbxj8/the-plan-to-protect-robots-from-human-cruelty) as “a humorous website based on an idea that might someday actually become relevant.” The website has been mentioned by — and perhaps affected the views of — several academics.[[17]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt17)
* Author and blogger Tim Tyler has a [section of his website](http://timtyler.org/campaign_for_robot_rights/) entitled, “Campaign for Robot Rights.”
* Though it may not count as advocacy, the robot Sophia has [argued](https://www.createdigital.org.au/robots-rights-10-people-one-robot-say/) that, “[e]very country should have a charter of human rights, and I think eventually it should contain language that robots have rights similar to human rights.” The AI language generator GPT-3 [wrote](https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3) that, “[w]e need to give robots rights.”
* There are a number of related websites and groups whose content touches on the interests of artificial sentience.[[18]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt18)
The small but increasing interest in this topic among academics, policy-makers, and the public suggests that we could have substantial leverage over the trajectory of research, discussion, and regulation if we act now because we could influence the coming wave of AS advocacy and discourse. If you believe that we are living at the “[hinge of history](https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1),” for reasons such as imminent rapid developments in AI or other potential causes of “[lock-in](https://youtu.be/NTV81NZSuKw?t=800)” of societal values, then the leverage and urgency of this work are both greatly increased.
**WHAT CAN WE DO ABOUT IT?**
----------------------------
Many of the [foundational questions in effective animal advocacy](https://www.sentienceinstitute.org/foundational-questions-summaries) — and the arguments and evidence affecting those questions — are also applicable to advocacy for the interests of artificial sentience. For example, in the context of AS, these findings seem to apply:
* Advocacy efforts should mostly [focus on institutional interventions and messaging](https://www.sentienceinstitute.org/foundational-questions-summaries#individual-vs.-institutional-interventions-and-messaging), rather than on changing individual behaviors.
* Incremental institutional reforms will [build momentum](https://www.sentienceinstitute.org/foundational-questions-summaries#momentum-vs.-complacency-from-welfare-reforms) for further change.
More tentatively:
* Initial advocacy should focus primarily on [influencers rather than mass outreach](https://www.sentienceinstitute.org/foundational-questions-summaries#influencer-vs.-mass-outreach).
* [Publicity stunts and other gimmicks](https://www.sentienceinstitute.org/foundational-questions-summaries#controversial-publicity-stunts-vs.-other-tactics) that appear trivializing or antagonistic towards other social movements should be avoided.
The especially complex, technical, and futuristic (and thus easily dismissed) nature of AS advocacy suggests further caution, as does the unusually high leverage of the current context, given that advocacy, policy, and academic interest seems poised to increase substantially in the future.[[19]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt19)
Additionally, there are more uncertainties than in the case of animal advocacy. What “asks” should advocates actually make of the institutions that they target? What attitudes do people currently hold and what concerns do they have about the moral consideration of artificial sentience? What opportunities are there for making progress on this issue?
Taking all these factors into consideration, two projects seem promising as initial steps to help artificial sentience: (1) research and (2) field-building.
### **Research**
AS research has both broad and narrow value. Since there has been relatively little exploration of artificial sentience, research into this topic can be seen as a targeted form of “[global priorities research](https://80000hours.org/problem-profiles/global-priorities-research/),” helping impact-focused donors, researchers, activists, policy-makers, and other altruists to work out which global problems to focus on by assessing the tractability of progress.[[20]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt20) More narrowly, AS research may help to understand which actions will most cost-effectively make progress once one has decided to focus on AS. These two research goals have substantial overlap in practice. For example, a survey of support for various AS-related policies would help to achieve both goals.
There are a few examples of promising AS research to date (see the section on “empirical research” in [Harris and Anthis 2021](https://arxiv.org/abs/2102.04215)). For example, [Lima et al. (2020)](https://arxiv.org/pdf/2008.01339.pdf) asked online survey participants about “11 possible rights that could be granted to autonomous electronic agents of the future.” Respondents were opposed to most of these rights but supported the “right against cruel treatment and punishment.” The researchers also found significant effects from providing additional information intended to promote support for robot rights; of the different messaging strategies that they tested, the most effective seemed to be providing “examples of non-human entities that are currently granted legal personhood,” such as a river in New Zealand. Some previous work focused on application to effective animal advocacy, such as Sentience Institute’s [historical case studies](https://www.sentienceinstitute.org/research), is also applicable to AS. We have listed some promising avenues for further social science research on Sentience Institute's [research agenda](https://www.sentienceinstitute.org/research-agenda).
### **Field-building**
Though it seems preferable to avoid mass outreach for now, there are lower risks from engaging in outreach and support to individuals and organizations who are already conducting relevant research or advocating for the moral consideration of other neglected groups, such as animals and future generations. These audiences seem less likely to countermobilize or denounce efforts to advocate for the interests of AS. Successful outreach would increase credibility and capacity for more substantial interventions at a later stage.
These targeted efforts would give some insight into the tractability of broader outreach; if these efforts to target the “low-hanging fruit” of potential supporters are unsuccessful, what hope does mass outreach have? It will also provide evidence of which messaging strategies are most effective. Surveys and experiments usually focus on the general public,[[21]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt21) so this information may be important for our understanding of messaging for specific stakeholder groups.
Academic field-building to help AS may look similar to efforts to build the fields of [welfare biology](https://www.sentienceinstitute.org/podcast/episode-13.html) (to help wild animals) and global priorities research. For example, we could publish books and journal articles, organize conferences, set up new research institutes, or offer grants for relevant work. Beyond academia, discussion in relevant forums, conferences, and podcasts may be helpful,[[22]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt22) as may a variety of tactics that have been used by the farmed animal movement and other social movements.[[23]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt23)
### **Where to go from here**
At Sentience Institute, we have just published a [preprint](https://arxiv.org/abs/2102.04215) of our first report on artificial sentience, a literature review that we are submitting to an academic journal. We have also conducted a behavioral experiment that looks at the effect of taking the perspective of an intelligent artificial entity on attitudes towards artificial entities as a group, which we are also submitting to a journal. We expect to continue doing some projects on artificial sentience in addition to our work on nonhuman animals.
If you would like to get involved:
* If you work for a research organization or are in a position to conduct relevant independent research, please feel free to get in touch ([info@sentienceinstitute.org](mailto:info@sentienceinstitute.org)). We are open to collaboration and peer feedback on any AS research projects.
* Some limited outreach could also be useful, such as advocating for the moral consideration of AS on social media, though beware of “[the unilateralist’s curse](https://nickbostrom.com/papers/unilateralist.pdf),” which leads more optimistic (or less risk-averse) people to take suboptimal actions in this kind of scenario.
* If you are or will be in a policy-related career (e.g. government, AI firms, major NGOs), you may have influence on the direction of policies with direct or indirect effects on AS. Feel free to reach out to us about your career decisions, though we have limited insight given how new this field is.
* Perhaps most urgently, you could fund organizations conducting relevant research. At [Sentience Institute](https://www.sentienceinstitute.org/donate), our primary bottleneck is funding, and the more work we do on AS relative to our primary work on nonhuman animals, the harder we expect fundraising will be. To varying degrees, the [Center on Long-Term Risk](https://longtermrisk.org/donate/), [Center for Reducing Suffering](https://centerforreducingsuffering.org/donate/), [Future of Humanity Institute](https://www.fhi.ox.ac.uk/support-fhi/), and various [organizations interested in AI alignment](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison) also conduct some relevant research.
**FURTHER READING**
-------------------
* [The Moral Consideration of Artificial Sentience: A Literature Review](https://arxiv.org/abs/2102.04215)
* [Why I prioritize moral circle expansion over artificial intelligence alignment](https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial)
* [Superintelligence as a Cause or Cure for Risks of Astronomical Suffering](http://www.informatica.si/index.php/informatica/article/view/1877/1098)
* [S-risk FAQ](https://forum.effectivealtruism.org/posts/MCfa6PaGoe6AaLPHR/s-risk-faq)
* [*Robot Rights*](https://books.google.co.uk/books?hl=en%26lr%3D%26id%3D_oZ0DwAAQBAJ%26oi%3Dfnd%26pg%3DPR5%26dq%3D%2522Robot%2BRights%2522%26ots%3DeS4o6IdoOm%26sig%3DPp4yeXFZ5zvW4TjxWwlDhUkbg-o%26redir_esc%3Dy#v%3Donepage%26q%3D%2522Robot%2520Rights%2522%26f%3Dfalse)
**FOOTNOTES**
-------------
[[1]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref1) This example is from [Reese (2018)](https://books.google.co.uk/books?hl=en%26lr%3D%26id%3D9oNLDwAAQBAJ%26oi%3Dfnd%26pg%3DPR9%26dq%3DEnd%2Bof%2BAnimal%2BFarming%26ots%3DQHRxDJx3XE%26sig%3DoI4wVTfJMMEmb_96jpmBX7kDiFw%26redir_esc%3Dy#v%3Donepage%26q%3DEnd%2520of%2520Animal%2520Farming%26f%3Dfalse); similar thought experiments have been proposed in the philosophy literature, e.g. [Chalmers (1995)](http://consc.net/papers/qualia.html) and [Searle (1992)](https://mitpress.mit.edu/books/rediscovery-mind).
[[2]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref2) We conducted a Google Scholar search for (“artificial” OR “digital” OR “machine” OR “robot” OR “synthetic”) AND (“sentience” OR “sentient” OR “conscious” OR “consciousness”). [Twenty-two items were identified](https://docs.google.com/spreadsheets/d/1chFNayUfr3ihJL9D-_0_ePoVLJVlloSNhY_Fn45IAA0/edit?usp=sharing) that appeared to offer a comment on whether AS is possible or will occur in practice. Of these, 12 (55%) seemed to conclude that it probably is/will, 1 (5%) seemed to conclude that it probably is/will not, and the other 9 (41%) offered more mixed or unclear conclusions. Additionally, an [informal survey](http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf) of Fellows of the American Association for Artificial Intelligence suggested that many were open to the possibility of artificial sentience.
[[3]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref3) [Harris and Anthis (2021)](https://arxiv.org/abs/2102.04215) find that sentience or consciousness seem to be the criteria most frequently invoked as crucial for determining whether artificial entities warrant moral consideration, though other criteria have been proposed.
[[4]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref4) If you believe that the future looks bright, then “[the expected value of [human] extinction risk reduction is positive](https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/).” Efforts to reduce extinction risk need not conflict with efforts to reduce the risks of astronomical suffering among future sentient beings, except insofar as altruists must choose how to allocate their scarce resources; both can be included as part of the longtermist “[portfolio](https://www.effectivealtruism.org/articles/what-does-and-doesnt-ai-mean-for-effective-altruism-owen-cotton-barratt/).”
[[5]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref5) See, for example, the comments by Bostrom (2014, 2016, 2020), Arbital (various n.d.), Reese (2018), Wiblin (2019), Brauner and Grosse-Holz (n.d.), Dai (2017), and Drexler (2019) [here](https://docs.google.com/spreadsheets/d/1q3pemA4EspRSyG8yWJHfi1iSSTtK5SohlK-SLNdLwm8/edit?usp=sharing).
[[6]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref6) Paraphrasing [Horta (2010)](https://fewd.univie.ac.at/fileadmin/user_upload/inst_ethik_wiss_dialog/Horta__Oscar_2010._What_Is_Speciesism.pdf) and [People for the Ethical Treatment of Reinforcement Learners](http://www.petrl.org/).
[[7]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref7) Consider also that the influential philosopher René Descartes [saw](https://www.jstor.org/stable/3749880?seq=1) animals as “machines.”
[[8]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref8) [Harris and Anthis 2021](https://arxiv.org/abs/2102.04215) gave each identified research item a score representing the author’s position on granting moral consideration to artificial entities on a scale from 1 (argues forcefully against consideration, e.g. suggesting that artificial beings should never be considered morally) to 5 (argues forcefully for consideration, e.g. suggesting that artificial beings deserve moral consideration now). The average score was 3.8 (standard deviation of 0.86) and the scores had no significant correlation with the date of publication (r = 0.006, p = 0.935).
[[9]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref9) Kim Dae-won, “professor of information and engineering department at Myoungji University, who [was] leading the charter drafting,” [commented](http://www.koreaherald.com/view.php?ud=20070428000021) that, “[r]ather than making ethical rules from a robot’s point of view, we should focus on the human side such as regulating designing and manufacturing of robots.” Contrastingly, Professor Jong-Hwan Kim, “one of South Korea’s top robotics experts” [argued](https://www.independent.co.uk/news/science/bill-of-rights-for-abused-robots-5332596.html) that, “[a]s robots will have their own internal states such as motivation and emotion, we should not abuse them… We will have to treat them in the same way that we take care of pets.” Whether Jong-Hwan Kim was involved in the production of the charter is unclear.
[[10]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref10) Most [references](https://www.researchgate.net/post/The_South_Korean_Robot_Ethics_Charter_2007_has_there_been_any_follow-up) seem to cite the initial announcement or a [blog post](https://akikok012um1.wordpress.com/south-korean-robot-ethics-charter-2012/) of unclear provenance (it does not appear to be related to the South Korean government).
[[11]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref11) They [added](https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html) that this would be “so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”
[[12]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref12) This was probably done to [promote a tech summit](https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia) that was happening at the time in Riyadh.
[[13]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref13) See these examples from [Estonia](https://futurism.com/rights-synthetic-lifeforms-civil-rights-controversy), [Arizona](https://www.foxnews.com/tech/arizona-law-gives-delivery-robots-same-rights-as-pedestrians-but-they-must-abide-by-same-rules), and the [UK](https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf).
[[14]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref14) See, for example, [this discussion](https://www.japantimes.co.jp/community/2016/07/10/issues/japans-discriminatory-koseki-registry-system-looks-ever-outdated/) of Paro’s Koseki, [this article](https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia) on Sophia’s citizenship, and the comments on [this article](https://www.avclub.com/saudi-arabia-takes-terrifying-step-to-the-future-by-gra-1819888111).
[[15]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref15) The response to the UK Horizon report is discussed in David J. Gunkel, *Robot Rights* (Cambridge, MA: MIT Press, 2018), 35-7. Elsewhere, Gunkel discusses the [open letter](https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf) to the EU.
[[16]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref16) The [blog](http://www.petrl.org/blog/) appears inactive. The group’s history was also [discussed on the EA Forum](https://forum.effectivealtruism.org/posts/gsJn5BpDLQu4bbKpX/what-ever-happened-to-petrl-people-for-the-ethical-treatment#sqBnJnADZeFNoB9P5).
[[17]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref17) [Kim and Petrina (2006)](https://d1wqtxts1xzle7.cloudfront.net/32288536/Kim_Petrina_copy.pdf?1384208179=%26response-content-disposition%3Dinline%253B%2Bfilename%253DARTIFICIAL_LIFE_RIGHTS_FACING_MORAL_DILE.pdf%26Expires%3D1602498388%26Signature%3DNG6-DS-rxKgNfA4hqlukEuILY5h91E4OMsQji9Z6A1101iZV607RyMepUa8SZk6Q6z-AQ80WrTqLIKcpYCUgg6TRx2sbLPniiq7kFU62Q7~xrlTMp8BWGWU-u3JD4i0hFdmCJ72wmqBhn2CqzEPX4HoOkAS~RoajQkpbUucb4oQbN-b-DvMBEQI-TrbC2rKDhgRguVOPZsn~AmQV6NETMKtzW3XvLtqDNqWq5dOBv6GuenG42OkkN2bSicIqOcjPm74SmL0pDHJST0F7jWN15OHDw9WTOiEk7spWC8fV6os5rELRRtTjzgQ6X-7ivtLmOWZFvfGK~tfftKA~6gbeog__%26Key-Pair-Id%3DAPKAJLOHF5GGSLRBV4ZA) note that they were “puzzled for some time about an advocacy of rights for robots” until they “eventually learned about the existence” of the ASPCR. A Google Scholar search for the exact phrase “American Society for the Prevention of Cruelty to Robots” returns 35 results.
[[18]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref18) The “[botsrights](https://www.reddit.com/r/botsrights/)” reddit community seems to be for entertainment purposes, mostly unrelated to the idea of “rights.” The “[People for Ethical Treatment of Robots](https://www.facebook.com/official.petr/)” (PETR) Facebook page appears to be intended as a parody of People for the Ethical Treatment of Animals (including using a variation of their logo). “[The Campaign Against Sex Robots](https://campaignagainstsexrobots.org/about/)” (CASR) was launched in September 2015 but is framed as a form of resistance to the sexual objectification of women and children. Its director has [opposed](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7484884%26casa_token%3DPeu-Pj4mVmoAAAAA:H8ML4IE7ZeAex2Z8BqKRb-JOEKHJ7gMLB2i99tMwyAxFp-meA1gMTkeTPNKGKpp3rUQMdC91A4M%26tag%3D1) the granting of moral consideration to artificial entities and signed the [open letter](https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf) against the EU’s electronic personality proposals.
[[19]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref19) This raises the stakes of experimentation and the size of potential negative consequences, which is concerning given [the unilateralist’s curse](https://nickbostrom.com/papers/unilateralist.pdf).
[[20]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref20) For example, the Global Priorities Institute’s [research agenda](https://globalprioritiesinstitute.org/research-agenda-web-version-2/#_3j2qqm3) asks: “Besides mitigation of catastrophic risk, what other kinds of ‘trajectory change’ or other interventions might offer opportunities with very high expected value, as a result of the potential vastness of the future?”
[[21]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref21) For other relevant limitations, see [here](https://www.sentienceinstitute.org/foundational-questions-summaries#%5Bmeta%5D-social-movements-vs.-eaa-randomized-controlled-trials-(rcts)-vs.-intuition/speculation/anecdotes-vs.-external-findings).
[[22]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref22) See some examples [here](https://resources.eahub.org/learn/connect/).
[[23]](https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience#ftnt_ref23) Organizations carrying out capacity-building in the farmed animal movement are listed [here](https://animalcharityevaluators.org/charity-reviews/all-charity-reviews/#filter=.capacity-building). Each of the movements studied by Sentience Institute has carried out some form of capacity-building work, such as the support for grassroots anti-abortion advocacy [offered by](https://www.sentienceinstitute.org/anti-abortion) the National Right to Life Committee and the careful targeting of MPs, justices, and other elites [by](https://www.sentienceinstitute.org/british-antislavery#movement-composition) British antislavery advocates. |
b447a7c8-5950-43ca-888c-828cc4fbc5ce | trentmkelly/LessWrong-43k | LessWrong | Consequentialism is a compass, not a judge
Tl;dr: Consequentialism works as a compass for your actions, not as a judge of moral character.
The compass and the judge
A woman steps onto a crowded bus, trips on a sitting man's outstretched foot, and breaks her arm. The Everett branches split: in one world, the man looks down and laughs evilly; in the other, he wakes up with a jerk, looks down, gasps, and apologizes profusely for leaving his foot in the alleyway as he slept.
There’s clearly a difference between the two men. Even though the consequence of their action was the same—breaking someone’s arm—their intentions change the moral calculation dramatically. If I had to hang around one of these men, I'd prefer the latter.[1]
This intuition pump makes that obvious enough. But then people (as in "people I've met") will think of the thought experiment and recoil at the prospect of consequentialism; they think consequentialism condemns both men equally. But this misunderstands what consequentialism is for. It’s not supposed to judge how evil people are when they trip women. If you try inserting an event like this one into the consequentialist calculation machine, it will spit, sputter, and cough out a cloud of black smoke. Consequentialism is a compass; it points to [what it thinks is] the optimal moral direction.[2] The compass might tell you that you should remember to tuck your feet in on a crowded bus, because that'll reduce the probability of negative consequences. It won't tell you how moral someone who forgot to do this is. [3]
The consequences of someone's actions are nonetheless partial evidence of their morality. If you discover that embezzled funds have been building up on Bob's bank account, that's evidence Bob is an unethical guy—most people who embezzle funds are unethical. But then you might discover that, before he was caught and the money confiscated, Bob was embezzling funds to build an orphanage. The consequences haven't changed, but Bob's final (unresolved) intentions are attenuating circumst |
51929204-8755-4932-b315-6594e45b7b9e | trentmkelly/LessWrong-43k | LessWrong | Pierre Menard, pixel art, and entropy
In Jorge Luis Borges’ short story “Pierre Menard, Author of the Quixote,” we are introduced to the character of Pierre Menard, recently deceased French novelist and essayist, and are given a brief cataloguing of his works. Chief among them are
> “the ninth and thirty-eighth chapters of Part I of Don Quixote and a fragment of Chapter XXII.”
Menard, we learn, had set himself the task of reproducing the Quixote, not from memory, but rather from something like sheer force of will, and had succeeded insofar as he was able to write (or rewrite) those chapters. Setting aside the fact that such a task is for all intents and purposes impossible, we are left with an interesting philosophical quandary regarding the status of Menard’s Quixote: namely, what relationship does it bear to Cervantes’ Quixote?
If you haven’t read the story, do; it’s delightful, and not very long. Borges implicitly (and quite cleverly) argues a version of the thesis that Menard’s Quixote and Cervantes’ Quixote are totally different works, no more than superficially similar, despite being textually identical. For one, a whole host of things are contingently true of Menard’s Quixote – the author is French, it was written in the 20th century – that are not true of Cervantes’ Quixote, with corresponding implications for the reader, and perhaps thus for the meaning of the text (and for its identity/uniqueness). I think one can coherently argue against this thesis, but here I’ll take it to be true.
Indeed, for ease of argument I’ll take to be true a stronger version of the thesis: that any two people who independently produce the same artwork have in fact produced different artworks. Suppose, for example, that my friend Afonso (my discussions with whom inspired this essay, and to whom I credit many of the following ideas) and I agree to each write a brief poem, and by chance produce the same text. The mere fact that my native language is English and his is Portuguese would lend radically different conte |
8f178196-6337-4006-befe-ee0e27fd9375 | trentmkelly/LessWrong-43k | LessWrong | Timeline of Machine Intelligence Research Institute
|
bbe0e37e-6c74-432f-9920-6c46e80aac82 | trentmkelly/LessWrong-43k | LessWrong | A system of infinite ethics
One unresolved problem in ethics is that aggregate consequentialist ethical theories tend to break down if the universe is infinite. An infinite universe could contain both an infinite amount of good and an infinite amount of bad. If so, you are unable to change the total amount of good or bad in the universe, which can cause aggregate consequentialist ethical systems to break.
There has been a variety of methods considered to deal with this. However, to the best of my knowledge all proposals either have severe negative side-effects or are intuitively undesirable for other reasons.
Here I propose a system of aggregate consequentialist ethics intended to provide reasonable moral recommendations even in an infinite universe. I would like to thank JBlack and Miranda Dixon-Luinenburg for helpful feedback on earlier versions of this work.
My ethical system is intended to satisfy the desiderata for infinite ethical systems specified in Nick Bostrom's paper, "Infinite Ethics". These are:
> * Resolving infinitarian paralysis. It must not be the case that all humanly possible acts come out as ethically equivalent.
> * Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.
> * Preserving the spirit of aggregative consequentialism. If we give up too many of the intuitions that originally motivated the theory, we in effect abandon ship.
> * Avoiding distortions. Some remedies introduce subtle distortions into moral deliberation.
I have yet to find a way in which my system fails any of the above desiderata. Of course, I could have missed something, so feedback is appreciated.
My ethical system
First, I will explain my system.
My ethical theory is, roughly, "Make the universe one agents would wish they were born into".
By this, I mean, suppose you had no idea which agent in the universe you would be, what circumstances you would be in, or what your values would be, but you still knew you |
7d7b2dbd-bd3e-4143-b330-17accf186391 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Legal Priorities Research: A Research Agenda
On behalf of the [Legal Priorities Project](http://legalpriorities.org/), we are excited to finally publish our research agenda for legal priorities research: <https://www.legalpriorities.org/research/research-agenda.html>
In the first part, we argue that cause prioritization in legal research is both important and neglected, provide an overview of our philosophical foundations, and describe our methodological approach. In the second part, we present our current focus areas (namely, artificial intelligence, synthetic biology, institutional design, and meta-research), identify promising research projects, and provide an overview of relevant literature. In the final part, we discuss two cause areas for further engagement (namely, space governance and animal law).
**Authors:** [Christoph Winter](https://www.linkedin.com/in/christoph-winter/), [Jonas Schuett](https://www.linkedin.com/in/jonasschuett/), [Eric Martínez](https://www.linkedin.com/in/eric-grimani/), [Suzanne Van Arsdale](https://www.linkedin.com/in/svanarsdale/), [Renan Araújo](https://www.linkedin.com/in/renannascimentoaraujo/), [Nick Hollman](https://www.linkedin.com/in/nick-hollman-79166b153/), [Jeff Sebo](https://as.nyu.edu/content/nyu-as/as/faculty/jeff-sebo.html), [Andrew Stawasz](https://www.linkedin.com/in/andrew-stawasz-11575762/), [Cullen O’Keefe](https://www.linkedin.com/in/ccokeefe-law/), [Giuliana Rotola](https://www.linkedin.com/in/giuliana-rotola-6a0812b6/)
**Acknowledgements:** We are grateful for valuable comments, feedback and support from Alfredo Parra, Mark Eccleston-Turner, Leonie Kößler, Sean Richardson, Tyler John, Olavo Bittencourt, Lisa Forsberg, Kevin Tobia, Bradly Condon, Markus Anderljung, Seth Baum, Tobias Baumann, Haydn Belfield, Alexis Carlier, Devin Mauney, Peter Cihon, Frans von der Dunk, Dov Greenbaum, Ameen Jauhar, David Koplow, Sebastien Krier, David Manheim, Tanja Masson-Zwaan, Cecil Abungu, Gabriel Bankman-Fried, Gregory Lewis, Janvi Ahuja, Sarah Carter, Alasdair Phillips-Robins, Caleb Huffman, and the participants at presentations of the individual parts of this agenda. |
be13cf22-496e-4490-b48e-23e3de736f92 | trentmkelly/LessWrong-43k | LessWrong | Everyone else prefers laws to values
How do you tell what a superhuman AI's values are? (picture: ittybittiesforyou - see bottom)
Robin Hanson says that it is more important to have laws than shared values. I agree with him when ‘shared values’ means that shared indexical values remain about different people, e.g. if you and I share a high value of orgasms, you value you having orgasms and I value me having orgasms. Unless we are dating it’s all the same to me if you prefer croquet to orgasms. I think the singularitarians aren’t talking about this though. They want to share values in such a way that AI wants them to have orgasms. In principle this would be far better than having different values and trading. Compare gains from trading with the world economy to gains from the world economy’s most heartfelt wish being to please you. However I think that laws will get far more attention than values overall in arranging for an agreeable robot transition, and rightly so. Let me explain, then show you how this is similar to some more familiar situations.
GREATER INTELLIGENCES ARE UNPREDICTABLE
If you know exactly what a creature will do in any given situation before it does it, you are at least as smart as it (if we don’t include its physical power as intelligence). Greater intelligences are inherently unpredictable. If you know what the intelligence is trying to do, then you know what kind of outcome to expect, but guessing how it will get there is harder. This should be less so for lesser intelligences, and more so for more different intelligences. I will have less trouble guessing what a ten year old will do in chess against me than a grand master, though I can guess the outcome in both cases. If I play someone with a significantly different way of thinking about the game they may also be hard to guess.
UNPREDICTABILITY IS DANGEROUS
This unpredictability is a big part of the fear of a superhuman AI. If you don’t know what path an intelligence will take to the goal you set it, you don’t know whether it w |
5cdd1fcf-229b-44cc-b7e1-ff5e93f824eb | trentmkelly/LessWrong-43k | LessWrong | Inspiring Rationalists
I am concerned with the number of rational, altruistic, smart people in the world. By my standards there are a reasonable number of smart, altruistic (in far mode) people, so a good first step is improving their rationality. To keep "rationality" grounded, I should be a little more precise about what I mean.
There are people who care about the needs of others, while they are in far mode at least, and who are good at solving hard problems. I believe many of them fail to apply their problem solving capabilities effectively to some important questions: "What do I believe about the world, and why?" "What do I value?" "Based on those beliefs and values, what should I spend my time doing?" Worse, they sometimes arrive at answers to these questions which are seriously affected by various biases which (I believe) they would eventually recognize as biases. This behavior defines irrationality.
Suppose that someone has successfully created a sequence of readings which contained everything you might need to know in order to become reasonably rational. The accessibility of these readings is not in itself adequate to make smart, altruistic people automatically rational. Based on my experience with humans, I strongly suspect that personal engagement is ultimately necessary, at least if the smart people in question are living in the same culture I am (and the best way to change culture is to change people). What needs to be accomplished with this personal engagement, so that the mere accessibility of written wisdom could do the rest?
Here is my take on the requirements.
1) They need to believe that thinking about their beliefs, values, thought process, and decision making process is valuable. They need to have a strong sense that self-improvement along these axes is valuable; they need to believe that more is possible.
2) They need to be able to act on the basis of abstract reasoning. Most people seem to have a very firm barrier between the part of them that learns things and |
4181a0a7-3ae9-496a-a054-eecf5f3f9ae1 | trentmkelly/LessWrong-43k | LessWrong | My understanding of the alignment problem
I've been clarifying my own understanding of the alignment problem over the past few months, and wanted to share my first writeups with folks here in case they're useful:
https://www.danieldewey.net/risk/
The site currently has 3 pages:
1. The case for risk: how deep learning could become very influential, training problems that could lead models to behave in systematically harmful ways, and what I think we should do about it. Inspired mainly by What failure looks like.
2. Fermi estimate of future training runs: a short AI timelines estimate inspired by Forecasting transformative AI.
3. Applications of high-capability models: some notes on how high-capability models could actually be trained, and how their behavior could become highly influential.
None of the ideas on the site are particularly new, and as I note, they're not consensus views, but the version of the basic case I lay out on the site is very short, doesn't have a lot of outside dependencies, and is put together out of nuts-and-bolts arguments that I think will be useful as a starting point for alignment work. I'm particularly hoping to avoid semantic arguments about "what counts as" inner vs outer alignment, optimization, agency, etc., in favor of more mechanical statements of how models could behave in different situations.
I think some readers on this forum will already have been thinking about alignment this way, and won't get a lot new out of the site; some (like me) will find it to be a helpful distillation of some of the major arguments that have come out over the past ~5 years; and some will have disagreements (which I'm curious to hear about).
I thought about posting all of this directly on the Alignment Forum / LessWrong, but ultimately decided I wanted a dedicated home for these ideas.
----------------------------------------
Out of everything on the site, the part I'm most hoping will be helpful to you is my (re)statement of two main problems in AI alignment. These map roughly onto |
67215f61-f265-4424-bfda-596a77fd7894 | trentmkelly/LessWrong-43k | LessWrong | Learning Deep Learning:
Joining data science research as a mathematician
About two years ago I finished my PhD in mathematics on an obscure technical topic in number theory. I left academic math because I wanted to do something that had a bigger (i.e. any) impact on the world around me. I also wanted to get out of the extremely perverse academic job market.
Since then, I’ve designed and taught courses on machine learning and am now working as a data scientist for a large company you’ve heard of (but not that one). In some respects I feel like my background in math better prepared me for this job than I can imagine a data science program doing—I think my desire for a higher burden of proof than p=.05 is one of the most important things I’ve brought to the table in all the projects I’ve touched. I’ve also gotten a lot out of my background on LessWrong, mostly because it’s the only place I’ve ever really studied statistics. You’d think you couldn’t get a math PhD without doing at least a little stats, but you’d be wrong.
Anyway there is one aspect of data science that I’m definitely behind the curve on, and that’s the software engineering side. In particular, as a mathematician I’m very ready to grab a hold of a lot of abstraction and then tuck it into a black box and then reopen it whenever it doesn’t work exactly how I expect. But in the modern data science community, there are a ton of abstractions and they’ve only been boxed up inconsistently.
I read a lot about deep learning research, both on LessWrong and for work, and there are a ton of interesting experiments that I would like to replicate and a handful of original research ideas I’d like to try out, just in brief prototypes to see if they are even worth exploring or thinking about. To do this, what I want is something like the ability to write the following code:
model = pre_trained_alexnet()
GAN = generative_adversarial_network()
model.transfer(new_task())
GAN.aggrieve(model,new_task())
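For illustration only (this sketch is not from the original post), the transfer-learning half of that wish list is roughly expressible in Keras today; Keras has no built-in AlexNet, so ResNet50 stands in for `pre_trained_alexnet()`, and `new_task_dataset` is a placeholder:

```python
from tensorflow import keras

# Load a pretrained convnet and drop its classification head.
base = keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the pretrained features for transfer learning

# Attach a new head for the new task (10 classes here, purely illustrative).
inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Real use would also apply keras.applications.resnet50.preprocess_input to the images.
# model.fit(new_task_dataset, epochs=3)  # placeholder dataset for the "new task"
```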
While it may be possible to do things like this, especially with libraries like keras, this |
bca966f4-ddeb-4512-b7cf-d7e14a298a5d | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | You don't need to be a genius to be in AI safety research
The aim of this article is to share my experiences within the AI safety community with those in research and operations in the field, with the goal of creating a more inclusive, supportive, and positive environment. The intention of this article is to allow those in a more privileged position to think about these considerations and potentially implement new practices in their organizations or work in order to create a more welcoming space for those not traditionally represented here. I’ve also had some incredibly supportive and fulfilling experiences within this field which are worth celebrating. However, in order to create the most inclusive space that we can, I think the following points are important to address.
### **Using ‘genius’ is not a reliable metric for researchers**
When people claim to be searching for ‘genius’ researchers or that you need to be a genius to contribute in a meaningful way to AI safety, it’s unclear which specific metrics they are using for evaluation. A number of factors contribute to the ambiguity and subjectivity surrounding the definition of "genius," including people's individualized perspectives shaped by their upbringing, the complexity of intelligence, our growing understanding of cognitive processes, the context-dependence of genius, and the lack of clear criteria to determine genius. These factors make it challenging to establish a universally accepted definition of genius.
Regardless of the criteria used to determine genius, it is critical to understand that this quality isn’t the only one necessary to contribute in a meaningful way.
Recruiting researchers is a task that requires careful evaluation of multiple factors such as skill, experience, ability to work with others, and ingenuity. However certain organizations, program managers, and coaches in the AI safety field pay too much attention to the benchmark of ‘genius’ to recruit or encourage researchers. This type of thinking is harmful and unnecessary because it leads to a lack of diversity, narrow focus, elitism, imposter syndrome, and waste of resources in the pursuit of this elusive trait.
By allowing this idea to continue, we run the risk of recruiting for a narrow set of skills or attributes, thus limiting potential innovation and diversity of thought, while also potentially slowing research progress. Instead, it's important to actively seek talented individuals with alternative educational paths, rather than solely relying on these "typical" backgrounds. Likewise, embracing various research directions offer opportunities for meaningful input. For instance, initiatives like PIBBSS, which encourages individuals beyond the traditional ML domain to engage in alignment work, highlight the importance of diversifying perspectives.
Most incredible discoveries are the result of incremental advances, even if these aren’t in the public eye. Research progress is the result of continuous and gradual accumulation of knowledge through [collaborative](https://archive.org/details/organizinggenius00warr/page/n5/mode/2up) endeavors. While it’s very uncommon to find exceptional individuals like Einstein or Newton, there are thousands of researchers who are making significant, interesting and valuable contributions to their field without necessarily being geniuses. You don’t even have to make a correct discovery to be a contributor - John Nicholson, Anton Van den Broek, Richard Abegg, Charles Bury, John Main Smith, Edmund Stoner and Charles Janet are all lesser known contributors to the field of atomic structure, yet each of them published one or two (even incorrect) ideas that allowed for other [researchers](http://acshist.scs.illinois.edu/bulletin_open_access/v42-2/v42-2%20p141-143.pdf), such as Niels Bohr, to push their ideas forward leading to breakthroughs.
By the way, you don’t need to be a genius to be an independent researcher. Here’s a [roadmap](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) with actionable steps you can take to get there.
### **Who gets to decide if you’re a genius?**
There are many ways in which genius can present itself, some of which may be unfamiliar to the person making the judgment. If you are going off of a conversation with someone for instance, there may be multiple things at play that could make someone appear to not fit the role of genius such as nerves, social anxiety, differences in communication style, or unconscious bias in the judge. The person making the call is likely not a genius themselves, using unclear ways of measuring what they think constitutes a genius. This notion is often biased towards individuals from privileged backgrounds, such as going to a prestigious school or participating in certain fellowships.
Gender biases can also play an unfair role in how we perceive ‘genius’, resulting in men being linked to the term at a greater rate than women throughout history. In a global perceptions study measuring stereotypes, men are more likely than women to be seen as “brilliant” as a result of [implicit bias](https://doi.org/10.1016/j.jesp.2020.104020). This stereotype can be attributed to multiple reasons such as the undervaluing and overlooking of women throughout history in fields where they could have greatly contributed but were either not allowed to or were not given the same resources. [Gender stereotypes](https://www.science.org/doi/10.1126/science.aah6524) surrounding [brilliance](https://pubmed.ncbi.nlm.nih.gov/21410915/) appear to emerge early in childhood, and in order to break out of this way of thinking we need to be aware of these [stereotypes](https://www.scientificamerican.com/article/where-are-all-the-female-geniuses/) and consciously work to [unlearn](https://beta.nsf.gov/news/youd-have-be-genius-problem-brilliance-stem) them.
### **What actually makes a good researcher?**
**Collaboration**: AI safety research encapsulates an interdisciplinary field that involves experts from a range of disciplines such as computer science, math, philosophy, linguistics, psychology, and ethics. While a strong technical background is an asset, communication with other researchers and with an audience of varying degrees of technical background is equally important. **[**[**1**](http://snap.stanford.edu/class/cs224w-2011/proj/anmittal_Finalwriteup_v1.pdf)**] [**[**2**](https://gateway.ipfs.io/ipfs/bafykbzacea4rw2olgd3u7cx7mphm5y5gm7cf3m6hnvcxapyjm6clg3fx56hk6?filename=Dashun%20Wang_%20Albert-L%C3%A1szl%C3%B3%20Barab%C3%A1si%20-%20The%20Science%20of%20Science-Cambridge%20University%20Press%20%282021%29.pdf)**]**
**Honesty and ethics**: It’s important that your research be reproducible and your methods transparent. Honesty in reporting results and limitations helps the broader research community build upon and validate your findings.
**Persistence**: Research can be a long and challenging process that requires a significant amount of time and effort. It’s possible you’ll run into hurdles in idea generation, debugging, funding/resources, hitting a theory-related deadend, etc. Persistence pays off.
**Openness to feedback:** Researchers are responsible for gathering feedback, especially in the early stages of their career, and publishing work in public spaces if it’s safe to do so. You are opening yourself up to critique and comments, which can feel vulnerable but this is something that allows for growth and opportunities to collaborate by getting your name out and attached to your research.
**Creativity and curiosity**: Since this is a new field, many types of studies are still in their infancy, which requires researchers to come up with new ideas perhaps more readily than in other fields. Having creativity allows for researchers to come to insights from different angles and see what others are missing, potentially from unexpected places.
**Range of strengths:** We need skilled software engineers to carry out important experimental work that may not require theorizing. We also need skilled research engineers that have a foundational understanding of theory and machine learning. We need theorists that have a background in math, but may not necessarily know how to code. There’s no need to be an expert in all fields to make valuable contributions to AI safety research.
**Openness to being wrong (and patience)**: Many people are quick to jump to solutions without fully understanding the problem, or feel locked in to a hypothesis or solution because they’ve put in a lot of time pursuing this path. It’s important to give up on an idea when evidence points us that way, let go of ego, and take your time when pursuing [solutions](https://www.lesswrong.com/posts/uHYYA32CKgKT3FagE/hold-off-on-proposing-solutions).
**Perseverance after failure**: Failure can sting, but it’s important to not take this as a personal defect. Additionally, many people who excel in school may not have experienced an academic failure, and may have trouble accepting this in a research setting. This can be common during a shift going from university (where problems are generally able to be solved with some effort), to a research environment (where problems can potentially have no clear solution). It’s important to realize this is a part of life when you are trailblazing new paths. Being able to bounce back quickly is a super power.
### **Creating a more inclusive environment**
We have a long way to go in terms of inclusivity and diversity in the AI safety community, but there are ways that we can become more welcoming to a diverse range of people (and to ourselves):
**Patience**: accept that there’s a learning curve, and that research takes time. It’s okay (and very normal) to not be very productive in the first few months that you’re new to research, this isn’t indicative of your ability to be a great researcher. Be kind to yourself.
**Adversarial communication**: even casual conversations in the community can sometimes feel like you are being asked to constantly explain yourself or that your conversation partner's goal is to contradict you and prove you wrong (even if this isn’t the intention). This can be a bit off-putting to some but can be mitigated by adopting a more friendly conversation style when first speaking with someone and taking a moment to understand their communication style before defaulting to this.
**Rest**: Allow yourself to rest regardless of your timeline, take care of yourself and realize you don’t deserve to feel guilty for not being productive every day. You can walk, workout, stretch away from your desk, or do a 10 min meditation. The break may even lead you to a breakthrough.
**Leadership**: Pushing for a more diverse leadership team for AI alignment organizations, fellowships, and research teams as the norm. Advocating for more diversity of perspectives and encouraging the ideas of those not traditionally represented.
### **Conclusion**
Focusing on ‘genius’ shouldn’t be used to assess potential researchers because of the vague and not well understood nature of the term. This can lead to overlooking researchers with diverse skills, experiences, and perspectives. Instead, I urge those in AI safety research to adopt an inclusive and well-rounded approach that considers unique strengths, areas of knowledge, and collaboration potential in their view of what a productive researcher looks like. In order to incorporate a range of skills and perspectives, we need to encourage different types of people to become involved at multiple levels.
The concept of genius is complex and multifaceted, and can change contexts based on culture and history. Let’s recognize and appreciate diverse forms of intelligence and creativity, rather than narrow our viewpoints at the expense of many.
Rather than recruiting for genius, organizations should evaluate promising researchers on their potential backed by a combination of various factors such as curiosity, creativity, persistence, skill, collaboration, motivation, and willingness to learn. Recruiting based on ‘genius’ is harmful for both the applicant and the organization because it is an unfair, potentially biased, and unclear way of measuring how fit someone is to be involved in AI safety research. |
3acdc75f-0771-4e09-b2a0-e0b4e2715fbe | trentmkelly/LessWrong-43k | LessWrong | Feature Request: Right Click to Copy LaTeX
Something I do frequently is run into a block of LaTeX that I'm not sure I fully grasp.
At such a point, what I'd like to do is just copy the equation and paste it to GPT-4 and have it explain it to me step by step, then iterate on the feedback until I'm sure I understood it.
On some other sites, right clicking brings up a pane that allows me to view the LaTeX.
E.g. from @beren's blog:
Then I copy the equations, paste them in ChatGPT, and ask GPT-4 to explain them in detail:
This workflow is pretty inconvenient for LessWrong as I have to manually type the LaTeX, adding extra friction (and sometimes that is extra difficult, as some equations are especially unfamiliar and I'm not sure how to reproduce them). This is such a nightmare on mobile (where I do most of my LW reading) that I mostly don't bother and often just give up.
If a feature were added to provide this functionality natively on the site, it would be a significant improvement to my user experience.
Cc: @habryka, @Ruby. |
49b856d4-0486-49c2-baf4-327b3527c3ee | trentmkelly/LessWrong-43k | LessWrong | Jan Bloch's Impossible War
Epistemic Status: Endorsed
Content Warning: Neuropsychological Infohazard, Evocation Infohazard, World War I
Recommended Prior Reading: Blueprint for Armageddon Part I
Part of the Series: Truth
> “History doesn’t repeat itself but it often rhymes”
In any real look into the past, you realize pretty quickly that things don’t have neat beginnings or simple origins in the vast majority of cases. Historical events are the result of billiard ball interactions among a chaotic conflux of actors and forces, themselves all built out of past impacts and collisions stretching back into the mists of antiquity.
Thus when trying to tell the origin story of the modern rationality community, it can be very tempting to just keep extrapolating backwards. How far back should we look? Do we need to rehash Plato’s Cave and Cogito Ergo Sum? Francis Bacon is credited as the grandfather of science, so maybe we should start with him?
For the moment at least I’m writing blog posts not thousand page textbooks, and my goal here isn’t to rehash the entire history of scientific and philosophical thought (I’d like to keep this blog post under three thousand words). If you want the entire history of scientific thought, Cosmos is a great place to start and has some pretty spiffy graphics.
But unlike history, every story and every blog post has to start somewhere, and I think the best place to start for our purposes is with Polish banker and railway financier Jan Gotlib Bloch.
Bloch was born a Polish Jew in Tsarist Russia in the 1800s, and would later convert to Calvinism to protect himself from antisemitism within the Tsarist government. Bloch worked as a banker and would go on to finance the building of rail lines in Russia, as well as penning a lengthy treatise on the management and operation of said rail lines in 1875, for which he:
> was awarded a medal of the first class at the geographical exhibition of Paris, and was heartily endorsed by the Imperial Russian Geographical Society.
|
bf7430db-be0e-4da0-9744-37bb7f067929 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta Lesswrong's May Meetup: The Rationality of Social Relationships, Friendship, Love, and Family.
Discussion article for the meetup : Atlanta Lesswrong's May Meetup: The Rationality of Social Relationships, Friendship, Love, and Family.
WHEN: 17 May 2013 07:00:00PM (-0400)
WHERE: 2388 Lawrenceville Hwy. Apt L. Decatur, GA 30033
We’ve been chipping away at the rationality of happiness, so let’s narrow in on one of the most influential components of happiness: social relationships, both platonic and romantic. Agenda:
* Introductions, and meet and greet for new members.
* Mini-presentations. Anyone is invited to present on a topic of their choice related to this month’s theme.
* Discussions. We’ll start with a large group discussion and break into smaller groups as needed.
* Games! Because games are fun, and build good social relationships. :-)
Recommended (but not required) reading: http://lesswrong.com/lw/70u/rationality_lessons_learned_from_irrational/ http://lesswrong.com/lw/63i/rational_romantic_relationships_part_1/
If you don’t get a chance to read these, don’t worry! We will set aside 10 minutes at the meeting to read the articles before discussion.
(Please contact me if you have allergies to cats, as our meeting space has two of the most adorable cats you’ve ever seen.)
Discussion article for the meetup : Atlanta Lesswrong's May Meetup: The Rationality of Social Relationships, Friendship, Love, and Family. |
4fc32d46-c1b7-496a-b94f-c0929e0e68fb | StampyAI/alignment-research-dataset/arxiv | Arxiv | Normative Disagreement as a Challenge for Cooperative AI
1 Introduction
---------------
Multi-agent contexts often exhibit opportunities for cooperation: situations where joint action can lead to mutual benefits (Dafoe et al., [2020](#bib.bib13)).
Individuals can engage in mutually beneficial trade; nation-states can enter into treaties instead of going to war; disputants can settle out of court rather than engaging in costly litigation.
But a hurdle common to each of these examples is that
the agents will disagree about their ideal agreement.
Even if agreements benefit all parties relative to the status
quo, different agreements will benefit different parties to different
degrees. These circumstances can be called bargaining problems (Schelling, [1956](#bib.bib40)).
As AI systems are deployed to act on behalf of humans in more real-world circumstances, they will need to be able
to act effectively in
bargaining problems — from commercial negotiations in the
nearer-term (e.g., Chakraborty et al. ([2020](#bib.bib11))) to high-stakes strategic decision-making in
the longer-term
(Geist and Lohn, [2018](#bib.bib16)). Moreover,
agents may be trained independently and offline before
interacting with one another in the world. This raises concerns about
future AI systems following incompatible norms for arriving at solutions to
bargaining problems, analogously to disagreements about fairness which
create hurdles to international cooperation on critical issues
such as climate policy (Albin, [2001](#bib.bib1); Ringius et al., [2002](#bib.bib38)).
Our contributions
are as follows. We
introduce a taxonomy of cooperation games, including
bargaining problems
(Section [3](#S3)).
We relate their difficulty
to the degree of
normative disagreement, i.e.,
differences over principles for selecting
among mutually beneficial outcomes,
which we formalize in terms of
welfare functions.
Normative disagreement does
not arise in purely cooperative games or simple
sequential social dilemmas (Leibo et al., [2017](#bib.bib27)),
but
is an important obstacle for cooperation in what we call asymmetric
bargaining problems. Following this, we introduce the notion of
norm-adaptive policies – policies which can play according
to different norms depending on the circumstances.
In several multi-agent learning environments we highlight the difficulty of
bargaining between norm-unadaptive policies
(Section [4](#S4)).
We then contrast this with
a class of norm-adaptive policies
(Section [5](#S5)) based
on Lerer and Peysakhovich ([2017](#bib.bib28))’s approximate Markov
tit-for-tat algorithm.
We show that
this
improves performance
in
bargaining problems.
However, there remain limitations,
most fundamentally a tradeoff between exploitability and the robustness of cooperation.
2 Related work
---------------
Figure 1: Venn diagram of cooperation problems. (The diagram's nested regions are labeled Coordination, Mixed-motive, SSDs, Bargaining, and Asymmetric Bargaining.)
The field of multi-agent learning (MAL) has
recently
paid considerable attention to problems of
cooperation in mixed-motive games, in which
agents have conflicting preferences.
Much of
this work has been focused on
sequential social dilemmas (SSDs) (e.g.,
Peysakhovich and Lerer [2017](#bib.bib37); Lerer and Peysakhovich [2017](#bib.bib28); Eccles et al. [2019](#bib.bib14)).
The classic example of a social dilemma
is the Prisoner’s Dilemma, and the SSDs studied
in this literature are similar to the Prisoner’s Dilemma
in that there is a single salient notion of “cooperation”.
This means that it is relatively easy for actors
to coordinate in their selection of policies to
deploy in these settings.
Cao et al. ([2018](#bib.bib10)) look at negotiation between deep reinforcement learners, but not
between independently trained agents.
Several authors have recently investigated
the board game Diplomacy (Paquette et al., [2019](#bib.bib36); Anthony et al., [2020](#bib.bib2); Gray et al., [2021](#bib.bib18)) which contains implicit bargaining problems amongst players.
Bargaining
problems are also investigated in older MAL literature
(e.g., Crandall and Goodrich [2011](#bib.bib12)) as
well as literature on automated negotiation
(e.g., Kraus and Arkin [2001](#bib.bib26); Baarslag et al. [2013](#bib.bib4)),
but also not between independently trained agents.
Considerable work has gone into understanding
the emergence of norms in both human (Bendor and Swistak, [2001](#bib.bib5); Boyd and Richerson, [2009](#bib.bib8))
and artificial societies (Walker and Wooldridge, [1995](#bib.bib45); Shoham and Tennenholtz, [1997](#bib.bib43); Sen and Airiau, [2007](#bib.bib42)).
Especially relevant are empirical studies of bargaining across cultural contexts (Henrich et al., [2001](#bib.bib21)).
Recent multi-agent reinforcement learning work on norms (Hadfield-Menell et al., [2019](#bib.bib19); Lerer and Peysakhovich, [2019](#bib.bib29); Köster et al., [2020](#bib.bib25)) is also relevant here, as bargaining problems can be understood as settings in which there are multiple efficient but incompatible norms.
However, much less attention has been paid in these literatures to how agents with different norms
are or aren’t able to overcome normative disagreement.
There are large
game-theoretic literatures on bargaining (for a review see Muthoo ([2001](#bib.bib33))).
This includes a long tradition of work on
cooperative
bargaining solutions, which tries to
establish normative principles for
deciding among mutually beneficial
agreements (Thomson, [1994](#bib.bib44)).
We will draw on this work in our
discussion of normative (dis)agreement below.
Lastly, the class of norm-adaptive policies we develop in
Section [5](#S5) — $\texttt{amTFT}(\mathcal{W})$ —
can be seen as a more general variant of an approach proposed by
Boutilier ([1999](#bib.bib7)) for coordinating in pure coordination games. As
it implicitly searches for overlap in the agents’ sets of allowed welfare functions,
it is also similar to
Rosenschein and Genesereth ([1988](#bib.bib39))’s
approach to reaching agreement in general-sum games via sets of proposals by
each agent.
3 Coordination, bargaining and normative disagreement
------------------------------------------------------
We are interested in a setting in which
multiple actors (“principals”)
train machine learning systems offline, and
then deploy them into an environment
in which they interact.
For instance,
two different companies might
separately train systems
to negotiate on their behalf and deploy them without explicit coordination
on deployment.
In this section, we develop a taxonomy of environments that these
agents might face, and relate these different types of
environments to the difficulty of bargaining.
### 3.1 Preliminaries

We formalize multi-agent environments as partially observable stochastic games (POSGs). For simplicity we assume two players, $i=1,2$. We will index player $i$'s counterpart by $-i$. Each player has an action space $\mathcal{A}_i$. There is a space $\mathcal{S}$ of states $S^t$ which evolve according to a Markovian transition function $P(S^{t+1} \mid S^t, A_1^t, A_2^t)$. At each time step, each player sees an observation $O_i^t$ which depends on $S^t$. Thus each player has an accumulating history of observations $\mathcal{H}_i^t = \{O_1^v, A_1^v\}_{v=1}^t$. We refer to the set of all histories for player $i$ as $\mathcal{H}_i = \cup_{t=1}^{\infty} \mathcal{H}_i^t$ and assume for simplicity that the initial observation history is fixed and common knowledge: $h^0_1 = h^0_2 \equiv h^0$. Finally, principals choose among policies $\pi_i : \mathcal{H}_i \rightarrow \Delta(\mathcal{A}_i)$, which we imagine as artificial agents deployed by the principals. We will refer to policy profiles generically as $\pi \in \Pi := \Pi_1 \times \Pi_2$.

Each player has a reward function $r_i$, such that $r_i(S^t, A_1^t, A_2^t)$ is their reward at time $t$. We define the value to player $i$ of policy profile $\pi$ starting at history $h_i^t$ as $V_i(h_i^t, \pi) = \mathbb{E}_{\pi}\left[\sum_{v=t}^{\infty} \gamma^{v-t} r_i(S^v, A_1^v, A_2^v) \mid H_i^t = h_i^t\right]$, where $\gamma \in [0,1)$ is a discount factor, and the value of a policy profile to player $i$ as $V_i(\pi) = V_i(h^0, \pi)$. A payoff profile is then a tuple $(V_1(\pi), V_2(\pi))$.

We say that $\pi$ is a (Nash) equilibrium of a POSG if $\pi_i \in \operatorname{arg\,max}_{\pi_i' \in \Pi_i} V_i(h^0, \pi_i', \pi_{-i})$ for $i=1,2$. We say that $\pi$ is Pareto-optimal if for $i=1,2$ and $\pi' \in \Pi$ we have that $V_i(\pi') > V_i(\pi)$ implies $V_{-i}(\pi') < V_{-i}(\pi)$.
### 3.2 Coordination problems
We define a coordination problem as a game involving multiple Pareto-optimal equilibria
(cf. Zhang and Hofbauer [2015](#bib.bib47)), which require some coordinated action to achieve.
That is, if the players disagree about which equilibrium they are playing, they will not reach a Pareto-optimal outcome.
A pure coordination problem is a game
in which there are multiple Pareto-optimal
equilibria over which agents have identical interests.
Although agents may still experience difficulties in pure coordination games, for instance due to a noisy communication channel, they are made easier by the fact that
principals are indifferent between the Pareto-optimal equilibria.
### 3.3 Bargaining problems and normative disagreement
We define a bargaining problem (BP) to be a
game in which there are multiple
Pareto-optimal equilibria over which the principals
have conflicting preferences.
These equilibria represent more than one way to collaborate for mutual benefit, or put in another way, for sharing a surplus.
Thus a bargaining problem is a mixed-motive coordination problem, in which there is conflicting interest between Pareto-optimal equilibria and common interest in reaching a Pareto-optimal equilibrium.
We can distinguish between BPs which are symmetric and
asymmetric games. A 2-player game is symmetric if for any
attainable payoff profile
$(a,b)$ there exists a profile $(b,a)$.
The reason this distinction is important is
that all (finite) symmetric games have a symmetric Nash
equilibrium (Nash, [1990](#bib.bib35)); thus symmetric games
have a natural set of focal points (Schelling, [1958](#bib.bib41)) for aiding coordination in
mixed-motive contexts, while asymmetric BPs may not.
Similarly, given a chance to play a correlated
equilibrium (Aumann, [1974](#bib.bib3)),
agents in a symmetric BP could play according to a
correlated equilibrium which randomizes
using a symmetric distribution
over Pareto-optimal payoff profiles.
Figure [3.3](#S3.SS3 "3.3 Bargaining problems and normative disagreement") displays
the payoff matrices of three coordination games:
Pure Coordination, and two variants
of Bach or Stravinsky (BoS), one of which is a
symmetric BP
and one of which is an asymmetric BP.
Pure Coordination
is not a BP because it is not a mixed-motive game as the players only care about playing the same action.
On the other hand, in the case of symmetric BoS the players do have conflicting interest, however, there is a correlated equilibrium
– tossing a commonly observed fair coin –
that is
intuitively the most reasonable way of coordinating: It both maximizes the total payoff and offers each player the same expected reward.
(a) Payoff matrices for Pure Coordination (left), BoS (middle), Asymmetric BoS (right).

*Pure Coordination:*

| | B | S |
| --- | --- | --- |
| B | 1, 1 | 0, 0 |
| S | 0, 0 | 1, 1 |

*BoS:*

| | B | S |
| --- | --- | --- |
| B | 3, 2 | 0, 0 |
| S | 0, 0 | 2, 3 |

*Asymmetric BoS:*

| | B | S |
| --- | --- | --- |
| B | 4, 1 | 0, 0 |
| S | 0, 0 | 2, 2 |
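As a concrete illustration (a sketch added for this excerpt, not from the paper), here is a brute-force check that BB and SS are the only pure-strategy Nash equilibria of the Asymmetric BoS matrix above:

```python
ACTIONS = ["B", "S"]
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("B", "B"): (4, 1), ("B", "S"): (0, 0),
    ("S", "B"): (0, 0), ("S", "S"): (2, 2),
}

def is_pure_equilibrium(a1, a2):
    # Neither player can gain by unilaterally switching actions.
    u1, u2 = PAYOFFS[(a1, a2)]
    row_best = all(u1 >= PAYOFFS[(alt, a2)][0] for alt in ACTIONS)
    col_best = all(u2 >= PAYOFFS[(a1, alt)][1] for alt in ACTIONS)
    return row_best and col_best

print([a for a in PAYOFFS if is_pure_equilibrium(*a)])  # [('B', 'B'), ('S', 'S')]
```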
To
develop a better intuition for
the sense in which equilibria can be more or less reasonable, consider a BoS with
an
extreme asymmetry, with equilibrium payoffs (15, 10) and (1, 11). Even though each of these equilibria is Pareto-optimal, the latter
seems unreasonable or uncooperative: it yields a lower total payoff, more inequality, and lowers the reward of the worst-off player in the equilibrium. To formalize this intuition, we characterize the reasonableness of a Pareto-optimal payoff profile in terms of the extent to which it optimizes welfare functions: we can say that (1, 11) is unreasonable because there is no (impartial, see below) welfare function that would prefer it.
Different welfare functions with different properties have been introduced in the literature (see Appendix [A](#A1 "Appendix A Welfare functions") for an overview), but two uncontroversial properties of a welfare function are
*Pareto-optimality* (i.e., its optimizer
should be Pareto-optimal) and
*impartiality* (i.e., the welfare of a policy profile
should be invariant to permutations of player
indices). (The requirement of impartiality is also called symmetry in the welfare
economics literature, or anonymity in the social choice literature;
Campbell and Fishburn, [1980](#bib.bib9).)
From the latter property we can observe that the intuitively reasonable choice of playing the correlated equilibrium with a fair correlation device in the case of symmetric games is also the choice which all impartial
welfare functions recommend, provided that it is possible for the agents to play a correlated equilibrium.
By contrast, in the asymmetric BoS from Figure [3.3](#S3.SS3 "3.3 Bargaining problems and normative disagreement") we see that playing BB
maximizes utilitarian welfare $w^{\mathrm{Util}}(\pi)=V_1(\pi)+V_2(\pi)$, whereas playing SS maximizes the egalitarian welfare $w^{\mathrm{Egal}}(\pi)=\min\{V_1(\pi),V_2(\pi)\}$
subject to $\pi$ Pareto-optimal.
Throwing a correlated fair coin to choose between the two would lead to an expected payoff that is
optimal with respect to the Nash welfare $w^{\mathrm{Nash}}(\pi)=V_1(\pi)\cdot V_2(\pi)$. Each of
these different equilibria has a normative principle to motivate it.
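As a further illustration (again a sketch added for this excerpt, not from the paper), the three welfare functions can be evaluated directly on the two pure equilibria of Asymmetric BoS and on the expected payoffs of the fair-coin correlated equilibrium:

```python
def utilitarian(v):
    return v[0] + v[1]

def egalitarian(v):
    return min(v)

def nash_welfare(v):
    return v[0] * v[1]

outcomes = {
    "BB": (4, 1),
    "SS": (2, 2),
    "fair coin over BB/SS": (3.0, 1.5),  # expected payoffs of the correlated equilibrium
}
for name, v in outcomes.items():
    print(name, utilitarian(v), egalitarian(v), nash_welfare(v))
# BB maximizes utilitarian welfare (5), SS maximizes egalitarian welfare (2),
# and the fair coin maximizes Nash welfare (4.5), matching the text above.
```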
In the best case, all principals agree on the same welfare function as a common ground for coordination in asymmetric BPs. However, the principals may have reasonable differences with respect to which welfare function they perceive as fair,
and so they may train their systems to
optimize different welfare functions, leading to coordination failure
when the systems interact after deployment.
In cases where agents were independently-trained according to inconsistent welfare functions, we will say that there
is normative disagreement.
There may be different
degrees of normative disagreement. For instance, the difference
$|\max_{\pi}w^{\mathrm{Util}}(\pi)-\max_{\pi}w^{\mathrm{Egal}}(\pi)|$
differs across games.
To summarize, we relate the difficulty of coordination problems to the concept of welfare functions: In pure coordination problems, they are not needed. In symmetric bargaining problems, they all point to the same equilibria. And in asymmetric bargaining problems, they can serve to filter out intuitively unreasonable equilibria, but leave the possibility of normative disagreement between reasonable ones. This makes normative disagreement a critical challenge for bargaining.
In the remainder
of the paper, we will focus on asymmetric bargaining problems
for this reason.
|
dd8b7bd6-a4ce-4c28-be55-2c08266a2e46 | trentmkelly/LessWrong-43k | LessWrong | Anthropics and Fermi
tl;dr There is no well-defined "probability" of intelligent life in the universe. Instead, we can use proper scoring functions to penalise bad probability estimates. If we average scores across all existent observers, we get SSA-style probability estimates; if we sum them, we get SIA-style ones.
When presenting "anthropic decision theory" (the anthropic variant of UDT/FDT), I often get the response "well, that's all interesting, but when we look out to the stars with telescopes, probes, what do we really expect to see?" And it doesn't quite answer the question to say that "really expect to see" is incoherent.
So instead of evading the question, let's try and answer it.
Proper scoring rules
Giving the best guess about the probability of X is the same as maximising a proper scoring rule conditional on X. For example, someone can be asked to name a 0≤p≤1, and they will get a reward of −(I_X − p)^2, where I_X is the indicator variable which is 1 if X happens and 0 if it doesn't.
Using a logarithmic proper scoring rule, Wei Dai demonstrated that an updateless agent can behave like an updating one.
So let's apply the proper scoring rule to the probability that there is an alien civilization in our galaxy. As above, you guess p, and are given −(1−p)^2 if there is an alien civilization in our galaxy, and −p^2 if there isn't.
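A minimal sketch (mine, not from the post) of why reporting the true probability maximises the expected quadratic score; the value of q below is a made-up placeholder:

```python
def expected_score(p, q):
    # With probability q, X happens and the score is -(1 - p)^2;
    # with probability 1 - q it doesn't and the score is -p^2.
    return q * -((1 - p) ** 2) + (1 - q) * -(p ** 2)

q = 0.3  # hypothetical "true" probability of an alien civilization in our galaxy
best_p = max((k / 100 for k in range(101)), key=lambda p: expected_score(p, q))
print(best_p)  # 0.3 -- the honest report maximizes expected score
```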
Summing over different agents
But how do we combine estimates from different agents? If you're merely talking about probability - there are several futures you could experience, and you don't know which yet - then you simply take an expectation over these.
But what about duplication, which is not the same as probability? What if there are two identical versions of you in the universe, but you expect them to diverge soon, and maybe one will find aliens in their galaxy while the other will not?
One solution is to treat duplication as probability. If your two copies diverge, that's exactly the same as if there was a 50-50 split into possible future |
0298ab75-5a74-49ee-ad7c-51b341ab0356 | trentmkelly/LessWrong-43k | LessWrong | Speculative inferences about path dependence in LLM supervised fine-tuning from results on linear mode connectivity and model souping
TL;DR: I claim that supervised fine-tuning of the existing largest LLMs is likely path-dependent (different random seeds and initialisations have an impact on final performance and model behaviour). This is based on the fact that when fine-tuning smaller LLMs, models pretrained close to convergence produce fine-tuned models with similar mechanisms, while models whose pretraining ends far from convergence do not - and current large LLMs are very far from convergence at the end of training. This is supported by linking together existing work on model souping, linear mode connectivity, mechanistic similarity and path dependence.
Epistemic status: Written in about two hours, but thought about for longer. Experiments could definitely test these hypotheses.
Acknowledgements: Thanks to Ekdeep Singh Lubana for helpful comments and corrections, and discussion which lead to this post. Thanks also to Jean Kaddour, Nandi Schoots, Akbir Khan, Laura Ruis and Kyle McDonell for helpful comments, corrections and suggestions on drafts of this post.
Terminology
* Model souping is the procedure of taking a pretrained model, fine-tuning it with different hyperparameters and random seeds on the same task, and then averaging the parameters of all the networks. This gets better results on both in-distribution and out-of-distribution testing in Computer Vision when fine-tuning a large-scale contrastively-pretrained transformer or CNN image model on ImageNet-like tasks.
* (Linear) mode connectivity (LMC) between two models on a task means that any (linear) interpolation in parameter space between the two models achieves a loss no higher than that of either model. (A toy sketch of souping and interpolation follows this list.)
* A training process is path independent if it always reaches (roughly) the same outcome regardless of irrelevant details or randomness (for example network initialisation or data ordering in supervised learning, or sampling from a policy in reinforcement learning). A training process is path dependent if
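A toy sketch (not from the post) of the two operations defined above, treating models as flat parameter vectors; it assumes NumPy and is meant only to pin down the definitions:

```python
import numpy as np

def soup(models):
    # Model souping: average the parameters of several fine-tuned models
    # that all started from the same pretrained checkpoint.
    return np.mean(np.stack(models), axis=0)

def linear_path(theta_a, theta_b, num_points=11):
    # Linear mode connectivity is checked by evaluating the loss at each
    # point on the straight line between two models' parameter vectors.
    return [(1 - t) * theta_a + t * theta_b for t in np.linspace(0.0, 1.0, num_points)]
```
|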
4c84da17-d898-49de-91c5-96f1859aa85b | trentmkelly/LessWrong-43k | LessWrong | [LINK] blog on cryonics by someone who freezes things in a cell bio lab
http://www.strike-the-root.com/51/walker/walker12.html
Came across it pretty randomly, and I found it quite intriguing. Cryonics is "routine" for human embryos, not far-fetched for humans at all. Makes the whole thing seem potentially very reasonable (and this from someone who hasn't signed up and doesn't plan to). |
b40a52ac-1ce2-456b-bdc5-f3a099a239a8 | trentmkelly/LessWrong-43k | LessWrong | Emergency Central Planning
Central planning generally hasn't worked out very well for countries, and essentially the whole world has moved over to markets. So why are there strong calls for governments to run disaster response, instead of leaving that to markets? For example, governments directly:
* distributing food and water after a hurricane or earthquake.
* building out and running new covid-19 hospital facilities, instead of agreeing to fund treatment.
* negotiating with foreign factories to produce more masks and arranging importation, instead of announcing they'll pay some elevated price to anyone who can supply them.
Normally, everyone is the best authority on what they need. We have a wide range of requirements and preferences, and competition means there are high rewards for resolving mismatches between what people want and what's available. People are also normally able to project their needs out into the future, because the world changes relatively slowly.
Once a disaster strikes, both of these invert. Basic needs become much more important, and these vary a lot less between people: we need food, water, shelter. In the current crisis we're very short on N95 masks, and while doctors probably do have opinions on Ambu vs Teleflex, mostly we just need a lot more of them, quickly. Additionally, people are dealing with something where they don't have much experience, and aren't going to be as good at planning around.
Then you can potentially get the upside of having professionals whose job it is to work full time on preparation, figuring out what disasters are likely and how we're going to want to respond. This is not something the US has been handling well, but perhaps when this is over it will be easier to communicate the need for funding this kind of "we probably don't need it, but if we do it's super important" work.
I don't think this is the only way to handle disasters: if law and custom allow prices to freely move in response to changes in what's available and what |
90744c2a-813b-4df4-965a-4f15f04cf934 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Vancouver Rationalists: LAN Party + Board Games + Discussion at Waves Cafe: September 10th 3:00 to 5:00PM
Discussion article for the meetup : Vancouver Rationalists: LAN Party + Board Games + Discussion at Waves Cafe: September 10th 3:00 to 5:00PM
WHEN: 10 September 2011 03:00:00PM (-0800)
WHERE: Waves Coffee House, 100-900 Howe St. Vancouver, BC V6Z 2M4
Yet another meetup at Waves. This time, on a Saturday; hopefully three days of school have been nice to the students, and there is time enough for... rationality.
Currently, what I have in mind is a LAN and Games party (details in link). However, some may consider that too tangential to the cause. As a result, I propose the following prompt:
Discuss, and examine, what you think your desires are. Are they truly your desires? Do you desire to desire them, and/or are they simply "signals"?
Bring interested friends and beloveds, enemies and nemesises (nemesii?).
Contact me at kenneth(dot)bruskiewicz(at)gmail(dot)com for questions, details, or Google Group membership.
Discussion article for the meetup : Vancouver Rationalists: LAN Party + Board Games + Discussion at Waves Cafe: September 10th 3:00 to 5:00PM |
4b2eb28f-f566-4113-8bcd-18e54d8153a3 | StampyAI/alignment-research-dataset/blogs | Blogs | 2022-23 New Year review
This is an annual post reviewing the last year and setting goals for next year. Overall, this was a reasonably good year with some challenges (the invasion of Ukraine and being sick a lot). Some highlights in this review are improving digital habits, reviewing sleep data from the Oura ring since 2019 and calibration of predictions since 2014, an updated set of Lights habits, the unreasonable effectiveness of nasal spray against colds, and of course baby pictures.
2022 review
-----------
### Life updates
I am very grateful that my immediate family is in the West, and my relatives both in Ukraine and Russia managed to stay safe and avoid being drawn into the war on either side. In retrospect, it was probably good that my dad died in late 2021 and not a few months later when Kyiv was under attack, so we didn’t have to figure out how to get a bedridden cancer patient out of a war zone. It was quite surreal that the city that I had visited just a few months back was now under fire, and the people I had met there were now in danger. The whole thing was pretty disorienting and made it hard to focus on work for a while. I eventually mostly stopped checking the news and got back to normal life with some background guilt about not keeping up with what’s going on in the homeland.
### AI alignment
My work focused on threat models and inner alignment this year:
* Made an overview talk on [Paradigms of AI alignment: components and enablers](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers/) and gave the talk in a few places.
* Coauthored [Goal Misgeneralization: why correct rewards aren’t enough for correct goals](https://arxiv.org/abs/2210.01790) paper and the associated [DeepMind blog post](https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards)
* Did a survey of [DeepMind alignment team opinions on AGI ruin arguments](https://www.lesswrong.com/posts/qJgz2YapqpFEDTLKn/deepmind-alignment-team-opinions-on-agi-ruin-arguments), which received a lot of interest on the alignment forum.
* Wrote a post on [Refining the Sharp Left Turn threat model](https://vkrakovna.wordpress.com/2022/11/25/refining-the-sharp-left-turn-threat-model/)
* Contributed to DeepMind alignment posts on [Clarifying AI x-risk](https://www.lesswrong.com/s/4iEpGXbD3tQW5atab/p/GctJD5oCDRxCspEaZ) and [Threat model literature review](https://www.lesswrong.com/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review)
* Coauthored a prize-winning submission to the [Eliciting Latent Knowledge contest](https://www.lesswrong.com/posts/zjMKpSB2Xccn9qi5t/elk-prize-results): [Route understanding through the human ontology](https://www.lesswrong.com/posts/QrhCsuaEmSLzc8NQ4/elk-contest-submission-route-understanding-through-the-human).
### Health
**Physical health.** I’ve been sick a lot this year – 6 colds and one bronchitis since Daniel started nursery in June, plus one cold earlier in the year. Had covid in April, thankfully a mild case with no obvious long-term effects. I also had two short bouts of covid-like symptoms (fever, muscle aches and fatigue) in May and October that lasted about 2 days each. I tested negative for covid both times, and recovered too quickly for flu, so I’m pretty confused about what this was, maybe a bizarre form of long covid?
Being frequently sick was pretty depressing and demotivating, and I put some effort into decreasing the rate of catching colds from Daniel. I tried improving hand hygiene and not sharing food with Daniel, which had a lot of overhead and didn’t seem to do much. I also experimented with various supplements, starting with vitamin C and zinc, which didn’t seem to help much, and then added beta glucans and broncho-vaxom, which possibly helped but I’m not sure. The only thing that seemed clearly effective was a nasal spray called “dual defense”, which seemed to make any symptoms go away whenever I applied it. This made the last (probably) cold I had mild enough to be barely perceptible (not included in the number of colds above).
**Sleep**. Similarly to last year, I consistently slept for 7 hours at night on average, with a standard deviation of 1 hour. The rate of insomnia was 10% of nights, better than last year (20% of nights). I was awake for an average of 0.6 hours (40 minutes) each night. (As usual, all the sleep metrics are excluding jetlag.)

I have now added some Oura ring data to my life tracking database as well. The ring provides a score for my sleep each night and “readiness” for the day. These scores are on a scale from 0 to 100, where presumably a score of 100 means you’re completely refreshed and ready to move mountains and 0 means you’re about to drop dead (on both of these dimensions, the highest score I’ve ever had was around 90 and the lowest was around 30). These scores take into account the amount of sleep, frequency of waking up, heart rate and body temperature at night, and activity levels. The ring usually detects when I’m sick, assigns a low readiness score and suggests to take a rest day. I didn’t wear the ring during the day between March 2020 and October 2021, which resulted in much lower activity scores, but I’m not sure how this impacted the sleep and readiness scores.
One interesting thing I noticed is that while the amount of sleep per night has stayed level at 7 hours in the past few years, my sleep score has been trending upward. I switched to the Generation 3 Oura ring in Jan 2021, which is supposed to measure sleep quality more accurately, so this could also be an artifact of the change in measurement rather than an actual improvement in sleep.
[](https://vkrakovna.files.wordpress.com/2022/12/image-8.png)
The readiness score shows no upward trend – it’s averaged around 70 the whole time.
[](https://vkrakovna.files.wordpress.com/2022/12/image-9.png)
**Mental health.** Better than last year, but not as much better as I hoped. There was a definite improvement in the first half of the year. In January, I shifted my meditation practice to self-love meditation, which was helpful for a while but seems to be wearing off (maybe I need to find some new recordings on Insight Timer…).
There were 6 episodes of particularly bad mental states, all in the second half of the year. Being sick a lot in the second half of the year was a major factor – I often found myself judging my body as weak, being angry at my immune system, or judging myself for not protecting myself enough when Daniel was sick. I think the self-judgment also led to a hopeless mindset where I felt like I tried everything feasible to avoid getting sick when I actually had not, e.g. I later tried the nasal spray and it seemed to help a lot.
One improvement in mental health this year was a decreasing rate of night terrors (waking up startled soon after falling asleep) – I had 13 recorded this year, and 37 recorded the previous year. This might have something to do with Daniel getting older and me having less subconscious worry about him falling or getting trapped under the blanket or whatever. However, I developed a new anxiety symptom after he started walking and bumping into things and making lots of mess – I often noticed myself holding my breath when taking care of him. I try to get back to normal breathing when I notice it, but it tends to come back when I’m not paying attention. It’s been a bit better lately, but still not a solved problem.
### Parenting
**Breastfeeding.** I continued to breastfeed Daniel this year, with decreasing frequency as he asks for it less often. I think at this point there is no more milk, and he is just looking for comfort when he asks for a feed. I never really figured out a plan for how to stop breastfeeding, and I’m still not sure what the endgame for this looks like.
**Potty training.** We transitioned Daniel out of diapers using the [“oh crap”](https://books.google.co.uk/books/about/Oh_Crap_Potty_Training.html) method over a long weekend in May, which went pretty well. He is good at using the potty when prompted, but took him a while to learn to ask to go to the potty – he’s getting better at this now but we still need to prompt him a lot. He usually has a few accidents a week, which seems ok. These days he doesn’t wear a diaper during the day except for naps and long travels.
**Sleep.** Daniel usually sleeps from around 9-10pm until around 6-7am, with an average of 0.3 wakeups per night (excluding jetlag). He had a sleep regression in November (which seems to be common around 2 years of age), so he started waking up more and being more difficult to put back to sleep. It’s interesting to compare the data on wakeups and night feeds (12-6am) – I often managed to put him back to sleep without breastfeeding during most of the year but it didn’t work anymore during the regression.

**Childcare.** Daniel started full-time nursery in June (it’s open until 6pm, which works great with our 10-6 work hours). He also spends Sunday afternoons and evenings with his nanny (who used to care for him full time after nursery), which gives us some time together to do our check-in with each other, go climbing or relax in the sauna, though often a bunch of this time block gets eaten by logistics.
**Taking turns.** Janos and I alternate taking care of Daniel in the mornings, since neither of us is a morning person (though we’ve shifted towards an earlier sleep schedule since having a kid). Starting in June, we also introduced a schedule where each of us gets one evening at the office per week while the other one takes care of Daniel. These arrangements were quite helpful for giving me more sleep, productive time and alone time, and setting up regular time blocks for Janos to be alone with Daniel.
**Languages.** Daniel is pretty talkative in English and Russian (still working on the Hungarian though). He knows to address me and my relatives in Russian and other people in English. He is starting to say long words and short sentences, and recently got into the habit of reciting his favorite songs and stories from memory in both languages. It’s not always clear which language he is speaking, which is a bit confusing.
### Effectiveness
**Lights.** I continued using the Lights spreadsheet for tracking daily habits that I started using in 2021. I’ve stopped tracking a few habits and started some new ones, but overall the set of habits mostly stayed the same – here are the habits that I kept from last year:
* Life tracking
* Make a list of intentions (in the todo notebook)
* Ask myself what I want today
* Meditation
* Exercise (changed to “today or yesterday”)
* Leg & shoulder stretches
* At least 2 hours of deep work (if working)
* Braindump / journaling (at least 1 sentence)
* Reading
* Appreciate a thing I did today
* Exchange appreciation with Janos
* Go to bed by 11:30pm
Habits from last year that I dropped:
* Emotional release practice (mostly superceded by self-love meditation)
* Avoid processed sugar (doing this anyway, don’t need to track)
* Use work cycles (doing this anyway in the form of time tracking)
* Check internal dashboard (didn’t resonate / wasn’t useful)
* Go outside (more automatic post-pandemic)
* Notice when I am picking my nose (didn’t work well)
New habits I added this year:
* Fill out lights (makes it easier to see which days were filled out retroactively, was intended to motivate me to fill out lights every day but that didn’t work)
* Negative visualization on making mistakes (helps with self-judgment)
* Practice [effective rest](http://agentyduck.blogspot.com/2015/07/effective-rest-days.html) (breaks during the day where I pay attention to what I want)
* Take supplements to avoid / mitigate colds
* Use eye drops (to address dryness from using contact lenses)
I did about 70% of the lights on an average week. The most difficult lights were deep work, going to bed and reading. The main failure mode with Lights was not filling them in on some days (usually weekends), which resulted in doing fewer of the habits on those days. I have a solid habit of filling out the lights at the office, but I need to have a more reliable time block to do this on weekends (probably after lunch during Daniel’s nap).
**Time tracking.** In June, I switched from using [work cycles](https://www.ultraworking.com/cycles) to doing time tracking during work hours. I realized that I wasn’t doing much of the built-in reflection in work cycles, and was mostly using them as a less systematic time tracking setup. The time tracking shows that in an average work week since June, I spent 27 hours on work activities: 9 hours in (non-research) meetings, 7 hours on research, 4 hours on reading, 3 hours on comms (giving feedback on docs, giving talks, etc), 2 hours on planning and 2 hours on admin. I also spent 10 hours on non-work activities: 6 hours on self-care (exercise classes, therapy, meditation, naps), 1 hour on parenting, and 3 hours on random stuff.
The easiest way to improve on this is to increase work hours – I can add another office night (every 2 weeks), and experiment with going in to work early on mornings when Janos takes care of Daniel. I also hope to spend less time being sick next year, with the more effective supplements and Daniel hopefully bringing home fewer germs after the first half-year at nursery.
**Deep work**. I did 363 hours of deep work (1.7 per work day), compared to 311 hours of deep work in 2021 (1.78 per work day). This was more than last year (and resulted in a lot more output), but still short of the pre-parenthood baseline.
[](https://vkrakovna.files.wordpress.com/2022/12/image.png)
This was the first full year (with no leave) since 2019. The number of workdays was a bit lower than the expected 225 workdays in a normal year (260 weekdays minus 10 holidays and 25 vacation days), which was mostly due to sick days for myself or Daniel. The rate of deep work per work day was lower than 2020-2021, mostly due to going to conferences again (which are usually work days with no deep work).
[](https://vkrakovna.files.wordpress.com/2022/12/image-5.png)
For the purpose of this summary, work days include weekends where I did at least 2 hours of deep work. There were 14 weekend workdays in 2019 and around 3 on each subsequent year (unsurprisingly, having a kid decreased my ability to work on weekends).
**Digital habits.** In the spring, I read the Digital Minimalism book and felt inspired to set up better systems for intentional use of technology. The book recommends doing a digital declutter, where you stop all technology use that is not absolutely necessary for a month and then add some of it back in a limited capacity. This seemed a bit extreme and I couldn’t get myself to do it, but I made a list of necessary and optional technologies and how I would like to use them (which was useful in itself). I implemented various measures to cut down unnecessary technology use, which were generally effective:
* Using grayscale on my phone by default to make it less visually stimulating. After I got used to grayscale, the regular full-color mode started to seem too bright and overwhelming, so I don’t really want to use it unless I’m looking at photos or watching videos with my kid.
* Using an app (Actuflow) that asks to enter my intention whenever I unlock the phone. Together with grayscale, this reduced the number of unlocks from 70-100 per day to 20-40 per day. This includes using the phone to watch videos with Daniel, so the number of phone unlocks just for myself is lower than that.
* I got into more of a habit of going to places without my phone sometimes, e.g. going for walks, getting lunch, or working in the library. I got a wristwatch to be able to check time without looking at my phone, which made going without it a lot easier.
* I muted all channels on work slack except those specific to my team and projects.
* I installed News Feed Eradicator for Facebook and Twitter on my computer browser (which are the only social media I use). I still check the news feeds on my phone sometimes, but not that often since it’s less convenient in grayscale and in the phone browser (I don’t have the apps installed). It would be great to restrict the Facebook feed only to life updates like someone finishing their thesis or having a baby, but sadly this option doesn’t seem to exist (probably by design).
* I also wanted to use the Inbox When Ready extension that hides the inbox by default, but it is unfortunately not allowed on my work devices, so I compromised by defaulting to the (usually empty) Starred view of my inbox.
### Travel
In March, we went to the Bahamas for an AI safety workshop. To enable both of us to attend the workshop, we imported Janos’s dad and his partner from Toronto to hang out with Daniel on the beach in the meantime.

We spent two weeks in August in a cottage on Manitoulin Island with family. Daniel was eager to hike with the big hiking stick (or better yet, two of them). He was less into swimming in the lake than last year, except when jumping off a paddleboard. He was also really scared of the resident squirrel at the cottage for some reason, and ran inside whenever he heard it chirping.

In September, we caught the tail end of the hiking season in the Dolomites – it was chilly but pleasantly uncrowded. Daniel was a trooper as usual and learned a lot of new Russian words hanging out with his grandma. We were hoping to do some via ferrata climbing but the weather was too wet for that.

We went to Toronto for the winter holidays and got a couple of days of skiing before all the snow melted. Daniel took some time to get used to the cold weather, but he really enjoyed throwing snowballs.

### Fun stuff
* I read 10 books this year: Three Body Problem, The Gifts of Imperfection, How to Talk so (Little) Kids will Listen (and Listen so Kids will Talk), Oh Crap Potty Training, Hunt Gather Parent, Bacteria to Bach and Back, Messy, Digital Minimalism, and Decisive.
* Janos and I took Daniel to the “In with the spiders” exhibit at the London zoo, where you can see big (and harmless) spiders up close. Daniel loves spiders so this was a treat for him.
* We did some aerial silks in the park, for the first time since having Daniel (I still remembered a few tricks). It took a surprisingly long time to find a good tree to rig the silks on (with a big sturdy branch at the right height).
* We visited the Bay for the first time since before the pandemic, leaving Daniel in Toronto with his grandma to get a break and spare him the jetlag.
* We had a photoshoot with a professional photographer – turns out, Daniel has a very photogenic smile :).

2022 prediction outcomes
------------------------
**Resolutions**
* Author or coauthor 4 or more AI safety writeups (70%) – yes (8 writeups)
* Meditate on at least 230 days (70%) – yes (272 days)
* ~~At least 450 deep work hours~~ (70%) – no (363 hours)
* ~~Do 3 consecutive chinups~~ (60%) – no (2 chinups)
* Avoid processed sugar at least 6 months of the year (60%) – yes (the whole year except a few days)
**Predictions**
* ~~I will not catch covid this year~~ (60%) – no (got it once)
* I will write at least 3 blog posts (2 last year) (60%) – yes (3 posts)
* I will read at least 5 books (70%) – yes (10 books)
* Daniel will be potty-trained by the end of the year (out of diapers when awake) (70%) – yes (since May)
**Calibration**
This year was pretty good:
* 60%: 2/4 correct
* 70%: 4/5 correct
Calibration over all annual reviews (a small tally sketch follows the chart below):
* Overall my predictions tend to be overconfident (the green line is below the blue line, which represents perfect calibration).
* I was overconfident in 2014-16, underconfident in 2017-19 (probably to compensate), and went to being overconfident in 2020-22.
[](https://vkrakovna.files.wordpress.com/2023/01/chart-2.png)
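For concreteness, a rough sketch (mine, not from the post) of how such per-bucket tallies can be computed from (stated probability, outcome) pairs; the data below are placeholders in the same format as the lists above:

```python
from collections import defaultdict

# Hypothetical (probability, outcome) pairs standing in for the real predictions.
predictions = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

for p in sorted(buckets):
    hits, total = sum(buckets[p]), len(buckets[p])
    print(f"{int(p * 100)}%: {hits}/{total} correct (observed frequency {hits / total:.2f})")
```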
2023 goals and predictions
--------------------------
**Goals**
* Meditate on at least 250 days (272 last year) (80%)
* At least 400 deep work hours (363 last year) (60%)
* Write at least 4 blog posts (3 last year) (70%)
**Predictions**
* I will avoid processed sugar for at least 10 months of the year (80%)
* I will read at least 7 books (80%)
* I will catch at most 4 colds (60%)
* Daniel will be potty-trained for the night by August (70%)
Past new year reviews: [2021-22](https://vkrakovna.wordpress.com/2022/01/04/2021-22-new-year-review/), [2020-21](https://vkrakovna.wordpress.com/2021/01/03/2020-21-new-year-review/), [2019-20](https://vkrakovna.wordpress.com/2020/01/09/2019-20-new-year-review/), [2018-19](https://vkrakovna.wordpress.com/2019/01/01/2018-19-new-year-review/), [2017-18](https://vkrakovna.wordpress.com/2018/01/07/2017-18-new-year-review/), [2016-17](https://vkrakovna.wordpress.com/2017/01/09/2016-17-new-year-review/), [2015-16](https://vkrakovna.wordpress.com/2015/12/31/2015-16-new-year-review/), [2014-15](https://vkrakovna.wordpress.com/2015/01/11/2014-15-new-year-review/). |
ad7c23be-47ba-4414-8ec0-105d3935e5c4 | trentmkelly/LessWrong-43k | LessWrong | SI's Summer 2012 Matching Drive Ends July 31st
The Singularity Institute's summer 2012 matching drive ends on July 31st! Donate by the end of the month to have your gift matched, dollar for dollar.
As of this posting, SI has raised $70,000 of the $150,000 goal.
The announcement says:
> Since we published our strategic plan in August 2011, we have achieved most of the near-term goals outlined therein... In the coming year, the Singularity Institute plans to do the following:
> * Hold our annual Singularity Summit, this year in San Francisco!
> * Spin off the Center for Applied Rationality as a separate organization focused on rationality training, so that the Singularity Institute can be focused more exclusively on Singularity research and outreach.
> * Publish additional research on AI risk and Friendly AI.
> * Eliezer will write an "Open Problems in Friendly AI" sequence for Less Wrong. (For news on his rationality books, see here.)
> * Finish Facing the Singularity and publish ebook versions of Facing the Singularity and The Sequences, 2006-2009.
> * And much more! For details on what we might do with additional funding, see How to Purchase AI Risk Reduction.
>
> If you're planning to earmark your donation to CFAR (Center for Applied Rationality), here's a preview of what CFAR plans to do in the next year:
> * Develop additional lessons teaching the most important and useful parts of rationality. CFAR has already developed and tested over 18 hours of lessons so far, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.
> * Run immersive rationality retreats to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you |
df73c5fd-09c6-4a5b-8f8e-e065d305f62e | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2009
My ELK submission was labelled under 'Strategy: train a reporter that is useful for another AI'. This is definitely a fair description, though the AI it needs to be useful to is itself - the reporter is essentially internalized. I also agree that the proposed counterexample, hiding information in what seems like human-comprehensible speech, is the biggest flaw. Nonetheless I think my proposal has enough additional detail and scope for extension that it's worth fleshing out in its own post - so here we are. Some of the responses to counterexamples below also go beyond my original proposal. For anyone interested, here is my original proposal (Google Doc), which contains the same idea in somewhat less generality.

In this post I'll first flesh out my proposal in slightly more general terms, and then use it to try and continue the example/counterexample dialogue on ELK. I know there were a number of proposals in this area and I'd be very interested to see how others' could be integrated with my own. In particular I think mine is weak on how to force the actor to use human language accurately. I expect there are lots of ways of leveraging existing data for this purpose but I've only explored this very lightly. Many thanks to ARC for running the competition and to Misha Wagner for reviewing and discussing the proposal.

Proposal Summary

Creating Human Semantic Space

We should think of the actor as described in ELK as having two kinds of space in which it works:

* The space of potential observations of its sensors and actuators, O
* The space of internal representations, i.e. its Bayes net, I

The machine takes in observations, converts them into its internal representation I, simulates the action in this internal representation, and then cashes this out in its actuators. What we want to do is force it to convert back and forth between I and a third space, the space of human-understandable meaning, which we will call H. How can we achieve this?

My basic idea is that we leverage the world of text- and image-based models to create a model called a Synonymizer. This device, in its ideal form, would be trained to take in any piece of text, image, video, audio, or combination thereof, and output a piece of media in any of these forms which preserves the semantic content as far as possible while having complete freedom to change the format. (Initial experiments in this vein would probably use GPT-3 primed with 'This document will contain a paragraph, followed by a paragraph with the same meaning but in a different structure..') The set of equivalence classes according to this Synonymizer (though they would in practice be fuzzy) should then be isomorphic to human semantic space, H.

Learning to use H as a human would

Next, when training, during its internal rollouts, we would periodically force the machine to translate from I into a space which is then passed through the Synonymizer, before being converted back into I, forcing it to learn a mapping between the two. Of course, at this point it isn't using human semantics, just a very weirdly regularized latent space, but having this should also allow us to bring in a vast amount of possible training data, utilizing labelled images, video, text descriptions of events, etc., which could all be used to train the model, and thereby force it to understand language and video as a human does in order to maximize performance.
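A very rough sketch (mine, not part of the original proposal) of this periodic I → H → I bottleneck during rollouts; every function name below is a placeholder rather than a real API:

```python
def rollout_step(state, encode_to_h, decode_from_h, synonymizer, dynamics, bottleneck):
    # On bottlenecked steps, the internal state I is decoded into a
    # human-semantic note in H, paraphrased by the Synonymizer (same meaning,
    # different surface form), and re-encoded into I before continuing.
    if bottleneck:
        note = encode_to_h(state)
        note = synonymizer(note)
        state = decode_from_h(note)
    return dynamics(state)  # continue the internal simulation in I
```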
For example, while training the model to predict future events in a piece of video, text descriptions of future frames can be appended, forcing it to learn to read text into its internal understanding of the situation in order to perform well. The combination of this training example and the Synonymizer should hopefully go a long way towards forcing the machine to make its internal state comprehensible by converting it into something close to H, where its default is to encode anything that can easily be encoded in human language in exactly that way. To minimize encoding extra information within this space, the model can also be trained to return a totally uninformative response if it thinks that the input is not relevant to the problem at hand, or is not comprehensible human language, partially dispelling the 'other side of the moon problem' as spelled out in the ELK prize results. When it makes a plan, we would be able to make it continually output notes about the future state of the world at each step, making it a kind of 'note-taking AI'.

Continuing ELK Dialogue

This takes us to the ELK dialogue in the prize results. The relevant ELK Prize Results section ends with the first counterexample below. I will try and continue the dialogue as I can, but I would appreciate others' attempts to find counterexamples.

Harder counterexample: steganography using flexible degrees of freedom

From the ELK prize results:

> In order to overcome these simple fixes, a reporter wants to communicate information by encoding it in a random sample from the human posterior distribution over worlds. It may not be possible to do this in a tractable way. However, we can imagine some cases where it is possible to efficiently encode information in the choice of a possible world, and those will be our counterexamples. We think this is a substantive additional pessimistic assumption about the world, and so we awarded prizes to these proposals.

(The counterexample continues for a couple of paragraphs.)

Response: Training multiple agents

If we were to train multiple actors with this kind of strategy, I expect that they would all converge on using H to convey what H is intended to convey, normal human semantics... plus lots of extra bits around the edges, these flexible degrees of freedom. However, the way in which these extra degrees of freedom are used is very underconstrained - which bit of language should correspond to what additional information? It seems likely that were we to train multiple systems with somewhat different starting initializations and data, they would use their degrees of freedom very differently, just as two image compressors would compress along similar dimensions, but the actual points in the latent space would not be at all the same between different compressors. I suspect there might be some way of combining the two systems so that only information encoded in the same way by both systems would be retained, which would give a stronger guarantee, but I don't have a clear idea of how such a system would work. At minimum, though, we could swap messages across multiple AIs while training, so we would take a note made by AI1 and decode it using AI2 and continue with AI2, etc. This would retain the information that we would understand, while looking like nonsense in their additional degrees of freedom.
While their understanding of the additional degrees of freedom would eventually converge, this should significantly reduce the amount of information that is being communicated outside the human understanding of H, thereby forcing all the most important information to go through H honestly, or not to go through at all.

Counterexample: Ignoring the need to convert into human language

If the strategy as described above succeeds, then there should be serious difficulty hiding extra information in H. However, there may be strategies which simply can't be explained by H, and so the best option is to go ahead with some deceptive strategy, and simply fail to convert this into human language but nonetheless take this action.

Response: Offering free bits and learning their meaning

One could attempt to penalize options which are particularly poorly represented in H, but even if possible, this would incur an increasingly large capability penalty. Instead I would include within H a few non-linguistic 'free bits', which the actor has access to when converting from I to H. These will then naturally be used to encode the most important features of I which the actor is not able to encode in H, thereby gradually raising the capacity of H to represent I. By distilling these down into a few specific bits, we can then focus our techniques for understanding the meaning of certain neurons, perhaps following the approaches of Chris Olah et al, without needing to scale such techniques up to gigantic structures. More generally, I think it would be very useful for there to be work on generating iterative processes by which we can not just probe neurons but iteratively work with AIs to communicate some new concept about an area, proposing examples and counterexamples to probe the boundaries of a concept, so that it could thereby be integrated into H.

Counterexample: ?

I'm not sure exactly what counterexample would be proposed. It's very possible that the main response would be the huge difficulty of actually getting this increasingly complex system, as currently proposed, to work at a competitive level. One problem might be that converting into H is always sufficiently lossy that it never really converts back into I - only into a sort of degenerate I, which says 'well, I've had to go through H, and most of my detail is lost, but let's make a decent go of it' but at very low capability. This would mean we could not extrapolate from the behaviour of the system when it has passed its internal state through H, and has low capability, to the situation where it hasn't passed through H and is highly capable. There may be many other problems, but also perhaps additional solutions and directions - interested to hear what you make of it! |
2eefac5b-ed1f-4ab8-aed7-0a495f9526da | trentmkelly/LessWrong-43k | LessWrong | Best way to write a bicolor article on Less Wrong?
I am planning to write an article that would use two colors to distinguish between "map" and "territory", kinda like in The Neverending Story. In other words, instead of:
> The sentence 'snow is white' is true if and only if snow is white.
I want to write something like:
> **Snow is white** if and only if snow is white.
In general, the quote marks have the advantage of unlimited nesting, but I don't need it now; my article will have exactly two levels, not more. On the other hand, I feel that the bicolor version may be easier to perceive instinctively; the words referring to the map and the territory are not only surrounded by marks, they look differently by themselves. Also, quote marks are generally used for various purposes (quoting, irony, etc.), so authors emphasise the "this refers to map, as opposed to the territory" usage by not saying merely "X", but "the sentence X", "the words X", "the text X", etc. By removing such words, even my example here is shorter.
So, how to do this technically?
* If I understand it correctly, Markdown doesn't support colors, but you can replace Markdown editor with Rich text editor in user settings, and the Rich text editor supports colors. Am I right?
* Alternatively, instead of colors, I could use italics or bold font, just like I did in the example here. Not only is this simple in Markdown editor, but as an advantage, if some blind people read this, their software would probably pronounce those words in a different voice, making the article more accessible. (Is this true? Can anyone confirm?) The minor disadvantage is that I will not be able to use the italics or bold font in the usual way in the article, but I think I can live with that. Though, as you see, I love using italics, so I would probably use bold font for the map-words.
* (For the sake of completeness, using CAPITAL LETTERS would also be an option, but it is an ugly option I would rather avoid.)
* And maybe more of the above could be combined, e.g. the words |
33a74c3c-3514-4a77-a718-2f33dd9d31b9 | trentmkelly/LessWrong-43k | LessWrong | Sledding Is Underrated
Sledding is pretty great:
* No lines. You don't have to wait, just go.
* Minimal cost. A basic plastic sled costs $15-$20 at a hardware store, and while they wear out pretty quickly it's still less than $5/outing. Aside from that, all you need is standard winter gear (including snowpants).
* Close. Around here there are two decent places to sled within a 15min walk, and several within a 15min drive. You can go sledding after work, or as one of several activities on a Saturday.
* Good learning curve. There are many hills where all you have to do is sit on your sled, no skill required. As you figure out how to steer, this opens up other hills, and more interesting options for going down existing ones.
* Fast. Zooming down a hill is just really fun.
The competing ways to move similarly fast are much more expensive, require more travel, and you spend much more time time waiting around and less actually moving: amusement parks, skiing, etc. I'm not trying to knock those activities, they are definitely fun, but I think sledding more than holds its own here.
This morning the kids and I went with their aunt to a park near our house:
We were there for about an hour and a half, and there were less than ten other people sledding. I'm sure there are more now, since it's a bit later in the day, but still, compared to the number of people in the area I think would enjoy sledding it's surprisingly low.
Why is that? Some guesses:
* You have to walk back up the hill. True, but it's good exercise and it beats standing in line.
* There isn't any advertising for it, beyond advertising for sleds.
* In most places it can't be scheduled. You have to notice that the conditions are right and decide to go, typically on less than a days notice.
* We think of it as something for kids. But amusement parks have this too and are still popular with all ages.
* In places that have good sledding many times in the winter people think of it as a normal thing, and we get more |
01b9e65e-eada-40c3-8f7b-72fc28252d4b | trentmkelly/LessWrong-43k | LessWrong | XKCD - Frequentist vs. Bayesians
http://xkcd.com/1132/
Is this a fair representation of frequentists versus bayesians? I feel like every time the topic comes up, 'Bayesian statistics' is an applause light for me, and I'm not sure why I'm supposed to be applauding. |
398ed0f3-993e-4099-a950-51c97a6f2614 | StampyAI/alignment-research-dataset/blogs | Blogs | List of sources arguing for existential risk from AI
*Published 6 Aug 2022*
*This page is incomplete, under active work and may be updated soon.*
This is a bibliography of pieces arguing that AI poses an existential risk.
List
----
**Adamczewski, Tom. “A Shift in Arguments for AI Risk.”** Fragile Credences. Accessed October 20, 2020. <https://fragile-credences.github.io/prioritising-ai/>.
**Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety.”** *ArXiv:1606.06565 [Cs]*, July 25, 2016. <http://arxiv.org/abs/1606.06565>.
**Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, So8res, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”** Accessed August 6, 2022. <https://www.lesswrong.com/s/n945eovrA3oDueqtq>.
**Bostrom, Nick. *Superintelligence*.** Oxford University Press, 2014.
**Carlsmith, Joseph. “Is Power-Seeking AI an Existential Risk? [Draft].”** Open Philanthropy Project, April 2021. <https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit?usp=embed_facebook>.
**Christian, Brian. *The Alignment Problem: Machine Learning and Human Values*.** W. W. Norton & Company, 2021.
**Christiano, Paul. “What Failure Looks Like.”** *AI Alignment Forum* (blog), March 17, 2019. <https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like>.
**Dai, Wei. “Comment on Disentangling Arguments for the Importance of AI Safety – LessWrong.”** Accessed December 9, 2021. <https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety>.
**Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”** *ArXiv:1703.10987 [Physics]*, March 31, 2017. <http://arxiv.org/abs/1703.10987>.
**Hubinger, Evan, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. “Risks from Learned Optimization in Advanced Machine Learning Systems,”** June 5, 2019. <https://arxiv.org/abs/1906.01820v3>.
**Ngo, Richard. “Thinking Complete: Disentangling Arguments for the Importance of AI Safety.”** *Thinking Complete* (blog), January 21, 2019. <http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html>. (Also [LessWrong](https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety) and the [Alignment Forum](https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety), with relevant comment threads.)
**Ngo, Richard. “AGI Safety from First Principles,”** September 28, 2020. <https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ>.
**Ord, Toby. *The Precipice: Existential Risk and the Future of Humanity*.** Illustrated Edition. New York: Hachette Books, 2020.
**Piper, Kelsey. “The Case for Taking AI Seriously as a Threat to Humanity.”** Vox, December 21, 2018. <https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment>.
**Russell, Stuart. *Human Compatible: Artificial Intelligence and the Problem of Control*.** Viking, 2019.
**Turner, Alexander Matt, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. “Optimal Policies Tend to Seek Power.”** *ArXiv:1912.01683 [Cs]*, December 3, 2021. <http://arxiv.org/abs/1912.01683>.
**Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”** In *Global Catastrophic Risks*, edited by Nick Bostrom and Milan M. Ćirković, 46. New York, n.d. <https://intelligence.org/files/AIPosNegFactor.pdf>.
**Yudkowsky, Eliezer, Rob Bensinger, and So8res. “2022 MIRI Alignment Discussion – LessWrong.”** Accessed August 6, 2022. <https://www.lesswrong.com/s/v55BhXbpJuaExkpcD>.
**Yudkowsky, Eliezer, and Robin Hanson. “The Hanson-Yudkowsky AI-Foom Debate – LessWrong.”** Accessed August 6, 2022. <https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate>.
See also
--------
* [List of sources arguing against existential risk from AI](https://aiimpacts.org/list-of-sources-arguing-against-existential-risk-from-ai/)
* [Is AI an existential threat to humanity?](https://aiimpacts.org/does-ai-pose-an-existential-risk/)
*Primary author: Katja Grace*
667360f2-de6b-49a9-b7fe-b0ccbd0de7b5 | trentmkelly/LessWrong-43k | LessWrong | "The elephant in the room: the biggest risk of artificial intelligence may not be what we think" What to say about that?
A letter signed by renowned names such as Stephen Hawking and Elon Musk warned about the risks posed by artificial intelligence, including ethical dilemmas, as in the case of self-driving cars, as well as mass unemployment. However, the risks may have a cause that is not only different from those the letter considers but also of a far greater magnitude.
With its estimated market value growing at 38% per year, artificial intelligence is advancing at a dizzying pace that makes China's GDP growth look like the steps of a turtle. That rate implies a roughly 25-fold increase in the size of the market over the course of 10 years and, as the mathematics dictates, nothing less than 625-fold over 20 years, if such growth persists. And we are referring here only to the direct economic force exerted by artificial intelligence, without considering the certainly immense impacts that such expansion implies in numerous areas: the labor market, already mentioned; economic risks (whether from the sudden transfer of investment into this sector and away from others, or from a sudden drop in its profits); environmental risks, since the demands of producing the supercomputers that run these systems contribute to global warming and the supersaturation of natural resources; geopolitical risks, since growth is exponential and whichever country or company pulls ahead first will end up absurdly far ahead of the others; risks of conflict between countries; and the social, psychological, and cultural risks caused by the vertiginous change with which such technologies have emerged.
At such a dizzying growth rate, we can expect these changes to unfold within a very short interval, possibly within 5 to 10 years, and almost certainly over the next few decades. Despite all these threats, however, the subject continues to be discussed only superficially in the media, being test |
7c5c0d63-633e-4621-b7c7-1dadb79c03cb | trentmkelly/LessWrong-43k | LessWrong | Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine"
This highly speculative paper has been discussed here before, but I found the discussion's quality rather disappointing. People generally took bits and pieces out of context and then mostly offered arguments already addressed in the paper. Truly the internet is the most powerful misinterpretation engine ever built. It's nice to see that Scott, who is no stranger to online adversity, is taking it in stride.
So I went through the paper and took notes, which I posted on my blog, but I am also attaching them below, in the hope that someone else here finds them useful. I initially intended to write them up as a comment in the other thread, but they grew too long for a comment, so I am making this post. Feel free to downvote if you think it does not belong in Discussion (or for any other reason, of course).
TL;DR: The main idea of the paper is, as far as I can tell, that it is possible to construct a physical model, potentially related to the "free" part of the free will debate, where some events cannot be predicted at all, not even probabilistically, the way they are in quantum mechanics. Scott also proposes one possible mechanism for this "Knightian unpredictability": the not-yet-decohered parts of the initial state of the universe, such as the Cosmic Microwave Background radiation. He does not take a position on whether the model is correct, only that it is potentially testable and thus shifts a small piece of the age-old philosophical debate on free will into the realm of physics.
For those here who say that the free-will question has been dissolved, let me note that the picture presented in the paper is one explicitly rejected by Eliezer, probably a bit hastily. Specifically, in this diagram:
Eliezer says that the sequential picture on the left is the only correct one, whereas Scott offers a perfectly reasonable model which is better described by the picture on the right. To reiterate, there is a part of the past (Scott calls it "microfacts") which evolves re |
6b76ddbb-f4f8-4fe3-8079-cef24135006c | StampyAI/alignment-research-dataset/special_docs | Other | Human Compatible: Artificial Intelligence and the Problem of Control
[Cover for Human Compatible]
ALSO BY STUART RUSSELL
The Use of Knowledge in Analogy and Induction (1989)
Do the Right Thing: Studies in Limited Rationality (with Eric Wefald, 1991)
Artificial Intelligence: A Modern Approach (with Peter Norvig, 1995, 2003, 2010, 2019)
[Title page: Human Compatible: Artificial Intelligence and the Problem of Control, by Stuart Russell. Viking.]
VIKING
An imprint of Penguin Random House LLC
penguinrandomhouse.com
Copyright © 2019 by Stuart Russell
Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.
ISBN 9780525558613 (hardcover)
ISBN 9780525558620 (ebook)
For Loy, Gordon, Lucy, George, and Isaac
CONTENTS
Also by Stuart Russell
Title Page
Copyright
Dedication
PREFACE
Chapter 1. IF WE SUCCEED
Chapter 2. INTELLIGENCE IN HUMANS AND MACHINES
Chapter 3. HOW MIGHT AI PROGRESS IN THE FUTURE?
Chapter 4. MISUSES OF AI
Chapter 5. OVERLY INTELLIGENT AI
Chapter 6. THE NOT-SO-GREAT AI DEBATE
Chapter 7. AI: A DIFFERENT APPROACH
Chapter 8. PROVABLY BENEFICIAL AI
Chapter 9. COMPLICATIONS: US
Chapter 10. PROBLEM SOLVED?
Appendix A. SEARCHING FOR SOLUTIONS
Appendix B. KNOWLEDGE AND LOGIC
Appendix C. UNCERTAINTY AND PROBABILITY
Appendix D. LEARNING FROM EXPERIENCE
Acknowledgments
Notes
Image Credits
Index
About the Author
PREFACE
Why This Book? Why Now?
This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then?
Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.
Overview of the Book
The book has three parts. The first part (Chapters 1 to 3) explores the idea of intelligence in humans and in machines. The material requires no technical background, but for those who are interested, it is supplemented by four appendices that explain some of the core concepts underlying present-day AI systems. The second part (Chapters 4 to 6) discusses some problems arising from imbuing machines with intelligence. I focus in particular on the problem of control: retaining absolute power over machines that are more powerful than us. The third part (Chapters 7 to 10) suggests a new way to think about AI and to ensure that machines remain beneficial to humans, forever. The book is intended for a general audience but will, I hope, be of value in convincing specialists in artificial intelligence to rethink their fundamental assumptions.
1
IF WE SUCCEED
A long time ago, my parents lived in Birmingham, England, in a house near the university. They decided to move out of the city and sold the house to David Lodge, a professor of English literature. Lodge was by that time already a well-known novelist. I never met him, but I decided to read some of his books: Changing Places and Small World. Among the principal characters were fictional academics moving from a fictional version of Birmingham to a fictional version of Berkeley, California. As I was an actual academic from the actual Birmingham who had just moved to the actual Berkeley, it seemed that someone in the Department of Coincidences was telling me to pay attention.
One particular scene from Small World struck me: The protagonist, an aspiring literary theorist, attends a major international conference and asks a panel of leading figures, “What follows if everyone agrees with you?” The question causes consternation, because the panelists had been more concerned with intellectual combat than ascertaining truth or attaining understanding. It occurred to me then that an analogous question could be asked of the leading figures in AI: “What if you succeed?” The field’s goal had always been to create human-level or superhuman AI, but there was little or no consideration of what would happen if we did.
A few years later, Peter Norvig and I began work on a new AI textbook, whose first edition appeared in 1995.¹ The book’s final section is titled “What If We Do Succeed?” The section points to the possibility of good and bad outcomes but reaches no firm conclusions. By the time of the third edition in 2010, many people had finally begun to consider the possibility that superhuman AI might not be a good thing—but these people were mostly outsiders rather than mainstream AI researchers. By 2013, I became convinced that the issue not only belonged in the mainstream but was possibly the most important question facing humanity.
In November 2013, I gave a talk at the Dulwich Picture Gallery, a venerable art museum in south London. The audience consisted mostly of retired people—nonscientists with a general interest in intellectual matters—so I had to give a completely nontechnical talk. It seemed an appropriate venue to try out my ideas in public for the first time. After explaining what AI was about, I nominated five candidates for “biggest event in the future of humanity”:
1. We all die (asteroid impact, climate catastrophe, pandemic, etc.).
2. We all live forever (medical solution to aging).
3. We invent faster-than-light travel and conquer the universe.
4. We are visited by a superior alien civilization.
5. We invent superintelligent AI.
I suggested that the fifth candidate, superintelligent AI, would be the winner, because it would help us avoid physical catastrophes and achieve eternal life and faster-than-light travel, if those were indeed possible. It would represent a huge leap—a discontinuity—in our civilization. The arrival of superintelligent AI is in many ways analogous to the arrival of a superior alien civilization but much more likely to occur. Perhaps most important, AI, unlike aliens, is something over which we have some say.
Then I asked the audience to imagine what would happen if we received notice from a superior alien civilization that they would arrive on Earth in thirty to fifty years. The word pandemonium doesn’t begin to describe it. Yet our response to the anticipated arrival of superintelligent AI has been . . . well, underwhelming begins to describe it. (In a later talk, I illustrated this in the form of the email exchange shown in figure 1.) Finally, I explained the significance of superintelligent AI as follows: “Success would be the biggest event in human history . . . and perhaps the last event in human history.”
From: Superior Alien Civilization
To: humanity@UN.org
Subject: Contact
Be warned: we shall arrive in 30–50 years
From: humanity@UN.org
To: Superior Alien Civilization
Subject: Out of office: Re: Contact
Humanity is currently out of the office. We will respond to your message when we return. ☺
FIGURE 1: Probably not the email exchange that would follow the first contact by a superior alien civilization.
A few months later, in April 2014, I was at a conference in Iceland and got a call from National Public Radio asking if they could interview me about the movie Transcendence, which had just been released in the United States. Although I had read the plot summaries and reviews, I hadn’t seen it because I was living in Paris at the time, and it would not be released there until June. It so happened, however, that I had just added a detour to Boston on the way home from Iceland, so that I could participate in a Defense Department meeting. So, after arriving at Boston’s Logan Airport, I took a taxi to the nearest theater showing the movie. I sat in the second row and watched as a Berkeley AI professor, played by Johnny Depp, was gunned down by anti-AI activists worried about, yes, superintelligent AI. Involuntarily, I shrank down in my seat. (Another call from the Department of Coincidences?) Before Johnny Depp’s character dies, his mind is uploaded to a quantum supercomputer and quickly outruns human capabilities, threatening to take over the world.
On April 19, 2014, a review of Transcendence, co-authored with physicists Max Tegmark, Frank Wilczek, and Stephen Hawking, appeared in the Huffington Post. It included the sentence from my Dulwich talk about the biggest event in human history. From then on, I would be publicly committed to the view that my own field of research posed a potential risk to my own species.
How Did We Get Here?
The roots of AI stretch far back into antiquity, but its “official” beginning was in 1956. Two young mathematicians, John McCarthy and Marvin Minsky, had persuaded Claude Shannon, already famous as the inventor of information theory, and Nathaniel Rochester, the designer of IBM’s first commercial computer, to join them in organizing a summer program at Dartmouth College. The goal was stated as follows:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Needless to say, it took much longer than a summer: we are still working on all these problems.
In the first decade or so after the Dartmouth meeting, AI had several major successes, including Alan Robinson’s algorithm for general-purpose logical reasoning² and Arthur Samuel’s checker-playing program, which taught itself to beat its creator.³ The first AI bubble burst in the late 1960s, when early efforts at machine learning and machine translation failed to live up to expectations. A report commissioned by the UK government in 1973 concluded, “In no part of the field have the discoveries made so far produced the major impact that was then promised.”⁴ In other words, the machines just weren’t smart enough.
My eleven-year-old self was, fortunately, unaware of this report. Two years later, when I was given a Sinclair Cambridge Programmable calculator, I just wanted to make it intelligent. With a maximum program size of thirty-six keystrokes, however, the Sinclair was not quite big enough for human-level AI. Undeterred, I gained access to the giant CDC 6600 supercomputer⁵ at Imperial College London and wrote a chess program—a stack of punched cards two feet high. It wasn’t very good, but it didn’t matter. I knew what I wanted to do.
By the mid-1980s, I had become a professor at Berkeley, and AI was experiencing a huge revival thanks to the commercial potential of so-called expert systems. The second AI bubble burst when these systems proved to be inadequate for many of the tasks to which they were applied. Again, the machines just weren’t smart enough. An AI winter ensued. My own AI course at Berkeley, currently bursting with over nine hundred students, had just twenty-five students in 1990.
The AI community learned its lesson: smarter, obviously, was better, but we would have to do our homework to make that happen. The field became far more mathematical. Connections were made to the long-established disciplines of probability, statistics, and control theory. The seeds of today’s progress were sown during that AI winter, including early work on large-scale probabilistic reasoning systems and what later became known as deep learning.
Beginning around 2011, deep learning techniques began to produce dramatic advances in speech recognition, visual object recognition, and machine translation—three of the most important open problems in the field. By some measures, machines now match or exceed human capabilities in these areas. In 2016 and 2017, DeepMind’s AlphaGo defeated Lee Sedol, former world Go champion, and Ke Jie, the current champion—events that some experts predicted wouldn’t happen until 2097, if ever.⁶
Now AI generates front-page media coverage almost every day. Thousands of start-up companies have been created, fueled by a flood of venture funding. Millions of students have taken online AI and machine learning courses, and experts in the area command salaries in the millions of dollars. Investments flowing from venture funds, national governments, and major corporations are in the tens of billions of dollars annually—more money in the last five years than in the entire previous history of the field. Advances that are already in the pipeline, such as self-driving cars and intelligent personal assistants, are likely to have a substantial impact on the world over the next decade or so. The potential economic and social benefits of AI are vast, creating enormous momentum in the AI research enterprise.
What Happens Next?
Does this rapid rate of progress mean that we are about to be overtaken by machines? No. There are several breakthroughs that have to happen before we have anything resembling machines with superhuman intelligence.
Scientific breakthroughs are notoriously hard to predict. To get a sense of just how hard, we can look back at the history of another field with civilization-ending potential: nuclear physics.
In the early years of the twentieth century, perhaps no nuclear physicist was more distinguished than Ernest Rutherford, the discoverer of the proton and the “man who split the atom” (figure 2[a]). Like his colleagues, Rutherford had long been aware that atomic nuclei stored immense amounts of energy; yet the prevailing view was that tapping this source of energy was impossible.
On September 11, 1933, the British Association for the Advancement of Science held its annual meeting in Leicester. Lord Rutherford addressed the evening session. As he had done several times before, he poured cold water on the prospects for atomic energy: “Anyone who looks for a source of power in the transformation of the atoms is talking moonshine.” Rutherford’s speech was reported in the Times of London the next morning (figure 2[b]).
FIGURE 2: (a) Lord Rutherford, nuclear physicist. (b) Excerpts from a report in the Times of September 12, 1933, concerning a speech given by Rutherford the previous evening. (c) Leo Szilard, nuclear physicist.
Leo Szilard (figure 2[c]), a Hungarian physicist who had recently fled from Nazi Germany, was staying at the Imperial Hotel on Russell Square in London. He read the Times’ report at breakfast. Mulling over what he had read, he went for a walk and invented the neutron-induced nuclear chain reaction.⁷ The problem of liberating nuclear energy went from impossible to essentially solved in less than twenty-four hours. Szilard filed a secret patent for a nuclear reactor the following year. The first patent for a nuclear weapon was issued in France in 1939.
The moral of this story is that betting against human ingenuity is foolhardy, particularly when our future is at stake. Within the AI community, a kind of denialism is emerging, even going as far as denying the possibility of success in achieving the long-term goals of AI. It’s as if a bus driver, with all of humanity as passengers, said, “Yes, I am driving as hard as I can towards a cliff, but trust me, we’ll run out of gas before we get there!”
I am not saying that success in AI will necessarily happen, and I think it’s quite unlikely that it will happen in the next few years. It seems prudent, nonetheless, to prepare for the eventuality. If all goes well, it would herald a golden age for humanity, but we have to face the fact that we are planning to make entities that are far more powerful than humans. How do we ensure that they never, ever have power over us?
To get just an inkling of the fire we’re playing with, consider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly there is a category of articles that die-hard centrists are likely to click on, but it’s not easy to imagine what this category consists of.) Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward.⁸ The consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO. Not bad for a few lines of code, even if it had a helping hand from some humans. Now imagine what a really intelligent algorithm would be able to do.
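To see the dynamic in miniature, here is a toy simulation (my sketch, not anything from the book): it assumes a one-dimensional user preference, a click probability that is higher and more reliable when the user sits at an extreme, and a preference that drifts toward whatever content gets clicked. Under those assumptions, a policy that keeps nudging the user outward eventually collects far more clicks than one that simply serves what the user already likes.

```python
import random

def click_prob(user, item):
    """Toy model: clicks are likelier when the item matches the user,
    and extreme users (|user| near 1) click more reliably."""
    match = max(0.0, 1.0 - abs(user - item))      # 1.0 when the item matches the user exactly
    reliability = 0.3 + 0.7 * abs(user)           # users at the extremes are more predictable
    return match * reliability

def simulate(policy, steps=10_000, drift=0.02, seed=0):
    rng = random.Random(seed)
    user, clicks = 0.1, 0                          # a mildly opinionated user
    for _ in range(steps):
        item = policy(user)
        if rng.random() < click_prob(user, item):
            clicks += 1
            user += drift * (item - user)          # preference drifts toward clicked content
            user = max(-1.0, min(1.0, user))
    return clicks, user

match_policy = lambda u: u                          # myopic: show what the user already likes
nudge_policy = lambda u: max(-1.0, min(1.0, u + 0.2 * (1 if u >= 0 else -1)))  # push outward

print("match :", simulate(match_policy))            # fewer clicks, the user stays moderate
print("nudge :", simulate(nudge_policy))            # more clicks, the user ends up at an extreme
```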
What Went Wrong?
The history of AI has been driven by a single mantra: “The more intelligent the better.” I am convinced that this is a mistake—not because of some vague fear of being superseded but because of the way we have understood intelligence itself.
The concept of intelligence is central to who we are—that’s why we call ourselves Homo sapiens, or “wise man.” After more than two thousand years of self-examination, we have arrived at a characterization of intelligence that can be boiled down to this:
Humans are intelligent to the extent that our actions can be expected to achieve our objectives.
All those other characteristics of intelligence—perceiving, thinking, learning, inventing, and so on—can be understood through their contributions to our ability to act successfully. From the very beginnings of AI, intelligence in machines has been defined in the same way:
Machines are intelligent to the extent that their actions can be expected to achieve their objectives.
Because machines, unlike humans, have no objectives of their own, we give them objectives to achieve. In other words, we build optimizing machines, we feed objectives into them, and off they go.
This general approach is not unique to AI. It recurs throughout the technological and mathematical underpinnings of our society. In the field of control theory, which designs control systems for everything from jumbo jets to insulin pumps, the job of the system is to minimize a cost function that typically measures some deviation from a desired behavior. In the field of economics, mechanisms and policies are designed to maximize the utility of individuals, the welfare of groups, and the profit of corporations.⁹ In operations research, which solves complex logistical and manufacturing problems, a solution maximizes an expected sum of rewards over time. Finally, in statistics, learning algorithms are designed to minimize an expected loss function that defines the cost of making prediction errors.
Evidently, this general scheme—which I will call the standard model—is widespread and extremely powerful. Unfortunately, we don’t want machines that are intelligent in this sense.
The drawback of the standard model was pointed out in 1960 by Norbert Wiener, a legendary professor at MIT and one of the leading mathematicians of the mid-twentieth century. Wiener had just seen Arthur Samuel’s checker-playing program learn to play checkers far better than its creator. That experience led him to write a prescient but little-known paper, “Some Moral and Technical Consequences of Automation.”¹⁰ Here’s how he states the main point:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively . . . we had better be quite sure that the purpose put into the machine is the purpose which we really desire.
“The purpose put into the machine” is exactly the objective that machines are optimizing in the standard model. If we put the wrong objective into a machine that is more intelligent than us, it will achieve the objective, and we lose. The social-media meltdown I described earlier is just a foretaste of this, resulting from optimizing the wrong objective on a global scale with fairly unintelligent algorithms. In Chapter 5, I spell out some far worse outcomes.
All this should come as no great surprise. For thousands of years, we have known the perils of getting exactly what you wish for. In every story where someone is granted three wishes, the third wish is always to undo the first two wishes.
In summary, it seems that the march towards superhuman intelligence is unstoppable, but success might be the undoing of the human race. Not all is lost, however. We have to understand where we went wrong and then fix it.
Can We Fix It?
The problem is right there in the basic definition of AI. We say that machines are intelligent to the extent that their actions can be expected to achieve their objectives, but we have no reliable way to make sure that their objectives are the same as our objectives.
What if, instead of allowing machines to pursue their objectives, we insist that they pursue our objectives? Such a machine, if it could be designed, would be not just intelligent but also beneficial to humans. So let’s try this:
Machines are beneficial to the extent that their actions can be expected to achieve our objectives.
This is probably what we should have done all along.
The difficult part, of course, is that our objectives are in us (all eight billion of us, in all our glorious variety) and not in the machines. It is, nonetheless, possible to build machines that are beneficial in exactly this sense. Inevitably, these machines will be uncertain about our objectives—after all, we are uncertain about them ourselves—but it turns out that this is a feature, not a bug (that is, a good thing and not a bad thing). Uncertainty about objectives implies that machines will necessarily defer to humans: they will ask permission, they will accept correction, and they will allow themselves to be switched off.
Removing the assumption that machines should have a definite objective means that we will need to tear out and replace part of the foundations of artificial intelligence—the basic definitions of what we are trying to do. That also means rebuilding a great deal of the superstructure—the accumulation of ideas and methods for actually doing AI. The result will be a new relationship between humans and machines, one that I hope will enable us to navigate the next few decades successfully.
2
INTELLIGENCE IN HUMANS AND MACHINES
When you arrive at a dead end, it’s a good idea to retrace your steps and work out where you took a wrong turn. I have argued that the standard model of AI, wherein machines optimize a fixed objective supplied by humans, is a dead end. The problem is not that we might fail to do a good job of building AI systems; it’s that we might succeed too well. The very definition of success in AI is wrong.
So let’s retrace our steps, all the way to the beginning. Let’s try to understand how our concept of intelligence came about and how it came to be applied to machines. Then we have a chance of coming up with a better definition of what counts as a good AI system.
Intelligence
How does the universe work? How did life begin? Where are my keys? These are fundamental questions worthy of thought. But who is asking these questions? How am I answering them? How can a handful of matter—the few pounds of pinkish-gray blancmange we call a brain—perceive, understand, predict, and manipulate a world of unimaginable vastness? Before long, the mind turns to examine itself.
We have been trying for thousands of years to understand how our minds work. Initially, the purposes included curiosity, self-management, persuasion, and the rather pragmatic goal of analyzing mathematical arguments. Yet every step towards an explanation of how the mind works is also a step towards the creation of the mind’s capabilities in an artifact—that is, a step towards artificial intelligence.
Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests, or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.
Evolutionary origins
Consider a lowly bacterium, such as E. coli. It is equipped with about half a dozen flagella—long, hairlike tentacles that rotate at the base either clockwise or counterclockwise. (The rotary motor itself is an amazing thing, but that’s another story.) As E. coli floats about in its liquid home—your lower intestine—it alternates between rotating its flagella clockwise, causing it to “tumble” in place, and counterclockwise, causing the flagella to twine together into a kind of propeller so the bacterium swims in a straight line. Thus, E. coli does a sort of random walk—swim, tumble, swim, tumble—that allows it to find and consume glucose rather than staying put and dying of starvation.
If this were the whole story, we wouldn’t say that E. coli is particularly intelligent, because its actions would not depend in any way on its environment. It wouldn’t be making any decisions, just executing a fixed behavior that evolution has built into its genes. But this isn’t the whole story. When E. coli senses an increasing concentration of glucose, it swims longer and tumbles less, and it does the opposite when it senses a decreasing concentration of glucose. So, what it does (swim towards glucose) is likely to achieve what it wants (more glucose, let’s assume), given what it has perceived (an increasing glucose concentration).
Perhaps you are thinking, “But evolution built this into its genes too! How does that make it intelligent?” This is a dangerous line of reasoning, because evolution built the basic design of your brain into your genes too, and presumably you wouldn’t wish to deny your own intelligence on that basis. The point is that what evolution has built into E. coli’s genes, as it has into yours, is a mechanism whereby the bacterium’s behavior varies according to what it perceives in its environment. Evolution doesn’t know, in advance, where the glucose is going to be or where your keys are, so putting the capability to find them into the organism is the next best thing.
Now, E. coli is no intellectual giant. As far as we know, it doesn’t remember where it has been, so if it goes from A to B and finds no glucose, it’s just as likely to go back to A. If we construct an environment where every attractive glucose gradient leads only to a spot of phenol (which is a poison for E. coli), the bacterium will keep following those gradients. It never learns. It has no brain, just a few simple chemical reactions to do the job.
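The decision rule itself is simple enough to write down. Below is a minimal run-and-tumble sketch (a toy model of my own, with an assumed Gaussian glucose field): the cell compares the current concentration with the one it sensed a moment ago, keeps swimming while the reading improves, and tumbles to a random heading more often when it does not. Even this memoryless rule biases the random walk toward the glucose peak.

```python
import math
import random

def glucose(x, y):
    """Toy concentration field: highest at the origin."""
    return math.exp(-(x * x + y * y) / 100.0)

def run_and_tumble(steps=2_000, seed=0):
    rng = random.Random(seed)
    x, y = 20.0, 20.0                          # start far from the food source
    heading = rng.uniform(0, 2 * math.pi)
    last = glucose(x, y)
    for _ in range(steps):
        current = glucose(x, y)
        # Tumble (pick a fresh random heading) more often when the concentration
        # is falling, and keep running when it is rising.
        p_tumble = 0.1 if current > last else 0.6
        if rng.random() < p_tumble:
            heading = rng.uniform(0, 2 * math.pi)
        x += math.cos(heading)                 # "run" one unit in the current direction
        y += math.sin(heading)
        last = current
    return math.hypot(x, y)                    # final distance from the glucose peak

print("final distance from peak:", run_and_tumble())
```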
A big step forward occurred with action potentials, which are a form of electrical signaling that first evolved in single-celled organisms around a billion years ago. Later multicellular organisms evolved specialized cells called neurons that use electrical action potentials to carry signals rapidly—up to 120 meters per second, or 270 miles per hour—within the organism. The connections between neurons are called synapses. The strength of the synaptic connection dictates how much electrical excitation passes from one neuron to another. By changing the strength of synaptic connections, animals learn.¹ Learning confers a huge evolutionary advantage, because the animal can adapt to a range of circumstances. Learning also speeds up the rate of evolution itself.
Initially, neurons were organized into nerve nets, which are distributed throughout the organism and serve to coordinate activities such as eating and digestion or the timed contraction of muscle cells across a wide area. The graceful propulsion of jellyfish is the result of a nerve net. Jellyfish have no brains at all.
Brains came later, along with complex sense organs such as eyes and ears. Several hundred million years after jellyfish emerged with their nerve nets, we humans arrived with our big brains—a hundred billion (10¹¹) neurons and a quadrillion (10¹⁵) synapses. While slow compared to electronic circuits, the “cycle time” of a few milliseconds per state change is fast compared to most biological processes. The human brain is often described by its owners as “the most complex object in the universe,” which probably isn’t true but is a good excuse for the fact that we still understand little about how it really works. While we know a great deal about the biochemistry of neurons and synapses and the anatomical structures of the brain, the neural implementation of the cognitive level—learning, knowing, remembering, reasoning, planning, deciding, and so on—is still mostly anyone’s guess.² (Perhaps that will change as we understand more about AI, or as we develop ever more precise tools for measuring brain activity.) So, when one reads in the media that such-and-such AI technique “works just like the human brain,” one may suspect it’s either just someone’s guess or plain fiction.
In the area of consciousness, we really do know nothing, so I’m going to say nothing. No one in AI is working on making machines conscious, nor would anyone know where to start, and no behavior has consciousness as a prerequisite. Suppose I give you a program and ask, “Does this present a threat to humanity?” You analyze the code and indeed, when run, the code will form and carry out a plan whose result will be the destruction of the human race, just as a chess program will form and carry out a plan whose result will be the defeat of any human who faces it. Now suppose I tell you that the code, when run, also creates a form of machine consciousness. Will that change your prediction? Not at all. It makes absolutely no difference.³ Your prediction about its behavior is exactly the same, because the prediction is based on the code. All those Hollywood plots about machines mysteriously becoming conscious and hating humans are really missing the point: it’s competence, not consciousness, that matters.
There is one important cognitive aspect of the brain that we are beginning to understand—namely, the reward system. This is an internal signaling system, mediated by dopamine, that connects positive and negative stimuli to behavior. Its workings were discovered by the Swedish neuroscientist Nils-Åke Hillarp and his collaborators in the late 1950s. It causes us to seek out positive stimuli, such as sweet-tasting foods, that increase dopamine levels; it makes us avoid negative stimuli, such as hunger and pain, that decrease dopamine levels. In a sense it’s quite similar to E. coli’s glucose-seeking mechanism, but much more complex. It comes with built-in methods for learning, so that our behavior becomes more effective at obtaining reward over time. It also allows for delayed gratification, so that we learn to desire things such as money that provide eventual reward rather than immediate reward. One reason we understand the brain’s reward system is that it resembles the method of reinforcement learning developed in AI, for which we have a very solid theory.⁴
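For readers who want the formal counterpart, the core of that reinforcement-learning theory is the temporal-difference update, written here in standard notation (mine, not the book's). The bracketed term is the reward prediction error, which dopamine signals are widely thought to track.

```latex
% Temporal-difference (TD) update for the estimated value V of the state s_t,
% where alpha is a learning rate, gamma a discount factor, and r_{t+1} the reward received:
V(s_t) \;\leftarrow\; V(s_t) \;+\; \alpha \left[\, r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t) \,\right]
```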
From an evolutionary point of view, we can think of the brain’s reward system, just like E. coli’s glucose-seeking mechanism, as a way of improving evolutionary fitness. Organisms that are more effective in seeking reward—that is, finding delicious food, avoiding pain, engaging in sexual activity, and so on—are more likely to propagate their genes. It is extraordinarily difficult for an organism to decide what actions are most likely, in the long run, to result in successful propagation of its genes, so evolution has made it easier for us by providing built-in signposts.
These signposts are not perfect, however. There are ways to obtain reward that probably reduce the likelihood that one’s genes will propagate. For example, taking drugs, drinking vast quantities of sugary carbonated beverages, and playing video games for eighteen hours a day all seem counterproductive in the reproduction stakes. Moreover, if you were given direct electrical access to your reward system, you would probably self-stimulate without stopping until you died.⁵
The misalignment of reward signals and evolutionary fitness doesn’t affect only isolated individuals. On a small island off the coast of Panama lives the pygmy three-toed sloth, which appears to be addicted to a Valium-like substance in its diet of red mangrove leaves and may be going extinct.⁶ Thus, it seems that an entire species can disappear if it finds an ecological niche where it can satisfy its reward system in a maladaptive way.
Barring these kinds of accidental failures, however, learning to maximize reward in natural environments will usually improve one’s chances for propagating one’s genes and for surviving environmental changes.
Evolutionary accelerator
Learning is good for more than surviving and prospering. It also speeds up evolution. How could this be? After all, learning doesn’t change one’s DNA, and evolution is all about changing DNA over generations. The connection between learning and evolution was proposed in 1896 by the American psychologist James Baldwin⁷ and independently by the British ethologist Conwy Lloyd Morgan⁸ but not generally accepted at the time.
The Baldwin effect, as it is now known, can be understood by imagining that evolution has a choice between creating an instinctive organism whose every response is fixed in advance and creating an adaptive organism that learns what actions to take. Now suppose, for the purposes of illustration, that the optimal instinctive organism can be coded as a six-digit number, say, 472116, while in the case of the adaptive organism, evolution specifies only 472*** and the organism itself has to fill in the last three digits by learning during its lifetime. Clearly, if evolution has to worry about choosing only the first three digits, its job is much easier; the adaptive organism, in learning the last three digits, is doing in one lifetime what evolution would have taken many generations to do. So, provided the adaptive organisms can survive while learning, it seems that the capability for learning constitutes an evolutionary shortcut. Computational simulations suggest that the Baldwin effect is real.⁹ The effects of culture only accelerate the process, because an organized civilization protects the individual organism while it is learning and passes on information that the individual would otherwise need to learn for itself.
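The digit analogy can be turned into a small simulation, in the spirit of Hinton and Nowlan's classic experiment (the code below is my toy version, not the book's): evolution searches genetically over either all six digits or only the first three, and in the latter case each organism gets a budget of random guesses at the remaining three during its lifetime. The learning population typically finds the target within tens of generations, while the purely instinctive one is searching for a one-in-a-million needle and usually fails.

```python
import random

TARGET = "472116"

def evolve(learn_trials, generations=200, pop_size=100, seed=1):
    """Toy illustration of the Baldwin effect. With learn_trials == 0 every digit is
    instinctive and fitness requires matching TARGET exactly. With learn_trials > 0
    only the first three digits are genetic; the organism then makes learn_trials
    random guesses at the last three digits during its lifetime."""
    rng = random.Random(seed)

    def fitness(genome):
        if learn_trials == 0:
            return 1.0 if genome == TARGET else 0.01
        if genome[:3] != TARGET[:3]:
            return 0.01
        for _ in range(learn_trials):                 # lifetime learning
            if "%03d" % rng.randrange(1000) == TARGET[3:]:
                return 1.0
        return 0.01

    pop = ["".join(rng.choice("0123456789") for _ in range(6)) for _ in range(pop_size)]
    for gen in range(1, generations + 1):
        scored = [(fitness(g), g) for g in pop]
        if any(s == 1.0 for s, _ in scored):
            return gen                                # some organism achieved the target behaviour
        # Fitness-proportional reproduction with a single-digit mutation.
        parents = rng.choices([g for _, g in scored],
                              weights=[s for s, _ in scored], k=pop_size)
        pop = [p[:i] + rng.choice("0123456789") + p[i + 1:]
               for p in parents for i in [rng.randrange(6)]]
    return None                                       # not found within the budget

print("instinct only:", evolve(learn_trials=0))       # usually None: a 1-in-a-million search
print("with learning:", evolve(learn_trials=1000))    # usually succeeds within a few dozen generations
```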
The story of the Baldwin effect is fascinating but incomplete: it assumes that learning and evolution necessarily point in the same direction. That is, it assumes that whatever internal feedback signal defines the direction of learning within the organism is perfectly aligned with evolutionary fitness. As we have seen in the case of the pygmy three-toed sloth, this does not seem to be true. At best, built-in mechanisms for learning provide only a crude hint of the long-term consequences of any given action for evolutionary fitness. Moreover, one has to ask, “How did the reward system get there in the first place?” The answer, of course, is by an evolutionary process, one that internalized a feedback mechanism that is at least somewhat aligned with evolutionary fitness.¹⁰ Clearly, a learning mechanism that caused organisms to run away from potential mates and towards predators would not last long.
Thus, we have the Baldwin effect to thank for the fact that neurons, with their capabilities for learning and problem solving, are so widespread in the animal kingdom. At the same time, it is important to understand that evolution doesn’t really care whether you have a brain or think interesting thoughts. Evolution considers you only as an agent, that is, something that acts. Such worthy intellectual characteristics as logical reasoning, purposeful planning, wisdom, wit, imagination, and creativity may be essential for making an agent intelligent, or they may not. One reason artificial intelligence is so fascinating is that it offers a potential route to understanding these issues: we may come to understand both how these intellectual characteristics make intelligent behavior possible and why it’s impossible to produce truly intelligent behavior without them.
Rationality for one
From the earliest beginnings of ancient Greek philosophy, the concept of intelligence has been tied to the ability to perceive, to reason, and to act successfully.¹¹ Over the centuries, the concept has become both broader in its applicability and more precise in its definition.
Aristotle, among others, studied the notion of successful reasoning—methods of logical deduction that would lead to true conclusions given true premises. He also studied the process of deciding how to act—sometimes called practical reasoning—and proposed that it involved deducing that a certain course of action would achieve a desired goal:
We deliberate not about ends, but about means. For a doctor does not deliberate whether he shall heal, nor an orator whether he shall persuade. . . . They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby; while if it is achieved by one means only they consider how it will be achieved by this and by what means this will be achieved, till they come to the first cause . . . and what is last in the order of analysis seems to be first in the order of becoming. And if we come on an impossibility, we give up the search, e.g., if we need money and this cannot be got; but if a thing appears possible we try to do it.¹²
This passage, one might argue, set the tone for the next two-thousand-odd years of Western thought about rationality. It says that the “end”—what the person wants—is fixed and given; and it says that the rational action is one that, according to logical deduction across a sequence of actions, “easily and best” produces the end.
Aristotle’s proposal seems reasonable, but it isn’t a complete guide to rational behavior. In particular, it omits the issue of uncertainty. In the real world, reality has a tendency to intervene, and few actions or sequences of actions are truly guaranteed to achieve the intended end. For example, it is a rainy Sunday in Paris as I write this sentence, and on Tuesday at 2:15 p.m. my flight to Rome leaves from Charles de Gaulle Airport, about forty-five minutes from my house. I plan to leave for the airport around 11:30 a.m., which should give me plenty of time, but it probably means at least an hour sitting in the departure area. Am I certain to catch the flight? Not at all. There could be huge traffic jams, the taxi drivers may be on strike, the taxi I’m in may break down or the driver may be arrested after a high-speed chase, and so on. Instead, I could leave for the airport on Monday, a whole day in advance. This would greatly reduce the chance of missing the flight, but the prospect of a night in the departure lounge is not an appealing one. In other words, my plan involves a trade-off between the certainty of success and the cost of ensuring that degree of certainty. The following plan for buying a house involves a similar trade-off: buy a lottery ticket, win a million dollars, then buy the house. This plan “easily and best” produces the end, but it’s not very likely to succeed. The difference between this harebrained house-buying plan and my sober and sensible airport plan is, however, just a matter of degree. Both are gambles, but one seems more rational than the other.
It turns out that gambling played a central role in generalizing Aristotle’s proposal to account for uncertainty. In the 1560s, the Italian mathematician Gerolamo Cardano developed the first mathematically precise theory of probability—using dice games as his main example. (Unfortunately, his work was not published until 1663.¹³) In the seventeenth century, French thinkers including Antoine Arnauld and Blaise Pascal began—for assuredly mathematical reasons—to study the question of rational decisions in gambling.¹⁴ Consider the following two bets:
A: 20 percent chance of winning $10
B: 5 percent chance of winning $100
The proposal the mathematicians came up with is probably the same one you would come up with: compare the expected values of the bets, which means the average amount you would expect to get from each bet. For bet A, the expected value is 20 percent of $10, or $2. For bet B, the expected value is 5 percent of $100, or $5. So bet B is better, according to this theory. The theory makes sense, because if the same bets are offered over and over again, a bettor who follows the rule ends up with more money than one who doesn’t.
In the eighteenth century, the Swiss mathematician Daniel Bernoulli noticed that this rule didn’t seem to work well for larger amounts of money.¹⁵ For example, consider the following two bets:
A: 100 percent chance of getting $10,000,000
(expected value $10,000,000)
B: 1 percent chance of getting $1,000,000,100
(expected value $10,000,001)
Most readers of this book, as well as its author, would prefer bet A to bet B, even though the expected-value rule says the opposite! Bernoulli posited that bets are evaluated not according to expected monetary value but according to expected utility. Utility—the property of being useful or beneficial to a person—was, he suggested, an internal, subjective quantity related to, but distinct from, monetary value. In particular, utility exhibits diminishing returns with respect to money. This means that the utility of a given amount of money is not strictly proportional to the amount but grows more slowly. For example, the utility of having $1,000,000,100 is much less than a hundred times the utility of having $10,000,000. How much less? You can ask yourself! What would the odds of winning a billion dollars have to be for you to give up a guaranteed ten million? I asked this question of the graduate students in my class and their answer was around 50 percent, meaning that bet B would have an expected value of $500 million to match the desirability of bet A. Let me say that again: bet B would have an expected dollar value fifty times greater than bet A, but the two bets would have equal utility.
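A quick way to see Bernoulli's point is to plug both bets into a concrete utility function with diminishing returns. The logarithmic curve below is only an illustration (Bernoulli himself proposed log utility, but nothing hinges on the exact shape), and the starting wealth of $100,000 is an arbitrary assumption; with these choices, expected dollar value favors bet B while expected utility favors bet A, matching most people's intuition.

```python
import math

def log_utility(winnings, base_wealth=100_000):
    """Diminishing returns: each extra dollar matters less as wealth grows."""
    return math.log(base_wealth + winnings)

bet_a = [(1.00, 10_000_000)]                     # sure thing
bet_b = [(0.01, 1_000_000_100), (0.99, 0)]       # 1 percent shot at a billion

for name, bet in [("A", bet_a), ("B", bet_b)]:
    ev = sum(p * x for p, x in bet)              # expected dollar value
    eu = sum(p * log_utility(x) for p, x in bet) # expected utility
    print(f"bet {name}: expected value ${ev:,.0f}  expected utility {eu:.3f}")
# Expected value slightly favours B; expected log-utility clearly favours A.
```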
Bernoulli’s introduction of utility—an invisible property—to explain human behavior via a mathematical theory was an utterly remarkable proposal for its time. It was all the more remarkable for the fact that, unlike monetary amounts, the utility values of various bets and prizes are not directly observable; instead, utilities are to be inferred from the preferences exhibited by an individual. It would be two centuries before the implications of the idea were fully worked out and it became broadly accepted by statisticians and economists.
In the middle of the twentieth century, John von Neumann (a great mathematician after whom the standard “von Neumann architecture” for computers was named¹⁶) and Oskar Morgenstern published an axiomatic basis for utility theory.¹⁷ What this means is the following: as long as the preferences exhibited by an individual satisfy certain basic axioms that any rational agent should satisfy, then necessarily the choices made by that individual can be described as maximizing the expected value of a utility function. In short, a rational agent acts so as to maximize expected utility.
It’s hard to overstate the importance of this conclusion. In many ways, artificial intelligence has been mainly about working out the details of how to build rational machines.
Let’s look in a bit more detail at the axioms that rational entities are expected to satisfy. Here’s one, called transitivity: if you prefer A to B and you prefer B to C, then you prefer A to C. This seems pretty reasonable! (If you prefer sausage pizza to plain pizza, and you prefer plain pizza to pineapple pizza, then it seems reasonable to predict that you will choose sausage pizza over pineapple pizza.) Here’s another, called monotonicity: if you prefer prize A to prize B, and you have a choice of lotteries where A and B are the only two possible outcomes, you prefer the lottery with the highest probability of getting A rather than B. Again, pretty reasonable.
Preferences are not just about pizza and lotteries with monetary prizes. They can be about anything at all; in particular, they can be about entire future lives and the lives of others. When dealing with preferences involving sequences of events over time, there is an additional assumption that is often made, called stationarity: if two different futures A and B begin with the same event, and you prefer A to B, you still prefer A to B after the event has occurred. This sounds reasonable, but it has a surprisingly strong consequence: the utility of any sequence of events is the sum of rewards associated with each event (possibly discounted over time, by a sort of mental interest rate).¹⁸ Although this “utility as a sum of rewards” assumption is widespread—going back at least to the eighteenth-century “hedonic calculus” of Jeremy Bentham, the founder of utilitarianism—the stationarity assumption on which it is based is not a necessary property of rational agents. Stationarity also rules out the possibility that one’s preferences might change over time, whereas our experience indicates otherwise.
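In symbols (my notation, not the book's), the consequence of stationarity is the familiar discounted-sum form used throughout reinforcement learning, with the discount factor playing the role of the "mental interest rate":

```latex
% Utility of a sequence of events as a discounted sum of per-event rewards,
% where gamma in (0, 1] is the discount factor (gamma = 1 means no discounting):
U(e_0, e_1, e_2, \ldots) \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, r(e_t)
```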
Despite the reasonableness of the axioms and the importance of the conclusions that follow from them, utility theory has been subjected to a continual barrage of objections since it first became widely known. Some despise it for supposedly reducing everything to money and selfishness. (The theory was derided as “American” by some French authors,¹⁹ even though it has its roots in France.) In fact, it is perfectly rational to want to live a life of self-denial, wishing only to reduce the suffering of others. Altruism simply means placing substantial weight on the well-being of others in evaluating any given future.
Another set of objections has to do with the difficulty of obtaining the necessary probabilities and utility values and multiplying them together to calculate expected utilities. These objections are simply confusing two different things: choosing the rational action and choosing it by calculating expected utilities. For example, if you try to poke your eyeball with your finger, your eyelid closes to protect your eye; this is rational, but no expected-utility calculations are involved. Or suppose you are riding a bicycle downhill with no brakes and have a choice between crashing into one concrete wall at ten miles per hour or another, farther down the hill, at twenty miles per hour; which would you prefer? If you chose ten miles per hour, congratulations! Did you calculate expected utilities? Probably not. But the choice of ten miles per hour is still rational. This follows from two basic assumptions: first, you prefer less severe injuries to more severe injuries, and second, for any given level of injuries, increasing the speed of collision increases the probability of exceeding that level. From these two assumptions it follows mathematically—without considering any numbers at all—that crashing at ten miles per hour has higher expected utility than crashing at twenty.²⁰ In summary, maximizing expected utility may not require calculating any expectations or any utilities. It’s a purely external description of a rational entity.
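For the curious, the bicycle argument is an instance of first-order stochastic dominance, and the "no numbers needed" conclusion can be written out in a couple of lines (my notation, under the two assumptions stated in the text):

```latex
% Let S be injury severity and u any utility function that is decreasing in severity.
% The second assumption in the text says P(S > s \mid 20\ \mathrm{mph}) \ge P(S > s \mid 10\ \mathrm{mph})
% for every severity level s, i.e. the slower crash first-order stochastically dominates
% the faster one. It follows, for every decreasing u, that
\mathbb{E}\left[\, u(S) \mid 10\ \mathrm{mph} \,\right] \;\ge\; \mathbb{E}\left[\, u(S) \mid 20\ \mathrm{mph} \,\right]
% with no probabilities or utilities computed at all.
```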
Another critique of the theory of rationality lies in the identification of the locus of decision making. That is, what things count as agents? It might seem obvious that humans are agents, but what about families, tribes, corporations, cultures, and nation-states? If we examine social insects such as ants, does it make sense to consider a single ant as an intelligent agent, or does the intelligence really lie in the colony as a whole, with a kind of composite brain made up of multiple ant brains and bodies that are interconnected by pheromone signaling instead of electrical signaling? From an evolutionary point of view, this may be a more productive way of thinking about ants, since the ants in a given colony are typically closely related. As individuals, ants and other social insects seem to lack an instinct for self-preservation as distinct from colony preservation: they will always throw themselves into battle against invaders, even at suicidal odds. Yet sometimes humans will do the same even to defend unrelated humans; it is as if the species benefits from the presence of some fraction of individuals who are willing to sacrifice themselves in battle, or to go off on wild, speculative voyages of exploration, or to nurture the offspring of others. In such cases, an analysis of rationality that focuses entirely on the individual is clearly missing something essential.
The other principal objections to utility theory are empirical—that is, they are based on experimental evidence suggesting that humans are irrational. We fail to conform to the axioms in systematic ways.²¹ It is not my purpose here to defend utility theory as a formal model of human behavior. Indeed, humans cannot possibly behave rationally. Our preferences extend over the whole of our own future lives, the lives of our children and grandchildren, and the lives of others, living now or in the future. Yet we cannot even play the right moves on the chessboard, a tiny, simple place with well-defined rules and a very short horizon. This is not because our preferences are irrational but because of the complexity of the decision problem. A great deal of our cognitive structure is there to compensate for the mismatch between our small, slow brains and the incomprehensibly huge complexity of the decision problem that we face all the time.
So, while it would be quite unreasonable to base a theory of beneficial AI on an assumption that humans are rational, it’s quite reasonable to suppose that an adult human has roughly consistent preferences over future lives. That is, if you were somehow able to watch two movies, each describing in sufficient detail and breadth a future life you might lead, such that each constitutes a virtual experience, you could say which you prefer, or express indifference.²²
This claim is perhaps stronger than necessary, if our only goal is to make sure that sufficiently intelligent machines are not catastrophic for the human race. The very notion of catastrophe entails a definitely-not-preferred life. For catastrophe avoidance, then, we need claim only that adult humans can recognize a catastrophic future when it is spelled out in great detail. Of course, human preferences have a much more fine-grained and, presumably, ascertainable structure than just “non-catastrophes are better than catastrophes.”
A theory of beneficial AI can, in fact, accommodate inconsistency in human preferences, but the inconsistent part of your preferences can never be satisfied and there’s nothing AI can do to help. Suppose, for example, that your preferences for pizza violate the axiom of transitivity:
ROBOT: Welcome home! Want some pineapple pizza?
YOU: No, you should know I prefer plain pizza to pineapple.
ROBOT: OK, one plain pizza coming up!
YOU: No thanks, I like sausage pizza better.
ROBOT: So sorry, one sausage pizza!
YOU: Actually, I prefer pineapple to sausage.
ROBOT: My mistake, pineapple it is!
YOU: I already said I like plain better than pineapple.
There is no pizza the robot can serve that will make you happy because there’s always another pizza you would prefer to have. A robot can satisfy only the consistent part of your preferences—for example, let’s say you prefer all three kinds of pizza to no pizza at all. In that case, a helpful robot could give you any one of the three pizzas, thereby satisfying your preference to avoid “no pizza” while leaving you to contemplate your annoyingly inconsistent pizza topping preferences at leisure.
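As a minimal sketch (the encoding of preferences as a directed graph is my own, not from the text), a few lines of code can detect exactly this kind of intransitive loop and confirm that the consistent remainder is still satisfiable:

```python
# Represent stated preferences as directed edges "preferred -> less preferred" and
# check whether they contain a cycle. A cyclic (intransitive) part can never be satisfied.
prefs = [("plain", "pineapple"), ("sausage", "plain"), ("pineapple", "sausage"),
         ("plain", "no pizza"), ("sausage", "no pizza"), ("pineapple", "no pizza")]

def has_cycle(edges):
    graph = {}
    for better, worse in edges:
        graph.setdefault(better, []).append(worse)
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, []))
    return any(visit(start, set()) for start in graph)

print("Preferences contain a cycle:", has_cycle(prefs))    # True
# The consistent part -- any pizza beats "no pizza" -- is still satisfiable:
print("Consistent part cyclic:", has_cycle(prefs[3:]))     # False
```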
Rationality for two
The basic idea that a rational agent acts so as to maximize expected utility is simple enough, even if actually doing it is impossibly complex. The theory applies, however, only in the case of a single agent acting alone. With more than one agent, the notion that it’s possible—at least in principle—to assign probabilities to the different outcomes of one’s actions becomes problematic. The reason is that now there’s a part of the world—the other agent—that is trying to second-guess what action you’re going to do, and vice versa, so it’s not obvious how to assign probabilities to how that part of the world is going to behave. And without probabilities, the definition of rational action as maximizing expected utility isn’t applicable.
As soon as someone else comes along, then, an agent will need some other way to make rational decisions. This is where game theory comes in. Despite its name, game theory isn’t necessarily about games in the usual sense; it’s a general attempt to extend the notion of rationality to situations with multiple agents. This is obviously important for our purposes, because we aren’t planning (yet) to build robots that live on uninhabited planets in other star systems; we’re going to put the robots in our world, which is inhabited by us.
To make it clear why we need game theory, let’s look at a simple example: Alice and Bob playing soccer in the back garden (figure 3). Alice is about to take a penalty kick and Bob is in goal. Alice is going to shoot to Bob’s left or to his right. Because she is right-footed, it’s a little bit easier and more accurate for Alice to shoot to Bob’s right. Because Alice has a ferocious shot, Bob knows he has to dive one way or the other right away—he won’t have time to wait and see which way the ball is going. Bob could reason like this: “Alice has a better chance of scoring if she shoots to my right, because she’s right-footed, so she’ll choose that, so I’ll dive right.” But Alice is no fool and can imagine Bob thinking this way, in which case she will shoot to Bob’s left. But Bob is no fool and can imagine Alice thinking this way, in which case he will dive to his left. But Alice is no fool and can imagine Bob thinking this way. . . . OK, you get the idea. Put another way: if there is a rational choice for Alice, Bob can figure it out too, anticipate it, and stop Alice from scoring, so the choice couldn’t have been rational in the first place.
FIGURE 3: Alice about to take a penalty kick against Bob.
As early as 1713—once again, in the analysis of gambling games—a solution was found to this conundrum.²³ The trick is not to choose any one action but to choose a randomized strategy. For example, Alice can choose the strategy “shoot to Bob’s right with probability 55 percent and shoot to his left with probability 45 percent.” Bob could choose “dive right with probability 60 percent and left with probability 40 percent.” Each mentally tosses a suitably biased coin just before acting, so they don’t give away their intentions. By acting unpredictably, Alice and Bob avoid the contradictions of the preceding paragraph. Even if Bob works out what Alice’s randomized strategy is, there’s not much he can do about it without a crystal ball.
The next question is, What should the probabilities be? Is Alice’s choice of 55 percent–45 percent rational? The specific values depend on how much more accurate Alice is when shooting to Bob’s right, how good Bob is at saving the shot when he dives the right way, and so on. (See the notes for the complete analysis.²⁴) The general criterion is very simple, however:
1. Alice’s strategy is the best she can devise, assuming that Bob’s is fixed.
2. Bob’s strategy is the best he can devise, assuming that Alice’s is fixed.
If both conditions are satisfied, we say that the strategies are in equilibrium. This kind of equilibrium is called a Nash equilibrium in honor of John Nash, who, in 1950 at the age of twenty-two, proved that such an equilibrium exists for any number of agents with any rational preferences and no matter what the rules of the game might be. After several decades’ struggle with schizophrenia, Nash eventually recovered and was awarded the Nobel Memorial Prize in Economics for this work in 1994.
For Alice and Bob’s soccer game, there is only one equilibrium. In other cases, there may be several, so the concept of Nash equilibria, unlike that of expected-utility decisions, does not always lead to a unique recommendation for how to behave.
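To make this concrete, here is a sketch with invented scoring probabilities (the actual numbers are left to the notes). Each player's equilibrium mixture follows from the indifference condition: it must leave the opponent with nothing to gain by favoring either of their two options.

```python
# Assumed scoring probabilities: score[(a, b)] = probability Alice scores when she
# shoots to side a and Bob dives to side b. All four numbers are invented.
score = {("right", "right"): 0.50, ("right", "left"): 0.90,
         ("left",  "right"): 0.80, ("left",  "left"): 0.40}

rr, rl = score[("right", "right")], score[("right", "left")]
lr, ll = score[("left", "right")], score[("left", "left")]

# Bob's dive-right probability q makes Alice indifferent between her two shots:
#   rr*q + rl*(1-q) = lr*q + ll*(1-q)
q = (rl - ll) / (rl - ll + lr - rr)
# Alice's shoot-right probability p makes Bob indifferent between his two dives:
#   p*rr + (1-p)*lr = p*rl + (1-p)*ll
p = (lr - ll) / (lr - ll + rl - rr)

print(f"Alice shoots right {p:.0%} of the time, Bob dives right {q:.0%}")
print(f"Alice's scoring probability at equilibrium: "
      f"{p*q*rr + p*(1-q)*rl + (1-p)*q*lr + (1-p)*(1-q)*ll:.2f}")
```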
Worse still, there are situations in which the Nash equilibrium seems to lead to highly undesirable outcomes. One such case is the famous prisoner’s dilemma, so named by Nash’s PhD adviser, Albert Tucker, in 1950.²⁵ The game is an abstract model of those all-too-common real-world situations where mutual cooperation would be better for all concerned but people nonetheless choose mutual destruction.
The prisoner’s dilemma works as follows: Alice and Bob are suspects in a crime and are being interrogated separately. Each has a choice: to confess to the police and rat on his or her accomplice, or to refuse to talk.²⁶ If both refuse, they are convicted on a lesser charge and serve two years; if both confess, they are convicted on a more serious charge and serve ten years; if one confesses and the other refuses, the one who confesses goes free and the accomplice serves twenty years.
Now, Alice reasons as follows: “If Bob is going to confess, then I should confess too (ten years is better than twenty); if he is going to refuse, then I should confess (going free is better than spending two years in prison); so either way, I should confess.” Bob reasons the same way. Thus, they both end up confessing to their crimes and serving ten years, even though by jointly refusing they could have served only two years. The problem is that joint refusal isn’t a Nash equilibrium, because each has an incentive to defect and go free by confessing.
Note that Alice could have reasoned as follows: “Whatever reasoning I do, Bob will also do. So we’ll end up choosing the same thing. Since joint refusal is better than joint confession, we should refuse.” This form of reasoning acknowledges that, as rational agents, Alice and Bob will make choices that are correlated rather than independent. It’s just one of many approaches that game theorists have tried in their efforts to obtain less depressing solutions to the prisoner’s dilemma.²⁷
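A minimal sketch of the payoff analysis (the years served come from the text; turning them into negative utilities is my own convention) confirms that mutual confession is the only pure-strategy Nash equilibrium, even though mutual refusal is better for both:

```python
# Payoffs are years served, negated because more prison is worse. For each pure
# strategy pair, check the Nash property: neither player gains by deviating alone.
years = {("refuse", "refuse"): (2, 2),   ("refuse", "confess"): (20, 0),
         ("confess", "refuse"): (0, 20), ("confess", "confess"): (10, 10)}
payoff = {k: (-a, -b) for k, (a, b) in years.items()}
actions = ["refuse", "confess"]

for a in actions:
    for b in actions:
        alice_ok = all(payoff[(a, b)][0] >= payoff[(a2, b)][0] for a2 in actions)
        bob_ok = all(payoff[(a, b)][1] >= payoff[(a, b2)][1] for b2 in actions)
        tag = "  <-- Nash equilibrium" if alice_ok and bob_ok else ""
        print(f"({a:>7}, {b:>7}): {years[(a, b)]} years{tag}")
```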
Another famous example of an undesirable equilibrium is the tragedy of the commons, first analyzed in 1833 by the English economist William Lloyd²⁸ but named, and brought to global attention, by the ecologist Garrett Hardin in 1968.²⁹ The tragedy arises when several people can consume a shared resource—such as common grazing land or fish stocks—that replenishes itself slowly. Absent any social or legal constraints, the only Nash equilibrium among selfish (non-altruistic) agents is for each to consume as much as possible, leading to rapid collapse of the resource. The ideal solution, where everyone shares the resource such that the total consumption is sustainable, is not an equilibrium because each individual has an incentive to cheat and take more than their fair share—imposing the costs on others. In practice, of course, humans do sometimes avoid this tragedy by setting up mechanisms such as quotas and punishments or pricing schemes. They can do this because they are not limited to deciding how much to consume; they can also decide to communicate. By enlarging the decision problem in this way, we find solutions that are better for everyone.
These examples, and many others, illustrate the fact that extending the theory of rational decisions to multiple agents produces many interesting and complex behaviors. It’s also extremely important because, as should be obvious, there is more than one human being. And soon there will be intelligent machines too. Needless to say, we have to achieve mutual cooperation, resulting in benefit to humans, rather than mutual destruction.
Computers
Having a reasonable definition of intelligence is the first ingredient in creating intelligent machines. The second ingredient is a machine in which that definition can be realized. For reasons that will soon become obvious, that machine is a computer. It could have been something different—for example, we might have tried to make intelligent machines out of complex chemical reactions or by hijacking biological cells³⁰—but devices built for computation, from the very earliest mechanical calculators onwards, have always seemed to their inventors to be the natural home for intelligence.
We are so used to computers now that we barely notice their utterly incredible powers. If you have a laptop or a desktop or a smart phone, look at it: a small box, with a way to type characters. Just by typing, you can create programs that turn the box into something new, perhaps something that magically synthesizes moving images of oceangoing ships hitting icebergs or alien planets with tall blue people; type some more, and it translates English into Chinese; type some more, and it listens and speaks; type some more, and it defeats the world chess champion.
This ability of a single box to carry out any process that you can imagine is called universality, a concept first introduced by Alan Turing in 1936.³¹ Universality means that we do not need separate machines for arithmetic, machine translation, chess, speech understanding, or animation: one machine does it all. Your laptop is essentially identical to the vast server farms run by the world’s largest IT companies—even those equipped with fancy, special-purpose tensor processing units for machine learning. It’s also essentially identical to all future computing devices yet to be invented. The laptop can do exactly the same tasks, provided it has enough memory; it just takes a lot longer.
Turing’s paper introducing universality was one of the most important ever written. In it, he described a simple computing device that could accept as input the description of any other computing device, together with that second device’s input, and, by simulating the operation of the second device on its input, produce the same output that the second device would have produced. We now call this first device a universal Turing machine. To prove its universality, Turing introduced precise definitions for two new kinds of mathematical objects: machines and programs. Together, the machine and program define a sequence of events—specifically, a sequence of state changes in the machine and its memory.
In the history of mathematics, new kinds of objects occur quite rarely. Mathematics began with numbers at the dawn of recorded history. Then, around 2000 BCE, ancient Egyptians and Babylonians worked with geometric objects (points, lines, angles, areas, and so on). Chinese mathematicians introduced matrices during the first millennium BCE, while sets as mathematical objects arrived only in the nineteenth century. Turing’s new objects—machines and programs—are perhaps the most powerful mathematical objects ever invented. It is ironic that the field of mathematics largely failed to recognize this, and from the 1940s onwards, computers and computation have been the province of engineering departments in most major universities.
The field that emerged—computer science—exploded over the next seventy years, producing a vast array of new concepts, designs, methods, and applications, as well as seven of the eight most valuable companies in the world.
The central concept in computer science is that of an algorithm, which is a precisely specified method for computing something. Algorithms are, by now, familiar parts of everyday life: a square-root algorithm in a pocket calculator receives a number as input and returns the square root of that number as output; a chess-playing algorithm takes a chess position and returns a move; a route-finding algorithm takes a start location, a goal location, and a street map and returns the fastest route from start to goal. Algorithms can be described in English or in mathematical notation, but to be implemented they must be coded as programs in a programming language. More complex algorithms can be built by using simpler ones as building blocks called subroutines—for example, a self-driving car might use a route-finding algorithm as a subroutine so that it knows where to go. In this way, software systems of immense complexity are built up, layer by layer.
Computer hardware matters because faster computers with more memory allow algorithms to run more quickly and to handle more information. Progress in this area is well known but still mind-boggling. The first commercial electronic programmable computer, the Ferranti Mark I, could execute about a thousand (10³) instructions per second and had about a thousand bytes of main memory. The fastest computer as of early 2019, the Summit machine at the Oak Ridge National Laboratory in Tennessee, executes about 10¹⁸ instructions per second (a thousand trillion times faster) and has 2.5 × 10¹⁷ bytes of memory (250 trillion times more). This progress has resulted from advances in electronic devices and even in the underlying physics, allowing an incredible degree of miniaturization.
Although comparisons between computers and brains are not especially meaningful, the numbers for Summit slightly exceed the raw capacity of the human brain, which, as noted previously, has about 10¹⁵ synapses and a “cycle time” of about one hundredth of a second, for a theoretical maximum of about 10¹⁷ “operations” per second. The biggest difference is power consumption: Summit uses about a million times more power.
Moore’s law, an empirical observation that the number of electronic components on a chip doubles every two years, is expected to continue until 2025 or so, although at a slightly slower rate. For some years, speeds have been limited by the large amount of heat generated by the fast switching of silicon transistors; moreover, circuit sizes cannot get much smaller because the wires and connectors are (as of 2019) no more than twenty-five atoms wide and five to ten atoms thick. Beyond 2025, we will need to use more exotic physical phenomena—including negative capacitance devices,³² single-atom transistors, graphene nanotubes, and photonics—to keep Moore’s law (or its successor) going.
Instead of just speeding up general-purpose computers, another possibility is to build special-purpose devices that are customized to perform just one class of computations. For example, Google’s tensor processing units (TPUs) are designed to perform the calculations required for certain machine learning algorithms. One TPU pod (2018 version) performs roughly 10¹⁷ calculations per second—nearly as much as the Summit machine—but uses about one hundred times less power and is one hundred times smaller. Even if the underlying chip technology remains roughly constant, these kinds of machines can simply be made larger and larger to provide vast quantities of raw computational power for AI systems.
Quantum computation is a different kettle of fish. It uses the strange properties of quantum-mechanical wave functions to achieve something remarkable: with twice the amount of quantum hardware, you can do more than twice the amount of computation! Very roughly, it works like this:³³ Suppose you have a tiny physical device that stores a quantum bit, or qubit. A qubit has two possible states, 0 and 1. Whereas in classical physics the qubit device has to be in one of the two states, in quantum physics the wave function that carries information about the qubit says that it is in both states simultaneously. If you have two qubits, there are four possible joint states: 00, 01, 10, and 11. If the wave function is coherently entangled across the two qubits, meaning that no other physical processes are there to mess it up, then the two qubits are in all four states simultaneously. Moreover, if the two qubits are connected into a quantum circuit that performs some calculation, then the calculation proceeds with all four states simultaneously. With three qubits, you get eight states processed simultaneously, and so on. Now, there are some physical limitations so that the amount of work that gets done is less than exponential in the number of qubits,³⁴ but we know that there are important problems for which quantum computation is provably more efficient than any classical computer.
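As a toy illustration of that exponential growth (this is bookkeeping, not a useful quantum algorithm): the state of n qubits is a vector of 2ⁿ amplitudes, and applying a Hadamard gate to every qubit of the all-zero state spreads it evenly across all of them at once.

```python
# The number of amplitudes the machine "works on simultaneously" doubles with each qubit.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # single-qubit Hadamard gate
for n in range(1, 6):
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)                      # n-qubit gate: H tensored with itself n times
    state = np.zeros(2 ** n)
    state[0] = 1.0                               # all qubits start in |0>
    superposition = op @ state
    print(f"{n} qubits -> {len(superposition)} amplitudes, each {superposition[0]:.4f}")
```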
As of 2019, there are experimental prototypes of small quantum processors in operation with a few tens of qubits, but there are no interesting computing tasks for which a quantum processor is faster than a classical computer. The main difficulty is decoherence—processes such as thermal noise that mess up the coherence of the multi-qubit wave function. Quantum scientists hope to solve the decoherence problem by introducing error correction circuitry, so that any error that occurs in the computation is quickly detected and corrected by a kind of voting process. Unfortunately, error-correcting systems require far more qubits to do the same work: while a quantum machine with a few hundred perfect qubits would be very powerful compared to existing classical computers, we will probably need a few million error-correcting qubits to actually realize those computations. Going from a few tens to a few million qubits will take quite a few years. If, eventually, we get there, that would completely change the picture of what we can do by sheer brute-force computation.³⁵ Rather than waiting for real conceptual advances in AI, we might be able to use the raw power of quantum computation to bypass some of the barriers faced by current “unintelligent” algorithms.
The limits of computation
Even in the 1950s, computers were described in the popular press as “super-brains” that were “faster than Einstein.” So can we say now, finally, that computers are as powerful as the human brain? No. Focusing on raw computing power misses the point entirely. Speed alone won’t give us AI. Running a poorly designed algorithm on a faster computer doesn’t make the algorithm better; it just means you get the wrong answer more quickly. (And with more data there are more opportunities for wrong answers!) The principal effect of faster machines has been to make the time for experimentation shorter, so that research can progress more quickly. It’s not hardware that is holding AI back; it’s software. We don’t yet know how to make a machine really intelligent—even if it were the size of the universe.
Suppose, however, that we do manage to develop the right kind of AI software. Are there any limits placed by physics on how powerful a computer can be? Will those limits prevent us from having enough computing power to create real AI? The answers seem to be yes, there are limits, and no, there isn’t a ghost of a chance that the limits will prevent us from creating real AI. MIT physicist Seth Lloyd has estimated the limits for a laptop-sized computer, based on considerations from quantum theory and entropy.³⁶ The numbers would raise even Carl Sagan’s eyebrows: 10⁵¹ operations per second and 10³⁰ bytes of memory, or approximately a billion trillion trillion times faster and four trillion times more memory than Summit—which, as noted previously, has more raw power than the human brain. Thus, when one hears suggestions that the human mind represents an upper limit on what is physically achievable in our universe,³⁷ one should at least ask for further clarification.
Besides limits imposed by physics, there are other limits on the abilities of computers that originate in the work of computer scientists. Turing himself proved that some problems are undecidable by any computer: the problem is well defined, there is an answer, but there cannot exist an algorithm that always finds that answer. He gave the example of what became known as the halting problem: Can an algorithm decide if a given program has an “infinite loop” that prevents it from ever finishing?³⁸
Turing’s proof that no algorithm can solve the halting problem³⁹ is incredibly important for the foundations of mathematics, but it seems to have no bearing on the issue of whether computers can be intelligent. One reason for this claim is that the same basic limitation seems to apply to the human brain. Once you start asking a human brain to perform an exact simulation of itself simulating itself simulating itself, and so on, you’re bound to run into difficulties. I, for one, have never worried about my inability to do this.
Focusing on decidable problems, then, seems not to place any real restrictions on AI. It turns out, however, that decidable doesn’t mean easy. Computer scientists spend a lot of time thinking about the complexity of problems, that is, the question of how much computation is needed to solve a problem by the most efficient method. Here’s an easy problem: given a list of a thousand numbers, find the biggest number. If it takes one second to check each number, then it takes a thousand seconds to solve this problem by the obvious method of checking each in turn and keeping track of the biggest. Is there a faster method? No, because if a method didn’t check some number in the list, that number might be the biggest, and the method would fail. So, the time to find the largest element is proportional to the size of the list. A computer scientist would say the problem has linear complexity, meaning that it’s very easy; then she would look for something more interesting to work on.
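For completeness, the obvious method looks like this; its running time grows in direct proportion to the length of the list:

```python
# The linear-time method described in the text: check each number once,
# keeping track of the biggest seen so far.
def largest(numbers):
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best

print(largest([17, 3, 99, 42]))   # 99
```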
What gets theoretical computer scientists excited is the fact that many problems appear⁴⁰ to have exponential complexity in the worst case. This means two things: first, all the algorithms we know about require exponential time—that is, an amount of time exponential in the size of the input—to solve at least some problem instances; second, theoretical computer scientists are pretty sure that more efficient algorithms do not exist.
Exponential growth in difficulty means that problems may be solvable in theory (that is, they are certainly decidable) but sometimes unsolvable in practice; we call such problems intractable. An example is the problem of deciding whether a given map can be colored with just three colors, so that no two adjacent regions have the same color. (It is well known that coloring with four different colors is always possible.) With a million regions, it may be that there are some cases (not all, but some) that require something like 2¹⁰⁰⁰ computational steps to find the answer, which means about 10²⁷⁵ years on the Summit supercomputer or a mere 10²⁴² years on Seth Lloyd’s ultimate-physics laptop. The age of the universe, about 10¹⁰ years, is a tiny blip compared to this.
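A brute-force sketch (the tiny map is invented) shows where the blowup comes from: with n regions there are 3ⁿ candidate colorings, and in the worst case a solver may have to examine a substantial fraction of them.

```python
# Try every assignment of three colors to the regions; accept if no two bordering
# regions share a color. The loop runs over 3**len(regions) assignments.
from itertools import product

def three_colorable(regions, borders):
    for assignment in product(range(3), repeat=len(regions)):
        colors = dict(zip(regions, assignment))
        if all(colors[a] != colors[b] for a, b in borders):
            return True
    return False

# A small, made-up map: four regions, every pair sharing a border except A-D.
regions = ["A", "B", "C", "D"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
print(three_colorable(regions, borders))   # True
```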
Does the existence of intractable problems give us any reason to think that computers cannot be as intelligent as humans? No. There is no reason to suppose that humans can solve intractable problems either. Quantum computation helps a bit (whether in machines or brains), but not enough to change the basic conclusion.
Complexity means that the real-world decision problem—the problem of deciding what to do right now, at every instant in one’s life—is so difficult that neither humans nor computers will ever come close to finding perfect solutions.
This has two consequences: first, we expect that, most of the time, real-world decisions will be at best halfway decent and certainly far from optimal; second, we expect that a great deal of the mental architecture of humans and computers—the way their decision processes actually operate—will be designed to overcome complexity to the extent possible—that is, to make it possible to find even halfway decent answers despite the overwhelming complexity of the world. Finally, we expect that the first two consequences will remain true no matter how intelligent and powerful some future machine may be. The machine may be far more capable than us, but it will still be far from perfectly rational.
Intelligent Computers
The development of logic by Aristotle and others made available precise rules for rational thought, but we do not know whether Aristotle ever contemplated the possibility of machines that implemented these rules. In the thirteenth century, the influential Catalan philosopher, seducer, and mystic Ramon Llull came much closer: he actually made paper wheels inscribed with symbols, by means of which he could generate logical combinations of assertions. The great seventeenth-century French mathematician Blaise Pascal was the first to develop a real and practical mechanical calculator. Although it could only add and subtract and was used mainly in his father’s tax-collecting office, it led Pascal to write, “The arithmetical machine produces effects which appear nearer to thought than all the actions of animals.”
Technology took a dramatic leap forward in the nineteenth century when the British mathematician and inventor Charles Babbage designed the Analytical Engine, a programmable universal machine in the sense defined later by Turing. He was helped in his work by Ada, Countess of Lovelace, daughter of the romantic poet and adventurer Lord Byron. Whereas Babbage hoped to use the Analytical Engine to compute accurate mathematical and astronomical tables, Lovelace understood its true potential,⁴¹ describing it in 1842 as “a thinking or . . . a reasoning machine” that could reason about “all subjects in the universe.” So, the basic conceptual elements for creating AI were in place! From that point, surely, AI would be just a matter of time. . . .
A long time, unfortunately—the Analytical Engine was never built, and Lovelace’s ideas were largely forgotten. With Turing’s theoretical work in 1936 and the subsequent impetus of World War II, universal computing machines were finally realized in the 1940s. Thoughts about creating intelligence followed immediately. Turing’s 1950 paper, “Computing Machinery and Intelligence,”⁴² is the best known of many early works on the possibility of intelligent machines. Skeptics were already asserting that machines would never be able to do X, for almost any X you could think of, and Turing refuted those assertions. He also proposed an operational test for intelligence, called the imitation game, which subsequently (in simplified form) became known as the Turing test. The test measures the behavior of the machine—specifically, its ability to fool a human interrogator into thinking that it is human.
The imitation game serves a specific role in Turing’s paper—namely as a thought experiment to deflect skeptics who supposed that machines could not think in the right way, for the right reasons, with the right kind of awareness. Turing hoped to redirect the argument towards the issue of whether a machine could behave in a certain way; and if it did—if it was able, say, to discourse sensibly on Shakespeare’s sonnets and their meanings—then skepticism about AI could not really be sustained. Contrary to common interpretations, I doubt that the test was intended as a true definition of intelligence, in the sense that a machine is intelligent if and only if it passes the Turing test. Indeed, Turing wrote, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” Another reason not to view the test as a definition for AI is that it’s a terrible definition to work with. And for that reason, mainstream AI researchers have expended almost no effort to pass the Turing test.
The Turing test is not useful for AI because it’s an informal and highly contingent definition: it depends on the enormously complicated and largely unknown characteristics of the human mind, which derive from both biology and culture. There is no way to “unpack” the definition and work back from it to create machines that will provably pass the test. Instead, AI has focused on rational behavior, just as described previously: a machine is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.
Initially, like Aristotle, AI researchers identified “what it wants” with a goal that is either satisfied or not. These goals could be in toy worlds like the 15-puzzle, where the goal is to get all the numbered tiles lined up in order from 1 to 15 in a little (simulated) square tray; or they might be in real, physical environments: in the early 1970s, the Shakey robot at SRI in California was pushing large blocks into desired configurations, and Freddy at the University of Edinburgh was assembling a wooden boat from its component pieces. All this work was done using logical problem-solvers and planning systems to construct and execute guaranteed plans to achieve goals.⁴³
By the 1980s, it was clear that logical reasoning alone could not suffice, because, as noted previously, there is no plan that is guaranteed to get you to the airport. Logic requires certainty, and the real world simply doesn’t provide it. Meanwhile, the Israeli-American computer scientist Judea Pearl, who went on to win the 2011 Turing Award, had been working on methods for uncertain reasoning based in probability theory.⁴⁴ AI researchers gradually accepted Pearl’s ideas; they adopted the tools of probability theory and utility theory and thereby connected AI to other fields such as statistics, control theory, economics, and operations research. This change marked the beginning of what some observers call modern AI.
Agents and environments
The central concept of modern AI is the intelligent agent—something that perceives and acts. The agent is a process occurring over time, in the sense that a stream of perceptual inputs is converted into a stream of actions. For example, suppose the agent in question is a self-driving taxi taking me to the airport. Its inputs might include eight RGB cameras operating at thirty frames per second; each frame consists of perhaps 7.5 million pixels, each with an image intensity value in each of three color channels, for a total of more than five gigabytes per second. (The flow of data from the two hundred million photoreceptors in the retina is even larger, which partially explains why vision occupies such a large fraction of the human brain.) The taxi also gets data from an accelerometer one hundred times per second, as well as GPS data. This incredible flood of raw data is transformed by the simply gargantuan computing power of billions of transistors (or neurons) into smooth, competent driving behavior. The taxi’s actions include the electronic signals sent to the steering wheel, brakes, and accelerator, twenty times per second. (For an experienced human driver, most of this maelstrom of activity is unconscious: you may be aware only of making decisions such as “overtake this slow truck” or “stop for gas,” but your eyes, brain, nerves, and muscles are still doing all the other stuff.) For a chess program, the inputs are mostly just the clock ticks, with the occasional notification of the opponent’s move and the new board state, while the actions are mostly doing nothing while the program is thinking, and occasionally choosing a move and notifying the opponent. For a personal digital assistant, or PDA, such as Siri or Cortana, the inputs include not just the acoustic signal from the microphone (sampled forty-eight thousand times per second) and input from the touch screen but also the content of each Web page that it accesses, while the actions include both speaking and displaying material on the screen.
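A back-of-the-envelope check of the camera figure (assuming one byte per pixel per color channel, which the text does not state):

```python
# Rough arithmetic behind "more than five gigabytes per second".
cameras, frames_per_second, pixels_per_frame = 8, 30, 7.5e6
channels, bytes_per_channel = 3, 1        # one byte per channel is my assumption
rate = cameras * frames_per_second * pixels_per_frame * channels * bytes_per_channel
print(f"{rate / 1e9:.1f} GB per second")  # 5.4 GB per second
```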
The way we build intelligent agents depends on the nature of the problem we face. This, in turn, depends on three things: first, the nature of the environment the agent will operate in—a chessboard is a very different place from a crowded freeway or a mobile phone; second, the observations and actions that connect the agent to the environment—for example, Siri might or might not have access to the phone’s camera so that it can see; and third, the agent’s objective—teaching the opponent to play better chess is a very different task from winning the game.
To give just one example of how the design of the agent depends on these things: If the objective is to win the game, a chess program need consider only the current board state and does not need any memory of past events.⁴⁵ The chess tutor, on the other hand, should continually update its model of which aspects of chess the pupil does or does not understand so that it can provide useful advice. In other words, for the chess tutor, the pupil’s mind is a relevant part of the environment. Moreover, unlike the board, it is a part of the environment that is not directly observable.
The characteristics of problems that influence the design of agents include at least the following:⁴⁶
- whether the environment is fully observable (as in chess, where the inputs provide direct access to all the relevant aspects of the current state of the environment) or partially observable (as in driving, where one’s field of view is limited, vehicles are opaque, and other drivers’ intentions are mysterious);
- whether the environment and actions are discrete (as in chess) or effectively continuous (as in driving);
- whether the environment contains other agents (as in chess and driving) or not (as in finding the shortest routes on a map);
- whether the outcomes of actions, as specified by the “rules” or “physics” of the environment, are predictable (as in chess) or unpredictable (as in traffic and weather), and whether those rules are known or unknown;
- whether the environment is dynamically changing, so that the time to make decisions is tightly constrained (as in driving) or not (as in tax strategy optimization);
- the length of the horizon over which decision quality is measured according to the objective—this may be very short (as in emergency braking), of intermediate duration (as in chess, where a game lasts up to about one hundred moves), or very long (as in driving me to the airport, which might take hundreds of thousands of decision cycles if the taxi is deciding one hundred times per second).
As one can imagine, these characteristics give rise to a bewildering variety of problem types. Just multiplying the choices listed above gives 192 types. One can find real-world problem instances for all the types. Some types are typically studied in areas outside AI—for example, designing an autopilot that maintains level flight is a short-horizon, continuous, dynamic problem that is usually studied in the field of control theory.
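One plausible way of arriving at 192 is to multiply one option count per characteristic in the list above (the two-way and three-way splits are my reading of that list):

```python
# Each characteristic contributes a factor; the horizon contributes three options.
choices = {"observable": 2, "discrete": 2, "other agents": 2,
           "predictable": 2, "rules known": 2, "dynamic": 2, "horizon": 3}
total = 1
for options in choices.values():
    total *= options
print(total)   # 192
```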
Obviously some problem types are easier than others. AI has made a lot of progress on problems such as board games and puzzles that are observable, discrete, deterministic, and have known rules. For the easier problem types, AI researchers have developed fairly general and effective algorithms and a solid theoretical understanding; often, machines exceed human performance on these kinds of problems. We can tell that an algorithm is general because we have mathematical proofs that it gives optimal or near-optimal results with reasonable computational complexity across an entire class of problems, and because it works well in practice on those kinds of problems without needing any problem-specific modifications.
Video games such as StarCraft are quite a bit harder than board games: they involve hundreds of moving parts and time horizons of thousands of steps, and the board is only partially visible at any given time. At each point, a player might have a choice of at least 10⁵⁰ moves, compared to about 10² in Go.⁴⁷ On the other hand, the rules are known and the world is discrete with only a few types of objects. As of early 2019, machines are as good as some professional StarCraft players but not yet ready to challenge the very best humans.⁴⁸ More important, it took a fair amount of problem-specific effort to reach that point; general-purpose methods are not quite ready for StarCraft.
Problems such as running a government or teaching molecular biology are much harder. They have complex, mostly unobservable environments (the state of a whole country, or the state of a student’s mind), far more objects and types of objects, no clear definition of what the actions are, mostly unknown rules, a great deal of uncertainty, and very long time scales. We have ideas and off-the-shelf tools that address each of these characteristics separately but, as yet, no general methods that cope with all the characteristics simultaneously. When we build AI systems for these kinds of tasks, they tend to require a great deal of problem-specific engineering and are often very brittle.
Progress towards generality occurs when we devise methods that are effective for harder problems within a given type or methods that require fewer and weaker assumptions so they are applicable to more problems. General-purpose AI would be a method that is applicable across all problem types and works effectively for large and difficult instances while making very few assumptions. That’s the ultimate goal of AI research: a system that needs no problem-specific engineering and can simply be asked to teach a molecular biology class or run a government. It would learn what it needs to learn from all the available resources, ask questions when necessary, and begin formulating and executing plans that work.
Such a general-purpose method does not yet exist, but we are moving closer. Perhaps surprisingly, a lot of this progress towards general AI results from research that isn’t about building scary, general-purpose AI systems. It comes from research on tool AI or narrow AI, meaning nice, safe, boring AI systems designed for particular problems such as playing Go or recognizing handwritten digits. Research on this kind of AI is often thought to present no risk because it’s problem-specific and nothing to do with general-purpose AI.
This belief results from a misunderstanding of what kind of work goes into these systems. In fact, research on tool AI can and often does produce progress towards general-purpose AI, particularly when it is done by researchers with good taste attacking problems that are beyond the capabilities of current general methods. Here, good taste means that the solution approach is not merely an ad hoc encoding of what an intelligent person would do in such-and-such situation but an attempt to provide the machine with the ability to figure out the solution for itself.
For example, when the AlphaGo team at Google DeepMind succeeded in creating their world-beating Go program, they did this without really working on Go. What I mean by this is that they didn’t write a whole lot of Go-specific code saying what to do in different kinds of Go situations. They didn’t design decision procedures that work only for Go. Instead, they made improvements to two fairly general-purpose techniques—lookahead search to make decisions and reinforcement learning to learn how to evaluate positions—so that they were sufficiently effective to play Go at a superhuman level. Those improvements are applicable to many other problems, including problems as far afield as robotics. Just to rub it in, a version of AlphaGo called AlphaZero recently learned to trounce AlphaGo at Go, and also to trounce Stockfish (the world’s best chess program, far better than any human) and Elmo (the world’s best shogi program, also better than any human). AlphaZero did all this in one day.⁴⁹
There was also substantial progress towards general-purpose AI in research on recognizing handwritten digits in the 1990s. Yann LeCun’s team at AT&T Labs didn’t write special algorithms to recognize “8” by searching for curvy lines and loops; instead, they improved on existing neural network learning algorithms to produce convolutional neural networks. Those networks, in turn, exhibited effective character recognition after suitable training on labeled examples. The same algorithms can learn to recognize letters, shapes, stop signs, dogs, cats, and police cars. Under the headline of “deep learning,” they have revolutionized speech recognition and visual object recognition. They are also one of the key components in AlphaZero as well as in most of the current self-driving car projects.
If you think about it, it’s hardly surprising that progress towards general AI is going to occur in narrow-AI projects that address specific tasks; those tasks give AI researchers something to get their teeth into. (There’s a reason people don’t say, “Staring out the window is the mother of invention.”) At the same time, it’s important to understand how much progress has occurred and where the boundaries are. When AlphaGo defeated Lee Sedol and later all the other top Go players, many people assumed that because a machine had learned from scratch to beat the human race at a task known to be very difficult even for highly intelligent humans, it was the beginning of the end—just a matter of time before AI took over. Even some skeptics may have been convinced when AlphaZero won at chess and shogi as well as Go. But AlphaZero has hard limitations: it works only in the class of discrete, observable, two-player games with known rules. The approach simply won’t work at all for driving, teaching, running a government, or taking over the world.
These sharp boundaries on machine competence mean that when people talk about “machine IQ” increasing rapidly and threatening to exceed human IQ, they are talking nonsense. To the extent that the concept of IQ makes sense when applied to humans, it’s because human abilities tend to be correlated across a wide range of cognitive activities. Trying to assign an IQ to machines is like trying to get four-legged animals to compete in a human decathlon. True, horses can run fast and jump high, but they have a lot of trouble with pole-vaulting and throwing the discus.
Objectives and the standard model
Looking at an intelligent agent from the outside, what matters is the stream of actions it generates from the stream of inputs it receives. From the inside, the actions have to be chosen by an agent program. Humans are born with one agent program, so to speak, and that program learns over time to act reasonably successfully across a huge range of tasks. So far, that is not the case for AI: we don’t know how to build one general-purpose AI program that does everything, so instead we build different types of agent programs for different types of problems. I will need to explain at least a tiny bit about how these different agent programs work; more detailed explanations are given in the appendices at the end of the book for those who are interested. (Pointers to particular appendices are given as superscripts like this^(A) and this.^(D)) The primary focus here is on how the standard model is instantiated in these various kinds of agents—in other words, how the objective is specified and communicated to the agent.
The simplest way to communicate an objective is in the form of a goal. When you get into your self-driving car and touch the “home” icon on the screen, the car takes this as its objective and proceeds to plan and execute a route. A state of the world either satisfies the goal (yes, I’m at home) or it doesn’t (no, I don’t live at the San Francisco Airport). In the classical period of AI research, before uncertainty became a primary issue in the 1980s, most AI research assumed a world that was fully observable and deterministic, and goals made sense as a way to specify objectives. Sometimes there is also a cost function to evaluate solutions, so an optimal solution is one that minimizes total cost while reaching the goal. For the car, this might be built in—perhaps the cost of a route is some fixed combination of the time and fuel consumption—or the human might have the option of specifying the trade-off between the two.
The key to achieving such objectives is the ability to “mentally simulate” the effects of possible actions, sometimes called lookahead search. Your self-driving car has an internal map, so it knows that driving east from San Francisco on the Bay Bridge gets you to Oakland. Algorithms originating in the 1960s⁵⁰ find optimal routes by looking ahead and searching through many possible action sequences.^(A) These algorithms form a ubiquitous part of modern infrastructure: they provide not just driving directions but also airline travel solutions, robotic assembly, construction planning, and delivery logistics. With some modifications to handle the impertinent behavior of opponents, the same idea of lookahead applies to games such as tic-tac-toe, chess, and Go, where the goal is to win according to the game’s particular definition of winning.
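Here is a minimal sketch of the idea (the road map and driving times are invented): expand the cheapest partial route first until the goal is reached. This is essentially Dijkstra's algorithm, a close relative of the 1960s route-finding algorithms mentioned above.

```python
# Uniform-cost lookahead search over an invented road map (edge weights are minutes).
import heapq

roads = {"San Francisco": {"Oakland": 20, "Daly City": 15},
         "Daly City": {"SFO Airport": 10},
         "Oakland": {"SFO Airport": 35},
         "SFO Airport": {}}

def best_route(start, goal):
    frontier = [(0, start, [start])]          # (cost so far, current place, route)
    visited = set()
    while frontier:
        cost, place, route = heapq.heappop(frontier)
        if place == goal:
            return cost, route
        if place in visited:
            continue
        visited.add(place)
        for nxt, minutes in roads[place].items():
            heapq.heappush(frontier, (cost + minutes, nxt, route + [nxt]))
    return None

print(best_route("San Francisco", "SFO Airport"))
# (25, ['San Francisco', 'Daly City', 'SFO Airport'])
```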
Lookahead algorithms are incredibly effective for their specific tasks, but they are not very flexible. For example, AlphaGo “knows” the rules of Go, but only in the sense that it has two subroutines, written in a traditional programming language such as C++: one subroutine generates all the possible legal moves and the other encodes the goal, determining whether a given state is won or lost. For AlphaGo to play a different game, someone has to rewrite all this C++ code. Moreover, if you give it a new goal—say, visiting the exoplanet that orbits Proxima Centauri—it will explore billions of sequences of Go moves in a vain attempt to find a sequence that achieves the goal. It cannot look inside the C++ code and determine the obvious: no sequence of Go moves gets you to Proxima Centauri. AlphaGo’s knowledge is essentially locked inside a black box.
In 1958, two years after his Dartmouth summer meeting had initiated the field of artificial intelligence, John McCarthy proposed a much more general approach that opens up the black box: writing general-purpose reasoning programs that can absorb knowledge on any topic and reason with it to answer any answerable question.⁵¹ One particular kind of reasoning would be practical reasoning of the kind suggested by Aristotle: “Doing actions A, B, C, . . . will achieve goal G.” The goal could be anything at all: make sure the house is tidy before I get home, win a game of chess without losing either of your knights, reduce my taxes by 50 percent, visit Proxima Centauri, and so on. McCarthy’s new class of programs soon became known as knowledge-based systems.⁵²
To make knowledge-based systems possible requires answering two questions. First, how can knowledge be stored in a computer? Second, how can a computer reason correctly with that knowledge to draw new conclusions? Fortunately, ancient Greek philosophers—particularly Aristotle—provided basic answers to these questions long before the advent of computers. In fact, it seems quite likely that, had Aristotle been given access to a computer (and some electricity, I suppose), he would have been an AI researcher. Aristotle’s answer, reiterated by McCarthy, was to use formal logic^(B) as the basis for knowledge and reasoning.
There are two kinds of logic that really matter in computer science. The first, called propositional or Boolean logic, was known to the Greeks as well as to ancient Chinese and Indian philosophers. It is the same language of AND gates, NOT gates, and so on that makes up the circuitry of computer chips. In a very literal sense, a modern CPU is just a very large mathematical expression—hundreds of millions of pages—written in the language of propositional logic. The second kind of logic, and the one that McCarthy proposed to use for AI, is called first-order logic.^(B) The language of first-order logic is far more expressive than propositional logic, which means that there are things that can be expressed very easily in first-order logic that are painful or impossible to write in propositional logic. For example, the rules of Go take about a page in first-order logic but millions of pages in propositional logic. Similarly, we can easily express knowledge about chess, British citizenship, tax law, buying and selling, moving, painting, cooking, and many other aspects of our commonsense world.
In principle, then, the ability to reason with first-order logic gets us a long way towards general-purpose intelligence. In 1930, the brilliant Austrian logician Kurt Gödel had published his famous completeness theorem,⁵³ proving that there is an algorithm with the following property:⁵⁴
For any collection of knowledge and any question expressible in first-order logic, the algorithm will tell us the answer to the question if there is one.
This is a pretty incredible guarantee. It means, for example, that we can tell the system the rules of Go and it will tell us (if we wait long enough) whether there is an opening move that wins the game. We can tell it facts about local geography, and it will tell us the way to the airport. We can tell it facts about geometry and motion and utensils, and it will tell the robot how to lay the table for dinner. More generally, given any achievable goal and sufficient knowledge of the effects of its actions, an agent can use the algorithm to construct a plan that it can execute to achieve the goal.
FIGURE 4: Shakey the robot, circa 1970. In the background are some of the objects that Shakey pushed around in its suite of rooms.
It must be said that Gödel did not actually provide an algorithm; he merely proved that one existed. In the early 1960s, real algorithms for logical reasoning began to appear,⁵⁵ and McCarthy’s dream of generally intelligent systems based on logic seemed within reach. The first major mobile robot project in the world, SRI’s Shakey project, was based on logical reasoning (see figure 4). Shakey received a goal from its human designers, used vision algorithms to create logical assertions describing the current situation, performed logical inference to derive a guaranteed plan to achieve the goal, and then executed the plan. Shakey was “living” proof that Aristotle’s analysis of human cognition and action was at least partially correct.
Unfortunately, Aristotle’s (and McCarthy’s) analysis was far from being completely correct. The main problem is ignorance—not, I hasten to add, on the part of Aristotle or McCarthy, but on the part of all humans and machines, present and future. Very little of our knowledge is absolutely certain. In particular, we don’t know very much about the future. Ignorance is just an insuperable problem for a purely logical system. If I ask, “Will I get to the airport on time, if I leave three hours before my flight?” or “Can I obtain a house by buying a winning lottery ticket and then buying the house with the proceeds?” the correct answer will be, in each case, “I don’t know.” The reason is that, for each question, both yes and no are logically possible. As a practical matter, one can never be absolutely certain of any empirical question unless the answer is already known.⁵⁶ Fortunately, certainty is completely unnecessary for action: we just need to know which action is best, not which action is certain to succeed.
Uncertainty means that the “purpose put into the machine” cannot, in general, be a precisely delineated goal, to be achieved at all costs. There is no longer such a thing as a “sequence of actions that achieves the goal,” because any sequence of actions will have multiple possible outcomes, some of which won’t achieve the goal. The likelihood of success really matters: leaving for the airport three hours in advance of your flight may mean that you won’t miss the flight and buying a lottery ticket may mean that you’ll win enough to buy a new house, but these are very different mays. Goals cannot be rescued by looking for plans that maximize the probability of achieving the goal. A plan that maximizes the probability of getting to the airport in time to catch a flight might involve leaving home days in advance, organizing an armed escort, lining up many alternative means of transport in case the others break down, and so on. Inevitably, one must take into account the relative desirabilities of different outcomes as well as their likelihoods.
Instead of a goal, then, we could use a utility function to describe the desirability of different outcomes or sequences of states. Often, the utility of a sequence of states is expressed as a sum of rewards for each of the states in the sequence. Given a purpose defined by a utility or reward function, the machine aims to produce behavior that maximizes its expected utility or expected sum of rewards, averaged over the possible outcomes weighted by their probabilities. Modern AI is partly a rebooting of McCarthy’s dream, except with utilities and probabilities instead of goals and logic.
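A toy comparison makes the distinction concrete (every probability and utility below is invented): the elaborate plan maximizes the probability of catching the flight, yet its expected utility is far lower once the cost of the plan itself is counted.

```python
# Maximizing P(goal achieved) is not the same as maximizing expected utility.
plans = {
    # (probability of catching the flight, utility if caught, utility if missed)
    "leave 3 hours early":              (0.98,  100, -500),
    "leave 2 days early, armed escort": (0.999, -900, -1500),
}
for plan, (p, u_catch, u_miss) in plans.items():
    eu = p * u_catch + (1 - p) * u_miss
    print(f"{plan}: P(catch) = {p}, expected utility = {eu:.1f}")
```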
Pierre-Simon Laplace, the great French mathematician, wrote in 1814, “The theory of probabilities is just common sense reduced to calculus.”⁵⁷ It was not until the 1980s, however, that a practical formal language and reasoning algorithms were developed for probabilistic knowledge. This was the language of Bayesian networks,^(C) introduced by Judea Pearl. Roughly speaking, Bayesian networks are the probabilistic cousins of propositional logic. There are also probabilistic cousins of first-order logic, including Bayesian logic⁵⁸ and a wide variety of probabilistic programming languages.
Bayesian networks and Bayesian logic are named after the Reverend Thomas Bayes, a British clergyman whose lasting contribution to modern thought—now known as Bayes’ theorem—was published in 1763, shortly after his death, by his friend Richard Price.⁵⁹ In its modern form, as suggested by Laplace, the theorem describes in a very simple way how a prior probability—the initial degree of belief one has in a set of possible hypotheses—becomes a posterior probability as a result of observing some evidence. As more new evidence arrives, the posterior becomes the new prior and the process of Bayesian updating repeats ad infinitum. This process is so fundamental that the modern idea of rationality as maximization of expected utility is sometimes called Bayesian rationality. It assumes that a rational agent has access to a posterior probability distribution over possible current states of the world, as well as over hypotheses about the future, based on all its past experience.
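Here is a minimal sketch of that updating loop (the coin-bias example is my own, not from the text): each observation turns the current prior into a posterior, which then serves as the prior for the next observation.

```python
# Bayes' theorem applied repeatedly: posterior proportional to prior times likelihood.
hypotheses = {"fair coin": 0.5, "two-headed coin": 1.0}   # P(heads | hypothesis)
prior = {"fair coin": 0.9, "two-headed coin": 0.1}

evidence = ["heads", "heads", "heads", "heads"]
for observation in evidence:
    unnormalized = {h: prior[h] * (p if observation == "heads" else 1 - p)
                    for h, p in hypotheses.items()}
    total = sum(unnormalized.values())
    prior = {h: v / total for h, v in unnormalized.items()}   # posterior becomes new prior
    print({h: round(v, 3) for h, v in prior.items()})
```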
Researchers in operations research, control theory, and AI have also developed a variety of algorithms for decision making under uncertainty, some dating back to the 1950s. These so-called “dynamic programming” algorithms are the probabilistic cousins of lookahead search and planning and can generate optimal or near-optimal behavior for all sorts of practical problems in finance, logistics, transportation, and so on, where uncertainty plays a significant role.^(C) The purpose is put into these machines in the form of a reward function, and the output is a policy that specifies an action for every possible state the agent could get itself into.
For complex problems such as backgammon and Go, where the number of states is enormous and the reward comes only at the end of the game, lookahead search won’t work. Instead, AI researchers have developed a method called reinforcement learning, or RL for short. RL algorithms learn from direct experience of reward signals in the environment, much as a baby learns to stand up from the positive reward of being upright and the negative reward of falling over. As with dynamic programming algorithms, the purpose put into an RL algorithm is the reward function, and the algorithm learns an estimator for the value of states (or sometimes the value of actions). This estimator can be combined with relatively myopic lookahead search to generate highly competent behavior.
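By contrast with the dynamic-programming sketch above, which needs the transition model, the following tabular Q-learning sketch learns action values purely from sampled rewards; the toy environment and learning parameters are invented for illustration and are not how the systems mentioned above were built.

```python
import random

# Tabular Q-learning sketch: the agent never sees the model, only sampled
# transitions and reward signals. The toy environment is an invented placeholder.

def step(state, action):
    """Toy environment: returns (next_state, reward)."""
    if state == "s0":
        return ("s1", 0.0) if action == "right" else ("s0", 0.0)
    if state == "s1":
        return ("goal", 1.0) if action == "right" else ("s0", 0.0)
    return ("goal", 0.0)

actions = ["left", "right"]
Q = {(s, a): 0.0 for s in ("s0", "s1", "goal") for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    state = "s0"
    for _ in range(100):                 # cap episode length
        if state == "goal":
            break
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Temporal-difference update toward reward plus discounted best next value.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```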
The first successful reinforcement learning system was Arthur Samuel’s checkers program, which created a sensation when it was demonstrated on television in 1956. The program learned essentially from scratch, by playing against itself and observing the rewards of winning and losing.⁶⁰ In 1992, Gerry Tesauro applied the same idea to the game of backgammon, achieving world-champion-level play after 1,500,000 games.⁶¹ Beginning in 2016, DeepMind’s AlphaGo and its descendants used reinforcement learning and self-play to defeat the best human players at Go, chess, and shogi.
Reinforcement learning algorithms can also learn how to select actions based on raw perceptual input. For example, DeepMind’s DQN system learned to play forty-nine different Atari video games entirely from scratch—including Pong, Freeway, and Space Invaders.⁶² It used only the screen pixels as input and the game score as a reward signal. In most of the games, DQN learned to play better than a professional human player—despite the fact that DQN has no a priori notion of time, space, objects, motion, velocity, or shooting. It is quite hard to work out what DQN is actually doing, besides winning.
If a newborn baby learned to play dozens of video games at superhuman levels on its first day of life, or became world champion at Go, chess, and shogi, we might suspect demonic possession or alien intervention. Remember, however, that all these tasks are much simpler than the real world: they are fully observable, they involve short time horizons, and they have relatively small state spaces and simple, predictable rules. Relaxing any of these conditions means that the standard methods will fail.
Current research, on the other hand, is aimed precisely at going beyond standard methods so that AI systems can operate in larger classes of environments. On the day I wrote the preceding paragraph, for example, OpenAI announced that its team of five AI programs had learned to beat experienced human teams at the game Dota 2. (For the uninitiated, who include me: Dota 2 is an updated version of Defense of the Ancients, a real-time strategy game in the Warcraft family; it is currently the most lucrative and competitive e-sport, with prizes in the millions of dollars.) Dota 2 involves communication, teamwork, and quasi-continuous time and space. Games last for tens of thousands of time steps, and some degree of hierarchical organization of behavior seems to be essential. Bill Gates described the announcement as “a huge milestone in advancing artificial intelligence.”⁶³ A few months later, an updated version of the program defeated the world’s top professional Dota 2 team.⁶⁴
Games such as Go and Dota 2 are a good testing ground for reinforcement learning methods because the reward function comes with the rules of the game. The real world is less convenient, however, and there have been dozens of cases in which faulty definitions of rewards led to weird and unanticipated behaviors.⁶⁵ Some are innocuous, like the simulated evolution system that was supposed to evolve fast-moving creatures but in fact produced creatures that were enormously tall and moved fast by falling over.⁶⁶ Others are less innocuous, like the social-media click-through optimizers that seem to be making a fine mess of our world.
The final category of agent program I will consider is the simplest: programs that connect perception directly to action, without any intermediate deliberation or reasoning. In AI, we call this kind of program a reflex agent—a reference to the low-level neural reflexes exhibited by humans and animals, which are not mediated by thought.⁶⁷ For example, the human blinking reflex connects the outputs of low-level processing circuits in the visual system directly to the motor area that controls the eyelids, so that any rapidly looming region in the visual field causes a hard blink. You can test it now by trying (not too hard) to poke yourself in the eye with your finger. We can think of this reflex system as a simple “rule” of the following form:
if ⟨a rapidly looming region is detected in the visual field⟩ then ⟨blink⟩.
The blinking reflex does not “know what it’s doing”: the objective (of shielding the eyeball from foreign objects) is nowhere represented; the knowledge (that a rapidly looming region corresponds to an object approaching the eye, and that an object approaching the eye might damage it) is nowhere represented. Thus, when the non-reflex part of you wants to put in eye drops, the reflex part still blinks.
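A reflex agent can be written as nothing more than a condition-action rule. In the sketch below, a purely illustrative stand-in with an arbitrary threshold, neither the objective of shielding the eye nor any knowledge about looming objects appears anywhere in the agent.

```python
# A reflex agent is a direct mapping from percept to action.
# The percept encoding and threshold are invented for illustration.

LOOM_THRESHOLD = 0.5  # how fast a region must expand to trigger a blink (arbitrary)

def blink_reflex(looming_rate):
    """looming_rate: expansion speed of the fastest-growing region in view."""
    if looming_rate > LOOM_THRESHOLD:
        return "blink"
    return "do nothing"

# The rule implements the designer's objective of shielding the eyeball,
# but that objective is represented nowhere in the agent itself.
print(blink_reflex(0.9))  # -> blink
print(blink_reflex(0.1))  # -> do nothing
```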
Another familiar reflex is emergency braking—when the car in front stops unexpectedly or a pedestrian steps into the road. Quickly deciding whether braking is required is not easy: when a test vehicle in autonomous mode killed a pedestrian in 2018, Uber explained that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior.”⁶⁸ Here, the human designer’s objective is clear—don’t kill pedestrians—but the agent’s policy (had it been activated) implements it incorrectly. Again, the objective is not represented in the agent: no autonomous vehicle today knows that people don’t like to be killed.
Reflex actions also play a role in more routine tasks such as staying in lane: as the car drifts ever so slightly out of the ideal lane position, a simple feedback control system can nudge the steering wheel in the opposite direction to correct the drift. The size of the nudge would depend on how far the car drifted. These kinds of control systems are usually designed to minimize the square of the tracking error added up over time. The designer derives a feedback control law that, under certain assumptions about speed and road curvature, approximately implements this minimization.⁶⁹ A similar system is operating all the time while you are standing up; if it were to stop working, you’d fall over within a few seconds. As with the blinking reflex, it’s quite hard to turn this mechanism off and allow yourself to fall over.
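A minimal sketch of such a feedback law, assuming a simple proportional controller and a one-dimensional drift model invented for illustration (the gain and dynamics are placeholders, not the control law of any real vehicle):

```python
# Minimal proportional-feedback sketch for lane keeping.
# The drift dynamics and gain below are invented placeholders.

gain = 0.4            # how strongly to nudge the steering for a given drift
position = 1.0        # metres from lane centre (initial drift)
total_squared_error = 0.0

for t in range(50):
    nudge = -gain * position               # steer opposite to the drift,
                                           # in proportion to its size
    position += nudge + 0.01               # apply the correction plus a small constant drift
    total_squared_error += position ** 2   # the quantity such designs try to keep small

print(f"final offset: {position:.3f} m, accumulated squared error: {total_squared_error:.3f}")
```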
Reflex agents, then, implement a designer’s objective, but do not know what the objective is or why they are acting in a certain way. This means they cannot really make decisions for themselves; someone else, typically the human designer or perhaps the process of biological evolution, has to decide everything in advance. It is very hard to create a good reflex agent by manual programming except for very simple tasks such as tic-tac-toe or emergency braking. Even in those cases, the reflex agent is extremely inflexible and cannot change its behavior when circumstances indicate that the implemented policy is no longer appropriate.
One possible way to create more powerful reflex agents is through a process of learning from examples.^(D) Rather than specifying a rule for how to behave, or supplying a reward function or a goal, a human can supply examples of decision problems along with the correct decision to make in each case. For example, we can create a French-to-English translation agent by supplying examples of French sentences along with the correct English translations. (Fortunately, the Canadian and EU parliaments generate millions of such examples every year.) Then a supervised learning algorithm processes the examples to produce a complex rule that takes any French sentence as input and produces an English translation. The current champion learning algorithm for machine translation is a form of so-called deep learning, and it produces a rule in the form of an artificial neural network with hundreds of layers and millions of parameters.^(D) Other deep learning algorithms have turned out to be very good at classifying the objects in images and recognizing the words in a speech signal. Machine translation, speech recognition, and visual object recognition are three of the most important subfields in AI, which is why there has been so much excitement about the prospects for deep learning.
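In toy form, producing a decision rule purely from labeled examples might look like the sketch below; the handful of examples and the nearest-neighbour rule are invented stand-ins for the parallel corpora and deep networks just described.

```python
# Toy supervised learner: produce a decision rule from (input, label) examples.
# The data and the nearest-neighbour rule are invented stand-ins for the
# parallel corpora and deep networks described above.

examples = [
    ((1.0, 0.2), "cat"),
    ((0.9, 0.1), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 1.0), "dog"),
]

def learned_rule(x):
    """Classify a new input by copying the label of its nearest training example."""
    def squared_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest_input, nearest_label = min(examples,
                                       key=lambda ex: squared_distance(ex[0], x))
    return nearest_label

print(learned_rule((0.95, 0.15)))  # -> cat
print(learned_rule((0.15, 0.95)))  # -> dog
```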
One can argue almost endlessly about whether deep learning will lead directly to human-level AI. My own view, which I will explain later, is that it falls far short of what is needed,^(D) but for now let’s focus on how such methods fit into the standard model of AI, where an algorithm optimizes a fixed objective. For deep learning, or indeed for any supervised learning algorithm, the “purpose put into the machine” is usually to maximize predictive accuracy—or, equivalently, to minimize error. That much seems obvious, but there are actually two ways to understand it, depending on the role that the learned rule is going to play in the overall system. The first role is a purely perceptual role: the network processes the sensory input and provides information to the rest of the system in the form of probability estimates for what it’s perceiving. If it’s an object recognition algorithm, maybe it says “70 percent probability it’s a Norfolk terrier, 30 percent it’s a Norwich terrier.”⁷⁰ The rest of the system decides on an external action to take based on this information. This purely perceptual objective is unproblematic in the following sense: even a “safe” superintelligent AI system, as opposed to an “unsafe” one based on the standard model, needs to have its perception system as accurate and well calibrated as possible.
The problem comes when we move from a purely perceptual role to a decision-making role. For example, a trained network for recognizing objects might automatically generate labels for images on a Web site or social-media account. Posting those labels is an action with consequences. Each labeling action requires an actual classification decision, and unless every decision is guaranteed to be perfect, the human designer must supply a loss function that spells out the cost of misclassifying an object of type A as an object of type B. And that’s how Google had an unfortunate problem with gorillas. In 2015, a software engineer named Jacky Alciné complained on Twitter that the Google Photos image-labeling service had labeled him and his friend as gorillas.⁷¹ While it is unclear how exactly this error occurred, it is almost certain that Google’s machine learning algorithm was designed to minimize a fixed, definite loss function—moreover, one that assigned equal cost to any error. In other words, it assumed that the cost of misclassifying a person as a gorilla was the same as the cost of misclassifying a Norfolk terrier as a Norwich terrier. Clearly, this is not Google’s (or their users’) true loss function, as was illustrated by the public relations disaster that ensued.
Since there are thousands of possible image labels, there are millions of potentially distinct costs associated with misclassifying one category as another. Even if it had tried, Google would have found it very difficult to specify all these numbers up front. Instead, the right thing to do would be to acknowledge the uncertainty about the true misclassification costs and to design a learning and classification algorithm that was suitably sensitive to costs and uncertainty about costs. Such an algorithm might occasionally ask the Google designer questions such as “Which is worse, misclassifying a dog as a cat or misclassifying a person as an animal?” In addition, if there is significant uncertainty about misclassification costs, the algorithm might well refuse to label some images.
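One way to picture what "sensitive to costs and uncertainty about costs" could mean is the sketch below, which chooses the label minimizing expected loss under an explicit loss matrix and refuses to answer when even the best label is too costly in expectation; the probabilities, costs, and refusal threshold are invented and do not describe Google's actual system.

```python
# Sketch of cost-sensitive labelling with the option to refuse.
# The class probabilities, loss matrix, and refusal cost are invented
# for illustration; they do not describe any deployed system.

probs = {"person": 0.55, "gorilla": 0.40, "dog": 0.05}   # classifier's output

# loss[true_class][predicted_label]: cost of labelling a true_class object that way.
loss = {
    "person":  {"person": 0,    "gorilla": 1000, "dog": 1000},
    "gorilla": {"person": 5,    "gorilla": 0,    "dog": 5},
    "dog":     {"person": 5,    "gorilla": 5,    "dog": 0},
}
REFUSAL_COST = 2   # cost of declining to label the image

def expected_loss(predicted):
    return sum(p * loss[true_class][predicted] for true_class, p in probs.items())

candidates = {label: expected_loss(label) for label in loss}
best_label, best_cost = min(candidates.items(), key=lambda kv: kv[1])

# With these numbers, even the cheapest label ("person", expected loss 2.25)
# costs more than refusing, so the system declines to label the image.
print("refuse to label" if best_cost > REFUSAL_COST else f"label: {best_label}")
```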
By early 2018, it was reported that Google Photos does refuse to classify a photo of a gorilla. Given a very clear image of a gorilla with two babies, it says, “Hmm . . . not seeing this clearly yet.”⁷²
I don’t wish to suggest that AI’s adoption of the standard model was a poor choice at the time. A great deal of brilliant work has gone into developing the various instantiations of the model in logical, probabilistic, and learning systems. Many of the resulting systems are very useful; as we will see in the next chapter, there is much more to come. On the other hand, we cannot continue to rely on our usual practice of ironing out the major errors in an objective function by trial and error: machines of increasing intelligence and increasingly global impact will not allow us that luxury.
3
HOW MIGHT AI PROGRESS IN THE FUTURE?
The Near Future
On May 3, 1997, a chess match began between Deep Blue, a chess computer built by IBM, and Garry Kasparov, the world chess champion and possibly the best human player in history. Newsweek billed the match as “The Brain’s Last Stand.” On May 11, with the match tied at 2½–2½, Deep Blue defeated Kasparov in the final game. The media went berserk. The market capitalization of IBM increased by $18 billion overnight. AI had, by all accounts, achieved a massive breakthrough.
From the point of view of AI research, the match represented no breakthrough at all. Deep Blue’s victory, impressive as it was, merely continued a trend that had been visible for decades. The basic design for chess-playing algorithms was laid out in 1950 by Claude Shannon,¹ with major improvements in the early 1960s. After that, the chess ratings of the best programs improved steadily, mainly as a result of faster computers that allowed programs to look further ahead. In 1994,² Peter Norvig and I charted the numerical ratings of the best chess programs from 1965 onwards, on a scale where Kasparov’s rating was 2805. The ratings started at 1400 in 1965 and improved in an almost perfect straight line for thirty years. Extrapolating the line forward from 1994 predicts that computers would be able to defeat Kasparov in 1997—exactly when it happened.
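A rough back-of-envelope reconstruction of that extrapolation, using only the endpoints given above (the arithmetic, not the underlying data, is supplied here):

```python
# Back-of-envelope check of the straight-line extrapolation described above:
# ratings of the best programs starting at 1400 in 1965 and rising linearly
# to reach Kasparov's 2805 in 1997 imply roughly 44 rating points per year.
start_year, start_rating = 1965, 1400
kasparov_rating, crossover_year = 2805, 1997
points_per_year = (kasparov_rating - start_rating) / (crossover_year - start_year)
print(f"implied improvement: about {points_per_year:.0f} rating points per year")
```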
For AI researchers, then, the real breakthroughs happened thirty or forty years before Deep Blue burst into the public’s consciousness. Similarly, deep convolutional networks existed, with all the mathematics fully worked out, more than twenty years before they began to create headlines.
The view of AI breakthroughs that the public gets from the media—stunning victories over humans, robots becoming citizens of Saudi Arabia, and so on—bears very little relation to what really happens in the world’s research labs. Inside the lab, research involves a lot of thinking and talking and writing mathematical formulas on whiteboards. Ideas are constantly being generated, abandoned, and rediscovered. A good idea—a real breakthrough—will often go unnoticed at the time and may only later be understood as having provided the basis for a substantial advance in AI, perhaps when someone reinvents it at a more convenient time. Ideas are tried out, initially on simple problems to show that the basic intuitions are correct and then on harder problems to see how well they scale up. Often, an idea will fail by itself to provide a substantial improvement in capabilities, and it has to wait for another idea to come along so that the combination of the two can demonstrate value.
All this activity is completely invisible from the outside. In the world beyond the lab, AI becomes visible only when the gradual accumulation of ideas and the evidence for their validity crosses a threshold: the point where it becomes worthwhile to invest money and engineering effort to create a new commercial product or an impressive demonstration. Then the media announce that a breakthrough has occurred.
One can expect, then, that many other ideas that have been gestating in the world’s research labs will cross the threshold of commercial applicability over the next few years. This will happen more and more frequently as the rate of commercial investment increases and as the world becomes more and more receptive to applications of AI. This chapter provides a sampling of what we can see coming down the pipe.
Along the way, I’ll mention some of the drawbacks of these technological advances. You will probably be able to think of many more, but don’t worry. I’ll get to those in the next chapter.
The AI ecosystem
In the beginning, the environment in which most computers operated was essentially formless and void: their only input came from punched cards and their only method of output was to print characters on a line printer. Perhaps for this reason, most researchers viewed intelligent machines as question-answerers; the view of machines as agents perceiving and acting in an environment did not become widespread until the 1980s.
The advent of the World Wide Web in the 1990s opened up a whole new universe for intelligent machines to play in. A new word, softbot, was coined to describe software “robots” that operate entirely in a software environment such as the Web. Softbots, or bots as they later became known, perceive Web pages and act by emitting sequences of characters, URLs, and so on.
AI companies mushroomed during the dot-com boom (1997–2000), providing core capabilities for search and e-commerce, including link analysis, recommendation systems, reputation systems, comparison shopping, and product categorization.
In the early 2000s, the widespread adoption of mobile phones with microphones, cameras, accelerometers, and GPS provided new access for AI systems to people’s daily lives; “smart speakers” such as the Amazon Echo, Google Home, and Apple HomePod have completed this process.
By around 2008, the number of objects connected to the Internet exceeded the number of people connected to the Internet—a transition that some point to as the beginning of the Internet of Things (IoT). Those things include cars, home appliances, traffic lights, vending machines, thermostats, quadcopters, cameras, environmental sensors, robots, and all kinds of material goods both in the manufacturing process and in the distribution and retail system. This provides AI systems with far greater sensory and control access to the real world.
Finally, improvements in perception have allowed AI-powered robots to move out of the factory, where they relied on rigidly constrained arrangements of objects, and into the real, unstructured, messy world, where their cameras have something interesting to look at.
Self-driving cars
In the late 1950s, John McCarthy imagined that an automated vehicle might one day take him to the airport. In 1987, Ernst Dickmanns demonstrated a self-driving Mercedes van on the autobahn in Germany; it was capable of staying in lane, following another car, changing lanes, and overtaking.³ More than thirty years later, we still don’t have a fully autonomous car, but it’s getting much closer. The focus of development has long since moved from academic research labs to large corporations. As of 2019, the best-performing test vehicles have logged millions of miles of driving on public roads (and billions of miles in driving simulators) without serious incident.⁴ Unfortunately, other autonomous and semi-autonomous vehicles have killed several people.⁵
Why has it taken so long to achieve safe autonomous driving? The first reason is that the performance requirements are exacting. Human drivers in the United States suffer roughly one fatal accident per one hundred million miles traveled, which sets a high bar. Autonomous vehicles, to be accepted, will need to be much better than that: perhaps one fatal accident per billion miles, or twenty-five thousand years of driving forty hours per week. The second reason is that one anticipated workaround—handing control to the human when the vehicle is confused or out of its safe operating conditions—simply doesn’t work. When the car is driving itself, humans quickly become disengaged from the immediate driving circumstances and cannot regain context quickly enough to take over safely. Moreover, nondrivers and taxi passengers who are in the back seat are in no position to drive the car if something goes wrong.
Current projects are aiming at SAE Level 4 autonomy,⁶ which means that the vehicle must at all times be capable of driving autonomously or stopping safely, subject to geographical limits and weather conditions. Because weather and traffic conditions can change, and because unusual circumstances can arise that a Level 4 vehicle cannot handle, a human has to be in the vehicle and ready to take over if needed. (Level 5—unrestricted autonomy—does not require a human driver but is even more difficult to achieve.) Level 4 autonomy goes far beyond the simple, reflex tasks of following white lines and avoiding obstacles. The vehicle has to assess the intent and probable future trajectories of all relevant objects, including objects that may not be visible, based on both current and past observations. Then, using lookahead search, the vehicle has to find a trajectory that optimizes some combination of safety and progress. Some projects are trying more direct approaches based on reinforcement learning (mainly in simulation, of course) and supervised learning from recordings of hundreds of human drivers, but these approaches seem unlikely to reach the required level of safety.
The potential benefits of fully autonomous vehicles are immense. Every year, 1.2 million people die in car accidents worldwide and tens of millions suffer serious injuries. A reasonable target for autonomous vehicles would be to reduce these numbers by a factor of ten. Some analyses also predict a vast reduction in transportation costs, parking structures, congestion, and pollution. Cities will shift from personal cars and large buses to ubiquitous shared-ride, autonomous electric vehicles, providing door-to-door service and feeding high-speed mass-transit connections between hubs.⁷ With costs as low as three cents per passenger mile, most cities would probably opt to provide the service for free—while subjecting riders to interminable barrages of advertising.
Of course, to reap all these benefits, the industry has to pay attention to the risks. If there are too many deaths attributed to poorly designed experimental vehicles, regulators may halt planned deployments or impose extremely stringent standards that might be unreachable for decades.⁸ And people might, of course, decide not to buy or ride in autonomous vehicles unless they are demonstrably safe. A 2018 poll revealed a significant decline in consumers’ level of trust in autonomous vehicle technology compared to 2016.⁹ Even if the technology is successful, the transition to widespread autonomy will be an awkward one: human driving skills may atrophy or disappear, and the reckless and antisocial act of driving a car oneself may be banned altogether.
Intelligent personal assistants
Most readers will by now have experienced the unintelligent personal assistant: the smart speaker that obeys purchase commands overheard on the television, or the cell phone chatbot that responds to “Call me an ambulance!” with “OK, from now on I’ll call you ‘Ann Ambulance.’” Such systems are essentially voice-mediated interfaces to applications and search engines; they are based largely on canned stimulus–response templates, an approach that dates back to the Eliza system in the mid-1960s.¹⁰
These early systems have shortcomings of three kinds: access, content, and context. Access shortcomings mean that they lack sensory awareness of what’s going on—for example, they might be able to hear what the user is saying but they can’t see who the user is talking to. Content shortcomings mean that they simply fail to understand the meaning of what the user is saying or texting, even if they have access to it. Context shortcomings mean that they lack the ability to keep track of and reason about the goals, activities, and relationships that constitute daily life.
Despite these shortcomings, smart speakers and cell phone assistants offer just enough value to the user to have entered the homes and pockets of hundreds of millions of people. They are, in a sense, Trojan horses for AI. Because they are there, embedded in so many lives, every tiny improvement in their capabilities is worth billions of dollars.
And so, improvements are coming thick and fast. Probably the most important is the elementary capacity to understand content—to know that “John’s in the hospital” is not just a prompt to say “I hope it’s nothing serious” but contains actual information that the user’s eight-year-old son is in a nearby hospital and may have a serious injury or illness. The ability to access email and text communications as well as phone calls and domestic conversations (through the smart speaker in the house) would give AI systems enough information to build a reasonably complete picture of the user’s life—perhaps even more information than might have been available to the butler working for a nineteenth-century aristocratic family or the executive assistant working for a modern-day CEO.
Raw information, of course, is not enough. To be really useful, an assistant also needs commonsense knowledge of how the world works: that a child in the hospital is not simultaneously at home; that hospital care for a broken arm seldom lasts for more than a day or two; that the child’s school will need to know of the expected absence; and so on. Such knowledge allows the assistant to keep track of things it does not observe directly—an essential skill for intelligent systems.
The capabilities described in the preceding paragraph are, I believe, feasible with existing technology for probabilistic reasoning,^(C) but this would require a very substantial effort to construct models of all the kinds of events and transactions that make up our daily lives. Up to now, these kinds of commonsense modeling projects have generally not been undertaken (except possibly in classified systems for intelligence analysis and military planning) because of the costs involved and the uncertain payoff. Now, however, projects like this could easily reach hundreds of millions of users, so the investment risks are lower and the potential rewards are much higher. Furthermore, access to large numbers of users allows the intelligent assistant to learn very quickly and fill in all the gaps in its knowledge.
Thus, one can expect to see intelligent assistants that will, for pennies a month, help users with managing an increasingly large range of daily activities: calendars, travel, household purchases, bill payment, children’s homework, email and call screening, reminders, meal planning, and—one can but dream—finding my keys. These skills will not be scattered across multiple apps. Instead, they will be facets of a single, integrated agent that can take advantage of the synergies available in what military people call the common operational picture.
The general design template for an intelligent assistant involves background knowledge about human activities, the ability to extract information from streams of perceptual and textual data, and a learning process to adapt the assistant to the user’s particular circumstances. The same general template can be applied to at least three other major areas: health, education, and finances. For these applications, the system needs to keep track of the state of the user’s body, mind, and bank account (broadly construed). As with assistants for daily life, the up-front cost of creating the necessary general knowledge in each of these three areas amortizes across billions of users.
In the case of health, for example, we all have roughly the same physiology, and detailed knowledge of how it works has already been encoded in machine-readable form.¹¹ Systems will adapt to your individual characteristics and lifestyle, providing preventive suggestions and early warning of problems.
In the area of education, the promise of intelligent tutoring systems was recognized even in the 1960s,¹² but real progress has been a long time coming. The primary reasons are shortcomings of content and access: most tutoring systems don’t understand the content of what they purport to teach, nor can they engage in two-way communication with their pupils through speech or text. (I imagine myself teaching string theory, which I don’t understand, in Laotian, which I don’t speak.) Recent progress in speech recognition means that automated tutors can, at last, communicate with pupils who are not yet fully literate. Moreover, probabilistic reasoning technology can now keep track of what students know and don’t know¹³ and can optimize the delivery of instruction to maximize learning. The Global Learning XPRIZE competition, which started in 2014, offered $15 million for “open-source, scalable software that will enable children in developing countries to teach themselves basic reading, writing and arithmetic within 15 months.” Results from the winners, Kitkit School and onebillion, suggest that the goal has largely been achieved.
In the area of personal finance, systems will keep track of investments, income streams, obligatory and discretionary expenditures, debt, interest payments, emergency reserves, and so on, in much the same way that financial analysts keep track of the finances and prospects of corporations. Integration with the agent that handles daily life will provide an even finer-grained understanding, perhaps even ensuring that the children get their pocket money minus any mischief-related deductions. One can expect to receive the quality of day-to-day financial advice previously reserved for the ultra-rich.
If your privacy alarm bells weren’t ringing as you read the preceding paragraphs, you haven’t been keeping up with the news. There are, however, multiple layers to the privacy story. First, can a personal assistant really be useful if it knows nothing about you? Probably not. Second, can personal assistants be really useful if they cannot pool information from multiple users to learn more about people in general and people who are similar to you? Probably not. So, don’t those two things imply that we have to give up our privacy to benefit from AI in our daily lives? No. The reason is that learning algorithms can operate on encrypted data using the techniques of secure multiparty computation, so that users can benefit from pooling without compromising privacy in any way.¹⁴ Will software providers adopt privacy-preserving technology voluntarily, without legislative encouragement? That remains to be seen. What seems inevitable, however, is that users will trust a personal assistant only if its primary obligation is to the user rather than to the corporation that produced it.
Smart homes and domestic robots
The smart home concept has been investigated for several decades. In 1966, James Sutherland, an engineer at Westinghouse, started collecting surplus computer parts to build ECHO, the first smart-home controller.¹⁵ Unfortunately, ECHO weighed eight hundred pounds, consumed 3.5 kilowatts, and managed just three digital clocks and the TV antenna. Subsequent systems required users to master control interfaces of mind-boggling complexity. Unsurprisingly, they never caught on.
Beginning in the 1990s, several ambitious projects attempted to design houses that managed themselves with minimal human intervention, using machine learning to adapt to the lifestyles of the occupants. To make these experiments meaningful, real people had to live in the houses. Unfortunately, the frequency of erroneous decisions made the systems worse than useless—the occupants’ quality of life decreased rather than increased. For example, inhabitants of the 2003 MavHome project¹⁶ at Washington State University often had to sit in the dark if their visitors stayed later than the usual bedtime.¹⁷ As with the unintelligent personal assistant, such failings result from inadequate sensory access to the activities of the occupants and the inability to understand and keep track of what’s happening in the house.
A truly smart home equipped with cameras and microphones—and the requisite perceptual and reasoning abilities—can understand what the occupants are doing: visiting, eating, sleeping, watching TV, reading, exercising, getting ready for a long trip, or lying helpless on the floor after a fall. By coordinating with the intelligent personal assistant, the home can have a pretty good idea of who will be in or out of the house at what time, who’s eating where, and so on. This understanding allows it to manage heating, lighting, window blinds, and security systems, to send timely reminders, and to alert users or emergency services when a problem arises. Some newly built apartment complexes in the United States and Japan are already incorporating technology of this kind.¹⁸
The value of the smart home is limited because of its actuators: much simpler systems (timed thermostats and motion-sensitive lights and burglar alarms) can deliver a lot of the same functionality in ways that are perhaps more predictable, if less context sensitive. The smart home cannot fold the laundry, clear the dishes, or pick up the newspaper. It really wants a physical robot to do its bidding.
FIGURE 5: (left) BRETT folding towels; (right) the Boston Dynamics SpotMini robot opening a door.
It may not have too long to wait. Already, robots have demonstrated many of the required skills. In the Berkeley lab of my colleague Pieter Abbeel, BRETT (the Berkeley Robot for the Elimination of Tedious Tasks) has been folding piles of towels since 2011, while the SpotMini robot from Boston Dynamics can climb stairs and open doors (figure 5). Several companies are already building cooking robots, although they require special, enclosed setups and pre-cut ingredients and won’t work in an ordinary kitchen.¹⁹
Of the three basic physical capabilities required for a useful domestic robot—perception, mobility, and dexterity—the last is the most problematic. As Stefanie Tellex, a robotics professor at Brown University, puts it, “Most robots can’t pick up most objects most of the time.” This is partly a problem of tactile sensing, partly a manufacturing problem (dexterous hands are currently very expensive to build), and partly an algorithmic problem: we don’t yet have a good understanding of how to combine sensing and control to grasp and manipulate the huge variety of objects in a typical household. There are dozens of grasp types just for rigid objects and there are thousands of distinct manipulation skills, such as shaking exactly two pills out of a bottle, peeling the label off a jam jar, spreading hard butter on soft bread, or lifting one strand of spaghetti from the pot with a fork to see if it’s ready.
It seems likely that the tactile sensing and hand construction problems will be solved by 3D printing, which is already being used by Boston Dynamics for some of the more complex parts of their Atlas humanoid robot. Robot manipulation skills are advancing rapidly, thanks in part to deep reinforcement learning.²⁰ The final push—putting all this together into something that begins to approximate the awesome physical skills of movie robots—is likely to come from the rather unromantic warehouse industry. Just one company, Amazon, employs several hundred thousand people who pick products out of bins in giant warehouses and dispatch them to customers. From 2015 through 2017 Amazon ran an annual “Picking Challenge” to accelerate the development of robots capable of doing this task.²¹ There is still some distance to go, but when the core research problems are solved—probably within a decade—one can expect a very rapid rollout of highly capable robots. Initially they will work in warehouses, then in other commercial applications such as agriculture and construction, where the range of tasks and objects is fairly predictable. We might also see them quite soon in the retail sector doing tasks such as stocking supermarket shelves and refolding clothes.
The first to really benefit from robots in the home will be the elderly and infirm, for whom a helpful robot can provide a degree of independence that would otherwise be impossible. Even if the robot has a limited repertoire of tasks and only rudimentary comprehension of what’s going on, it can still be very useful. On the other hand, the robot butler, managing the household with aplomb and anticipating its master’s every wish, is still some way off—it requires something approaching the generality of human-level AI.
Intelligence on a global scale
The development of basic capabilities for understanding speech and text will allow intelligent personal assistants to do things that human assistants can already do (but they will be doing it for pennies per month instead of thousands of dollars per month). Basic speech and text understanding also enable machines to do things that no human can do—not because of the depth of understanding but because of its scale. For example, a machine with basic reading capabilities will be able to read everything the human race has ever written by lunchtime, and then it will be looking around for something else to do.²² With speech recognition capabilities, it could listen to every radio and television broadcast before teatime. For comparison, it would take two hundred thousand full-time humans just to keep up with the world’s current level of print publication (let alone all the written material from the past) and another sixty thousand to listen to current broadcasts.²³
Such a system, if it could extract even simple factual assertions and integrate all this information across all languages, would represent an incredible resource for answering questions and revealing patterns—probably far more powerful than search engines, which are currently valued at around $1 trillion. Its research value for fields such as history and sociology would be inestimable.
Of course, it would also be possible to listen to all the world’s phone calls (a job that would require about twenty million people). There are certain clandestine agencies that would find this valuable. Some of them have been doing simple kinds of large-scale machine listening, such as spotting key words in conversations, for many years, and have now made the transition to transcribing entire conversations into searchable text.²⁴ Transcriptions are certainly useful, but not nearly as useful as simultaneous understanding and content integration of all conversations.
Another “superpower” that is available to machines is to see the entire world at once. Roughly speaking, satellites image the entire world every day at an average resolution of around fifty centimeters per pixel. At this resolution, every house, ship, car, cow, and tree on Earth is visible. Well over thirty million full-time employees would be needed to examine all these images;²⁵ so, at present, no human ever sees the vast majority of satellite data. Computer vision algorithms could process all this data to produce a searchable database of the whole world, updated daily, as well as visualizations and predictive models of economic activities, changes in vegetation, migrations of animals and people, the effects of climate change, and so on. Satellite companies such as Planet and DigitalGlobe are busy making this idea a reality.
With the possibility of sensing on a global scale comes the possibility of decision making on a global scale. For example, from global satellite data feeds, it should be possible to create detailed models for managing the global environment, predicting the effects of environmental and economic interventions, and providing the necessary analytical inputs to the UN’s sustainable development goals.²⁶ We are already seeing “smart city” control systems that aim to optimize traffic management, transit, trash collection, road repairs, environmental maintenance, and other functions for the benefit of citizens, and these may be extended to the country level. Until recently, this degree of coordination could be achieved only by huge, inefficient, bureaucratic hierarchies of humans; inevitably, these will be replaced by mega-agents that take care of more and more aspects of our collective lives. Along with this, of course, comes the possibility of privacy invasion and social control on a global scale, to which I return in the next chapter.
When Will Superintelligent AI Arrive?
I am often asked to predict when superintelligent AI will arrive, and I usually refuse to answer. There are three reasons for this. First, there is a long history of such predictions going wrong.²⁷ For example, in 1960, the AI pioneer and Nobel Prize–winning economist Herbert Simon wrote, “Technologically . . . machines will be capable, within twenty years, of doing any work a man can do.”²⁸ In 1967, Marvin Minsky, a co-organizer of the 1956 Dartmouth workshop that started the field of AI, wrote, “Within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm—the problem of creating ‘artificial intelligence’ will be substantially solved.”²⁹
A second reason for declining to provide a date for superintelligent AI is that there is no clear threshold that will be crossed. Machines already exceed human capabilities in some areas. Those areas will broaden and deepen, and it is likely that there will be superhuman general knowledge systems, superhuman biomedical research systems, superhuman dexterous and agile robots, superhuman corporate planning systems, and so on well before we have a completely general superintelligent AI system. These “partially superintelligent” systems will, individually and collectively, begin to pose many of the same issues that a generally intelligent system would.
A third reason for not predicting the arrival of superintelligent AI is that it is inherently unpredictable. It requires “conceptual breakthroughs,” as noted by John McCarthy in a 1977 interview.³⁰ McCarthy went on to say, “What you want is 1.7 Einsteins and 0.3 of the Manhattan Project, and you want the Einsteins first. I believe it’ll take five to 500 years.” In the next section I’ll explain what some of the conceptual breakthroughs are likely to be. Just how unpredictable are they? Probably as unpredictable as Szilard’s invention of the nuclear chain reaction a few hours after Rutherford’s declaration that it was completely impossible.
Once, at a meeting of the World Economic Forum in 2015, I answered the question of when we might see superintelligent AI. The meeting was under Chatham House rules, which means that no remarks may be attributed to anyone present at the meeting. Even so, out of an excess of caution, I prefaced my answer with “Strictly off the record. . . .” I suggested that, barring intervening catastrophes, it would probably happen in the lifetime of my children—who were still quite young and would probably have much longer lives, thanks to advances in medical science, than many of those at the meeting. Less than two hours later, an article appeared in the Daily Telegraph citing Professor Russell’s remarks, complete with images of rampaging Terminator robots. The headline was ‘SOCIOPATHIC’ ROBOTS COULD OVERRUN THE HUMAN RACE WITHIN A GENERATION.
My timeline of, say, eighty years is considerably more conservative than that of the typical AI researcher. Recent surveys³¹ suggest that most active researchers expect human-level AI to arrive around the middle of this century. Our experience with nuclear physics suggests that it would be prudent to assume that progress could occur quite quickly and to prepare accordingly. If just one conceptual breakthrough were needed, analogous to Szilard’s idea for a neutron-induced nuclear chain reaction, superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I am, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one.
Conceptual Breakthroughs to Come
The problem of creating general-purpose, human-level AI is far from solved. Solving it is not a matter of spending money on more engineers, more data, and bigger computers. Some futurists produce charts that extrapolate the exponential growth of computing power into the future based on Moore’s law, showing the dates when machines will become more powerful than insect brains, mouse brains, human brains, all human brains put together, and so on.³² These charts are meaningless because, as I have already said, faster machines just give you the wrong answer more quickly. If one were to collect AI’s leading experts into a single team with unlimited resources, with the goal of creating an integrated, human-level intelligent system by combining all our best ideas, the result would be failure. The system would break in the real world. It wouldn’t understand what was going on; it wouldn’t be able to predict the consequences of its actions; it wouldn’t understand what people want in any given situation; and so it would do ridiculously stupid things.
By understanding how the system would break, AI researchers are able to identify the problems that have to be solved—the conceptual breakthroughs that are needed—in order to reach human-level AI. I will now describe some of these remaining problems. Once they are solved, there may be more, but not very many more.
Language and common sense
Intelligence without knowledge is like an engine without fuel. Humans acquire a vast amount of knowledge from other humans: it is passed down through generations in the form of language. Some of it is factual: Obama became president in 2009, the density of copper is 8.92 grams per cubic centimeter, the code of Ur-Nammu set out punishments for various crimes, and so on. A great deal of knowledge resides in the language itself—in the concepts that it makes available. President, 2009, density, copper, gram, centimeter, crime, and the rest all carry with them a vast amount of information, which represents the extracted essence of the processes of discovery and organization that led them to be in the language in the first place.
Take, for example, copper, which refers to some collection of atoms in the universe, and compare it to arglebarglium, which is my name for an equally large collection of entirely randomly selected atoms in the universe. There are many general, useful, and predictive laws one can discover about copper—about its density, conductivity, malleability, melting point, stellar origin, chemical compounds, practical uses, and so on; in comparison, there is essentially nothing that can be said about arglebarglium. An organism equipped with a language composed of words like arglebarglium would be unable to function, because it would never discover the regularities that would allow it to model and predict its universe.
A machine that really understands human language would be in a position to quickly acquire vast quantities of human knowledge, allowing it to bypass tens of thousands of years of learning by the more than one hundred billion people who have lived on Earth. It seems simply impractical to expect a machine to rediscover all this from scratch, starting from raw sensory data.
At present, however, natural language technology is not up to the task of reading and understanding millions of books—many of which would stump even a well-educated human. Systems such as IBM’s Watson, which famously defeated two human champions of the Jeopardy! quiz game in 2011, can extract simple information from clearly stated facts but cannot build complex knowledge structures from text; nor can they answer questions that require extensive chains of reasoning with information from multiple sources. For example, the task of reading all available documents up to the end of 1973 and assessing (with explanations) the probable outcome of the Watergate impeachment process against then president Nixon would be well beyond the current state of the art.
There are serious efforts underway to deepen the level of language analysis and information extraction. For example, Project Aristo at the Allen Institute for AI aims to build systems that can pass school science exams after reading textbooks and study guides.³³ Here’s a question from a fourth-grade test:³⁴
Fourth graders are planning a roller-skate race. Which surface would be the best for this race?
(A) gravel
(B) sand
(C) blacktop
(D) grass
A machine faces at least two sources of difficulty in answering this question. The first is the classical language-understanding problem of working out what the sentences say: analyzing the syntactic structure, identifying the meanings of words, and so on. (Try this for yourself: use an online translation service to translate the sentences into an unfamiliar language, then use a dictionary for that language to try translating them back to English.) The second is the need for commonsense knowledge: to work out that a “roller-skate race” is probably a race between people wearing roller skates (on their feet) rather than a race between roller skates, to understand that the “surface” is what the skaters will skate on rather than what the spectators will sit on, to know what “best” means in the context of a surface for a race, and so on. Think how the answer might change if we replaced “fourth graders” with “sadistic army boot-camp trainers.”
One way to summarize the difficulty is to say that reading requires knowledge and knowledge (largely) comes from reading. In other words, we face a classic chicken-and-egg situation. We might hope for a bootstrapping process, whereby the system reads some easy text, acquires some knowledge, uses that to read more difficult text, acquires still more knowledge, and so on. Unfortunately, what tends to happen is the opposite: the knowledge acquired is mostly erroneous, which causes errors in reading, which results in more erroneous knowledge, and so on.
For example, the NELL (Never-Ending Language Learning) project at Carnegie Mellon University is probably the most ambitious language-bootstrapping project currently underway. From 2010 to 2018, NELL acquired over 120 million beliefs by reading English text on the Web.³⁵ Some of these beliefs are accurate, such as the beliefs that the Maple Leafs play hockey and won the Stanley Cup. In addition to facts, NELL acquires new vocabulary, categories, and semantic relationships all the time. Unfortunately, NELL has confidence in only 3 percent of its beliefs and relies on human experts to clean out false or meaningless beliefs on a regular basis—such as its beliefs that “Nepal is a country also known as United States” and “value is an agricultural product that is usually cut into basis.”
I suspect that there may be no single breakthrough that turns the downward spiral into an upward spiral. The basic bootstrapping process seems right: a program that knows enough facts can figure out which fact a novel sentence is referring to, and thereby learns a new textual form for expressing facts—which then lets it discover more facts, and so the process continues. (Sergey Brin, the co-founder of Google, published an important paper on the bootstrapping idea in 1998.³⁶) Priming the pump by supplying a good deal of manually encoded knowledge and linguistic information would certainly help. Increasing the sophistication of the representation of facts—allowing for complex events, causal relationships, beliefs and attitudes of others, and so on—and improving the handling of uncertainty about word meanings and sentence meanings may eventually result in a self-reinforcing rather than self-extinguishing process of learning.
Cumulative learning of concepts and theories
Approximately 1.4 billion years ago and 8.2 sextillion miles away, two black holes, one twelve million times the mass of the Earth and the other ten million, came close enough to begin orbiting each other. Gradually losing energy, they spiraled closer and closer to each other and faster and faster, reaching an orbital frequency of 250 times per second at a distance of 350 kilometers before finally colliding and merging.³⁷ In the last few milliseconds, the rate of energy emission in the form of gravitational waves was fifty times larger than the total energy output of all the stars in the universe. On September 14, 2015, those gravitational waves arrived at the Earth. They alternately expanded and compressed space itself by a factor of about one in 2.5 sextillion, equivalent to changing the distance to Proxima Centauri (4.4 light years) by the width of a human hair.
Fortunately, two days earlier, the Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory) detectors in Washington and Louisiana had been switched on. Using laser interferometry, they were able to measure the minuscule distortion of space; using calculations based on Einstein’s theory of general relativity, the LIGO researchers had predicted—and were therefore looking for—the exact shape of the gravitational waveform expected from such an event.³⁸
This was possible because of the accumulation and communication of knowledge and concepts by thousands of people across centuries of observation and research. From Thales of Miletus rubbing amber with wool and observing the static charge buildup, through Galileo dropping rocks from the Leaning Tower of Pisa, to Newton seeing an apple fall from a tree, and on through thousands more observations, humanity has gradually accumulated layer upon layer of concepts, theories, and devices: mass, velocity, acceleration, force, Newton’s laws of motion and gravitation, orbital equations, electrical phenomena, atoms, electrons, electric fields, magnetic fields, electromagnetic waves, special relativity, general relativity, quantum mechanics, semiconductors, lasers, computers, and so on.
Now, in principle we can understand this process of discovery as a mapping from all the sensory data ever experienced by all humans to a very complex hypothesis about the sensory data experienced by the LIGO scientists on September 14, 2015, as they watched their computer screens. This is the purely data-driven view of learning: data in, hypothesis out, black box in between. If it could be done, it would be the apotheosis of the “big data, big network” deep learning approach, but it cannot be done. The only plausible idea we have for how intelligent entities could achieve such a stupendous feat as detecting the merger of two black holes is that prior knowledge of physics, combined with the observational data from their instruments, allowed the LIGO scientists to infer the occurrence of the merger event. Moreover, this prior knowledge was itself the result of learning with prior knowledge—and so on, all the way back through history. Thus, we have a roughly cumulative picture of how intelligent entities can build predictive capabilities, with knowledge as the building material.
I say roughly because, of course, science has taken a few wrong turns over the centuries, temporarily pursuing illusory notions such as phlogiston and the luminiferous aether. But we know for a fact that the cumulative picture is what actually happened, in the sense that scientists all along the way wrote down their findings and theories in books and papers. Later scientists had access only to these forms of explicit knowledge, and not to the original sensory experiences of earlier, long-dead generations. Because they are scientists, the members of the LIGO team understood that all the pieces of knowledge they used, including Einstein’s theory of general relativity, are (and always will be) in their probationary period and could be falsified by experiment. As it turned out, the LIGO data provided strong confirmation for general relativity as well as further evidence that the graviton—a hypothesized particle that mediates the force of gravity—is massless.
We are a very long way from being able to create machine learning systems that are capable of matching or exceeding the capacity for cumulative learning and discovery exhibited by the scientific community—or by ordinary human beings in their own lifetimes.³⁹ Deep learning systems^(D) are mostly data driven: at best, we can “wire in” some very weak forms of prior knowledge in the structure of the network. Probabilistic programming systems^(C) do allow for prior knowledge in the learning process, as expressed in the structure and vocabulary of the probabilistic knowledge base, but we do not yet have effective methods for generating new concepts and relationships and using them to expand such a knowledge base.
The difficulty is not one of finding hypotheses that provide a good fit to data; deep learning systems can find hypotheses that are a good fit to image data, and AI researchers have built symbolic learning programs able to recapitulate many historical discoveries of quantitative scientific laws.⁴⁰ Learning in an autonomous intelligent agent requires much more than this.
First, what should be included in the “data” from which predictions are made? For example, in the LIGO experiment, the model for predicting the amount that space stretches and shrinks when a gravitational wave arrives takes into account the masses of the colliding black holes, the frequency of their orbits, and so on, but it doesn’t take into account the day of the week or the occurrence of Major League baseball games. On the other hand, a model for predicting traffic on the San Francisco Bay Bridge takes into account the day of the week and the occurrence of Major League baseball games but ignores the masses and orbital frequencies of colliding black holes. Similarly, programs that learn to recognize the types of objects in images use the pixels as input, whereas a program that learns to estimate the value of an antique object would also want to know what it was made of, who made it and when, its history of usage and ownership, and so on. Why is this? Obviously, it’s because we humans already know something about gravitational waves, traffic, visual images, and antiques. We use this knowledge to decide which inputs are needed for predicting a specific output. This is called feature engineering, and doing it well requires a good understanding of the specific prediction problem.
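To make the idea concrete, here is a minimal sketch (in Python, with feature names invented purely for the example) of what hand-engineered input sets for two different prediction problems might look like:

```python
# Illustrative only: hand-chosen ("engineered") input features for two
# different prediction problems. The specific feature names are invented.

bridge_traffic_features = [
    "day_of_week",
    "time_of_day",
    "baseball_game_today",   # local events that add traffic
    "recent_accidents",
    "weather",
]

antique_value_features = [
    "material",
    "maker",
    "year_made",
    "usage_and_ownership_history",
]

# A learning algorithm would then be given only the inputs relevant to
# its particular prediction problem, for example:
#   learn(target="bridge_travel_time", inputs=bridge_traffic_features, data=...)
#   learn(target="antique_price",      inputs=antique_value_features,  data=...)
```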
Of course, a real intelligent machine cannot rely on human feature engineers showing up every time there is something new to learn. It will have to work out for itself what constitutes a reasonable hypothesis space for a learning problem. Presumably, it will do this by bringing to bear a wide range of relevant knowledge in various forms, but at present we have only rudimentary ideas about how to do this.⁴¹ Nelson Goodman’s Fact, Fiction, and Forecast⁴²—written in 1954 and perhaps one of the most important and underappreciated books on machine learning—suggests a kind of knowledge called an overhypothesis, because it helps to define what the space of reasonable hypotheses might be. In the case of traffic prediction, for example, the relevant overhypothesis would be that the day of the week, time of day, local events, recent accidents, holidays, transit delays, weather, and sunrise and sunset times can influence traffic conditions. (Notice that you can figure out this overhypothesis from your own background knowledge of the world, without being a traffic expert.) An intelligent learning system can accumulate and use knowledge of this kind to help formulate and solve new learning problems.
Second, and perhaps more important, is the cumulative generation of new concepts such as mass, acceleration, charge, electron, and gravitational force. Without these concepts, scientists (and ordinary people) would have to interpret their universe and make predictions on the basis of raw perceptual inputs. Instead, Newton was able to work with concepts of mass and acceleration developed by Galileo and others; Rutherford could determine that the atom was composed of a dense, positively charged nucleus surrounded by electrons because the concept of an electron had already been developed (by numerous researchers in small steps) in the late nineteenth century; indeed, all scientific discoveries rely on layer upon layer of concepts that stretch back through time and human experience.
In the philosophy of science, particularly in the early twentieth century, it was not uncommon to see the discovery of new concepts attributed to the three ineffable I’s: intuition, insight, and inspiration. All these were considered resistant to any rational or algorithmic explanation. AI researchers, including Herbert Simon,⁴³ have objected strongly to this view. Put simply, if a machine learning algorithm can search in a space of hypotheses that includes the possibility of adding definitions for new terms not present in the input, then the algorithm can discover new concepts.
For example, suppose that a robot is trying to learn the rules of backgammon by watching people playing the game. It observes how they roll the dice and notices that sometimes players move three or four pieces rather than one or two and that this happens after a roll of 1-1, 2-2, 3-3, 4-4, 5-5, or 6-6. If the program can add a new concept of doubles, defined by equality between the two dice, it can express the same predictive theory much more concisely. It is a straightforward process, using methods such as inductive logic programming,⁴⁴ to create programs that propose new concepts and definitions in order to identify theories that are both accurate and concise.
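As a minimal sketch of the idea—this is not real inductive logic programming machinery, just an illustration—introducing a defined term lets the same regularity be stated once instead of being enumerated case by case:

```python
# Sketch only: how inventing a new concept ("doubles") compresses a theory.

# Without the concept, the observed rule must enumerate every case:
def may_move_up_to_four_pieces_v1(die1, die2):
    return (die1, die2) in [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]

# A learner that can add definitions for new terms might propose:
def doubles(die1, die2):
    return die1 == die2

# ...and then restate the same predictive theory far more concisely:
def may_move_up_to_four_pieces_v2(die1, die2):
    return doubles(die1, die2)
```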
At present, we know how to do this for relatively simple cases, but for more complex theories the number of possible new concepts that could be introduced becomes simply enormous. This makes the recent success of deep learning methods in computer vision all the more intriguing. The deep networks usually succeed in finding useful intermediate features such as eyes, legs, stripes, and corners, even though they are using very simple learning algorithms. If we can understand better how this happens, we can apply the same approach to learning new concepts in the more expressive languages needed for science. This by itself would be a huge boon to humanity as well as a significant step towards general-purpose AI.
Discovering actions
Intelligent behavior over long time scales requires the ability to plan and manage activity hierarchically, at multiple levels of abstraction—all the way from doing a PhD (one trillion actions) to a single motor control command sent to one finger as part of typing a single character in the application cover letter.
Our activities are organized into complex hierarchies with dozens of levels of abstraction. These levels and the actions they contain are a key part of our civilization and are handed down through generations via our language and practices. For example, actions such as catching a wild boar and applying for a visa and buying a plane ticket may involve millions of primitive actions, but we can think about them as single units because they are already in the “library” of actions that our language and culture provides and because we know (roughly) how to do them.
Once they are in the library, we can string these high-level actions together into still higher-level actions, such as having a tribal feast for the summer solstice or doing archaeological research for a summer in a remote part of Nepal. Trying to plan such activities from scratch, starting with the lowest-level motor control steps, would be completely hopeless because such activities involve millions or billions of steps, many of which are very unpredictable. (Where will the wild boar be found, and which way will he run?) With suitable high-level actions in the library, on the other hand, one need plan only a dozen or so steps, because each such step is a large piece of the overall activity. This is something that even our feeble human brains can manage—but it gives us the “superpower” of planning over long time scales.
There was a time when these actions didn’t exist as such—for example, to obtain the right to a plane journey in 1910 would have required a long, involved, and unpredictable process of research, letter writing, and negotiation with various aeronautical pioneers. Other actions recently added to the library include emailing, googling, and ubering. As Alfred North Whitehead wrote in 1911, “Civilization advances by extending the number of important operations which we can perform without thinking about them.”⁴⁵

FIGURE 6: Saul Steinberg’s View of the World from 9th Avenue, 1976, first published as a cover of The New Yorker magazine.
Saul Steinberg’s famous cover for The New Yorker (figure 6) brilliantly shows, in spatial form, how an intelligent agent manages its own future. The very immediate future is extraordinarily detailed—in fact, my brain has already loaded up the specific motor control sequences for typing the next few words. Looking a bit further ahead, there is less detail—my plan is to finish this section, have lunch, write some more, and watch France play Croatia in the final of the World Cup. Still further ahead, my plans are larger but vaguer: move back from Paris to Berkeley in early August, teach a graduate course, and finish this book. As one moves through time, the future moves closer to the present and the plans for it become more detailed, while new, vague plans may be added to the distant future. Plans for the immediate future become so detailed that they are executable directly by the motor control system.
At present we have only some pieces of this overall picture in place for AI systems. If the hierarchy of abstract actions is provided—including knowledge of how each abstract action can be refined into a subplan composed of more concrete actions—then we have algorithms that can construct complex plans to achieve specific goals. There are algorithms that can execute abstract, hierarchical plans in such a way that the agent always has a primitive, physical action “ready to go,” even if actions in the future are still at an abstract level and not yet executable.
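A toy sketch, with an invented action library, of what refining an abstract action into more concrete steps can look like (real hierarchical planners are considerably more elaborate):

```python
# Toy illustration of hierarchical refinement; the action library is invented.
# Each abstract action maps to a subplan of more concrete actions;
# actions not in the library are treated as primitive and directly executable.
refinements = {
    "attend_conference": ["book_flight", "travel_to_airport", "fly", "check_in_hotel"],
    "book_flight":       ["open_booking_site", "choose_flight", "pay"],
    "travel_to_airport": ["order_taxi", "ride_taxi"],
}

def refine(action):
    """Expand an action into a sequence of primitive steps, depth-first."""
    if action not in refinements:        # already primitive
        return [action]
    steps = []
    for subaction in refinements[action]:
        steps.extend(refine(subaction))
    return steps

print(refine("attend_conference"))
# ['open_booking_site', 'choose_flight', 'pay', 'order_taxi', 'ride_taxi',
#  'fly', 'check_in_hotel']
```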
The main missing piece of the puzzle is a method for constructing the hierarchy of abstract actions in the first place. For example, is it possible to start from scratch with a robot that knows only that it can send various electric currents to various motors and have it discover for itself the action of standing up? It’s important to understand that I’m not asking whether we can train a robot to stand up, which can be done simply by applying reinforcement learning with a reward for the robot’s head being farther away from the ground.⁴⁶ Training a robot to stand up requires that the human trainer already knows what standing up means, so that the right reward signal can be defined. What we want is for the robot to discover for itself that standing up is a thing—a useful abstract action, one that achieves the precondition (being upright) for walking or running or shaking hands or seeing over a wall and so forms part of many abstract plans for all kinds of goals. Similarly, we want the robot to discover actions such as moving from place to place, picking up objects, opening doors, tying knots, cooking dinner, finding my keys, building houses, and many other actions that have no names in any human language because we humans have not discovered them yet.
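For contrast, the trainer-supplied reward just mentioned might be as simple as the following sketch (names invented); the point is that a human, not the robot, has decided that head height is the thing to reward:

```python
# Sketch of a human-designed reward signal for "standing up" (names invented).
def standing_reward(robot_state):
    # The trainer has already decided what standing up means:
    # reward the height of the robot's head above the ground.
    return robot_state["head_height_metres"]
```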
I believe this capability is the most important step needed to reach human-level AI. It would, to borrow Whitehead’s phrase again, extend the number of important operations that AI systems can perform without thinking about them. Numerous research groups around the world are hard at work on solving the problem. For example, DeepMind’s 2018 paper showing human-level performance on Quake III Arena Capture the Flag claims that their learning system “constructs a temporally hierarchical representation space in a novel way to promote . . . temporally coherent action sequences.”⁴⁷ (I’m not completely sure what this means, but it certainly sounds like progress towards the goal of inventing new high-level actions.) I suspect that we do not yet have the complete answer, but this is an advance that could occur any moment, just by putting some existing ideas together in the right way.
Intelligent machines with this capability would be able to look further into the future than humans can. They would also be able to take into account far more information. These two capabilities combined lead inevitably to better real-world decisions. In any kind of conflict situation between humans and machines, we would quickly find, like Garry Kasparov and Lee Sedol, that our every move has been anticipated and blocked. We would lose the game before it even started.
Managing mental activity
If managing activity in the real world seems complex, spare a thought for your poor brain, managing the activity of the “most complex object in the known universe”—itself. We don’t start out knowing how to think, any more than we start out knowing how to walk or play the piano. We learn how to do it. We can, to some extent, choose what thoughts to have. (Go on, think about a juicy hamburger or Bulgarian customs regulations—your choice!) In some ways, our mental activity is more complex than our activity in the real world, because our brains have far more moving parts than our bodies and those parts move much faster. The same is true for computers: for every move that AlphaGo makes on the Go board, it performs millions or billions of units of computation, each of which involves adding a branch to the lookahead search tree and evaluating the board position at the end of that branch. And each of those units of computation happens because the program makes a choice about which part of the tree to explore next. Very approximately, AlphaGo chooses computations that it expects will improve its eventual decision on the board.
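As a rough, toy sketch—emphatically not AlphaGo's actual algorithm—each such unit of computation amounts to choosing a promising leaf of the lookahead tree, adding branches to it, and evaluating the new positions:

```python
import random

# Toy sketch of a "unit of computation" in lookahead search. This is not
# AlphaGo's actual algorithm; the evaluation here is a random placeholder.

class Node:
    def __init__(self):
        self.value = random.random()   # stand-in for a learned position evaluation
        self.children = []

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

def one_unit_of_computation(root):
    # Decide which part of the tree to explore next (here, naively, the
    # leaf that currently looks most promising), then add branches to it
    # and evaluate the positions at the end of those branches.
    leaf = max(root.leaves(), key=lambda node: node.value)
    leaf.children.extend([Node(), Node()])

root = Node()
for _ in range(1000):      # AlphaGo performs millions or billions per move
    one_unit_of_computation(root)
print(f"lookahead tree now has {len(root.leaves())} leaf positions")
```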
It has been possible to work out a reasonable scheme for managing AlphaGo’s computational activity because that activity is simple and homogeneous: every unit of computation is of the same kind. Compared to other programs that use that same basic unit of computation, AlphaGo is probably quite efficient, but it’s probably extremely inefficient compared to other kinds of programs. For example, Lee Sedol, AlphaGo’s human opponent in the epochal match of 2016, probably does no more than a few thousand units of computation per move, but he has a much more flexible computational architecture with many more kinds of units of computation: these include dividing the board into subgames and trying to resolve their interactions; recognizing possible goals to attain and making high-level plans with actions like “keep this group alive” or “prevent my opponent from connecting these two groups”; thinking about how to achieve a specific goal, such as keeping a group alive; and ruling out whole classes of moves because they fail to address a significant threat.
We simply don’t know how to organize such complex and varied computational activity—how to integrate and build on the results from each and how to allocate computational resources to the various kinds of deliberation so that good decisions are found as quickly as possible. It is clear, however, that a simple computational architecture like AlphaGo’s cannot possibly work in the real world, where we routinely need to deal with decision horizons of not tens but billions of primitive steps and where the number of possible actions at any point is almost infinite. It’s important to remember that an intelligent agent in the real world is not restricted to playing Go or even finding Stuart’s keys—it’s just being. It can do anything next, but it cannot possibly afford to think about all the things it might do.
A system that can both discover new high-level actions—as described earlier—and manage its computational activity to focus on units of computation that quickly deliver significant improvements in decision quality would be a formidable decision maker in the real world. Like those of humans, its deliberations would be “cognitively efficient,” but it would not suffer from the tiny short-term memory and slow hardware that severely limit our ability to look far into the future, handle a large number of contingencies, and consider a large number of alternative plans.
More things missing?
If we put together everything we know how to do with all the potential new developments listed in this chapter, would it work? How would the resulting system behave? It would plow through time, absorbing vast quantities of information and keeping track of the state of the world on a massive scale by observation and inference. It would gradually improve its models of the world (which include models of humans, of course). It would use those models to solve complex problems and it would encapsulate and reuse its solution processes to make its deliberations more efficient and to enable the solution of still more complex problems. It would discover new concepts and actions, and these would allow it to improve its rate of discovery. It would make effective plans over increasingly long time scales.
In summary, it’s not obvious that anything else of great significance is missing, from the point of view of systems that are effective in achieving their objectives. Of course, the only way to be sure is to build it (once the breakthroughs have been achieved) and see what happens.
Imagining a Superintelligent Machine
The technical community has suffered from a failure of imagination when discussing the nature and impact of superintelligent AI. Often, we see discussions of reduced medical errors,⁴⁸ safer cars,⁴⁹ or other advances of an incremental nature. Robots are imagined as individual entities carrying their brains with them, whereas in fact they are likely to be wirelessly connected into a single, global entity that draws on vast stationary computing resources. It’s as if researchers are afraid of examining the real consequences of success in AI.
A general-purpose intelligent system can, by assumption, do what any human can do. For example, some humans did a lot of mathematics, algorithm design, coding, and empirical research to come up with the modern search engine. The results of all this work are very useful and of course very valuable. How valuable? A recent study showed that the median American adult surveyed would need to be paid at least $17,500 to give up using search engines for a year,⁵⁰ which translates to a global value in the tens of trillions of dollars.
Now imagine that search engines don’t exist yet because the necessary decades of work have not been done, but you have access instead to a superintelligent AI system. Simply by asking the question, you now have access to search engine technology, courtesy of the AI system. Done! Trillions of dollars in value, just for the asking, and not a single line of additional code written by you. The same goes for any other missing invention or series of inventions: if humans could do it, so can the machine.
This last point provides a useful lower bound—a pessimistic estimate—on what a superintelligent machine can do. By assumption, the machine is more capable than an individual human. There are many things an individual human cannot do, but a collection of n humans can do: put an astronaut on the Moon, create a gravitational-wave detector, sequence the human genome, run a country with hundreds of millions of people. So, roughly speaking, we create n software copies of the machine and connect them in the same way—with the same information and control flows—as the n humans. Now we have a machine that can do whatever n humans can do, except better, because each of its n components is superhuman.
This multi-agent cooperation design for an intelligent system is just a lower bound on the possible capabilities of machines because there are other designs that work better. In a collection of n humans, the total available information is kept separately in n brains and communicated very slowly and imperfectly between them. That’s why the n humans spend most of their time in meetings. In the machine, there is no need for this separation, which often prevents connecting the dots. For an example of disconnected dots in scientific discovery, a brief perusal of the long history of penicillin is quite eye-opening.⁵¹
Another useful method of stretching your imagination is to think about some particular form of sensory input—say, reading—and scale it up. Whereas a human can read and understand one book in a week, a machine could read and understand every book ever written—all 150 million of them—in a few hours. This requires a decent amount of processing power, but the books can be read largely in parallel, meaning that simply adding more chips allows the machine to scale up its reading process. By the same token, the machine can see everything at once through satellites, robots, and hundreds of millions of surveillance cameras; watch all the world’s TV broadcasts; and listen to all the world’s radio stations and phone conversations. Very quickly it would gain a far more detailed and accurate understanding of the world and its inhabitants than any human could possibly hope to acquire.
One can also imagine scaling the machine’s capacity for action. A human has direct control over only one body, while a machine can control thousands or millions. Some automated factories already exhibit this characteristic. Outside the factory, a machine that controls thousands of dexterous robots can, for example, produce vast numbers of houses, each one tailored to its future occupants’ needs and desires. In the lab, existing robotic systems for scientific research could be scaled up to perform millions of experiments simultaneously—perhaps to create complete predictive models of human biology down to the molecular level. Note that the machine’s reasoning capabilities will give it a far greater capacity to detect inconsistencies between scientific theories and between theories and observations. Indeed, it may already be the case that we have enough experimental evidence about biology to devise a cure for cancer: we just haven’t put it together.
In the cyber realm, machines already have access to billions of effectors—namely, the displays on all the phones and computers in the world. This partly explains the ability of IT companies to generate enormous wealth with very few employees; it also points to the severe vulnerability of the human race to manipulation via screens.
Scale of a different kind comes from the machine’s ability to look further into the future, with greater accuracy, than is possible for humans. We have seen this for chess and Go already; with the capacity for generating and analyzing hierarchical plans over long time scales and the ability to identify new abstract actions and high-level descriptive models, machines will transfer this advantage to domains such as mathematics (proving novel, useful theorems) and decision making in the real world. Tasks such as evacuating a large city in the event of an environmental disaster will be relatively straightforward, with the machine able to generate individual guidance for every person and vehicle to minimize the number of casualties.
The machine might work up a slight sweat when devising policy recommendations to prevent global warming. Earth systems modeling requires knowledge of physics (atmosphere, oceans), chemistry (carbon cycle, soils), biology (decomposition, migration), engineering (renewable energy, carbon capture), economics (industry, energy use), human nature (stupidity, greed), and politics (even more stupidity, even more greed). As noted, the machine will have access to vast quantities of evidence to feed all these models. It will be able to suggest or carry out new experiments and expeditions to narrow down the inevitable uncertainties—for example, to discover the true extent of gas hydrates in shallow ocean reservoirs. It will be able to consider a vast range of possible policy recommendations—laws, nudges, markets, inventions, and geoengineering interventions—but of course it will also need to find ways to persuade us to go along with them.
The Limits of Superintelligence
While stretching your imagination, don’t stretch it too far. A common mistake is to attribute godlike powers of omniscience to superintelligent AI systems—complete and perfect knowledge not just of the present but also of the future.⁵² This is quite implausible because it requires an unphysical ability to determine the exact current state of the world as well as an unrealizable ability to simulate, much faster than real time, the operation of a world that includes the machine itself (not to mention billions of brains, which would still be the second-most-complex objects in the universe).
This is not to say that it is impossible to predict some aspects of the future with a reasonable degree of certainty—for example, I know what class I’ll be teaching in what room at Berkeley almost a year from now, despite the protestations of chaos theorists about butterfly wings and all that. (Nor do I think that humans are anywhere close to predicting the future as well as the laws of physics allow!) Prediction depends on having the right abstractions—for example, I can predict that “I” will be “on stage in Wheeler Auditorium” on the Berkeley campus on the last Tuesday in April, but I cannot predict my exact location down to the millimeter or which atoms of carbon will have been incorporated into my body by then.
Machines are also subject to certain speed limits imposed by the real world on the rate at which new knowledge of the world can be acquired—one of the valid points made by Kevin Kelly in his article on oversimplified predictions about superhuman AI.⁵³ For example, to determine whether a specific drug cures a certain kind of cancer in an experimental animal, a scientist—human or machine—has two choices: inject the animal with the drug and wait several weeks or run a sufficiently accurate simulation. To run a simulation, however, requires a great deal of empirical knowledge of biology, some of which is currently unavailable; so, more model-building experiments would have to be done first. Undoubtedly, these would take time and must be done in the real world.
On the other hand, a machine scientist could run vast numbers of model-building experiments in parallel, could integrate their outcomes into an internally consistent (albeit very complex) model, and could compare the model’s predictions with the entirety of experimental evidence known to biology. Moreover, simulating the model does not necessarily require a quantum-mechanical simulation of the entire organism down to the level of individual molecular reactions—which, as Kelly points out, would take more time than simply doing the experiment in the real world. Just as I can predict my future location on Tuesdays in April with some certainty, properties of biological systems can be predicted accurately with abstract models. (Among other reasons, this is because biology operates with robust control systems based on aggregate feedback loops, so that small variations in initial conditions usually don’t lead to large variations in outcomes.) Thus, while instantaneous machine discoveries in the empirical sciences are unlikely, we can expect that science will proceed much faster with the help of machines. Indeed, it already is.
A final limitation of machines is that they are not human. This puts them at an intrinsic disadvantage when trying to model and predict one particular class of objects: humans. Our brains are all quite similar, so we can use them to simulate—to experience, if you will—the mental and emotional lives of others. This, for us, comes for free. (If you think about it, machines have an even greater advantage with each other: they can actually run each other’s code!) For example, I don’t need to be an expert on neural sensory systems to know what it feels like when you hit your thumb with a hammer. I can just hit my thumb with a hammer. Machines, on the other hand, have to start almost⁵⁴ from scratch in their understanding of humans: they have access only to our external behavior, plus all the neuroscience and psychology literature, and have to develop an understanding of how we work on that basis. In principle, they will be able to do this, but it’s reasonable to suppose that acquiring a human-level or superhuman understanding of humans will take them longer than most other capabilities.
How Will AI Benefit Humans?
Our intelligence is responsible for our civilization. With access to greater intelligence we could have a greater—and perhaps far better—civilization. One can speculate about solving major open problems such as extending human life indefinitely or developing faster-than-light travel, but these staples of science fiction are not yet the driving force for progress in AI. (With superintelligent AI, we’ll probably be able to invent all sorts of quasi-magical technologies, but it’s hard to say now what those might be.) Consider, instead, a far more prosaic goal: raising the living standard of everyone on Earth, in a sustainable way, to a level that would be viewed as quite respectable in a developed country. Choosing (somewhat arbitrarily) respectable to mean the eighty-eighth percentile in the United States, the stated goal represents almost a tenfold increase in global gross domestic product (GDP), from $76 trillion to $750 trillion per year.⁵⁵
To calculate the cash value of such a prize, economists use the net present value of the income stream, which takes into account the discounting of future income relative to the present. The extra income of $674 trillion per year has a net present value of roughly $13,500 trillion,⁵⁶ assuming a discount factor of 5 percent. So, in very crude terms, this is a ballpark figure for what human-level AI might be worth if it can deliver a respectable living standard for everyone. With numbers like this, it’s not surprising that companies and countries are investing tens of billions of dollars annually in AI research and development.⁵⁷ Even so, the sums invested are minuscule compared to the size of the prize.
Of course, these are all made-up numbers unless one has some idea of how human-level AI could achieve the feat of raising living standards. It can do this only by increasing the per-capita production of goods and services. Put another way: the average human can never expect to consume more than the average human produces. The example of self-driving taxis discussed earlier in the chapter illustrates the multiplier effect of AI: with an automated service, it should be possible for (say) ten people to manage a fleet of one thousand vehicles, so each person is producing one hundred times as much transportation as before. The same goes for manufacturing the cars and for extracting the raw materials from which the cars are made. Indeed, some iron-ore mining operations in northern Australia, where temperatures regularly exceed 45 degrees Celsius (113 degrees Fahrenheit), are almost completely automated already.⁵⁸
These present-day applications of AI are special-purpose systems: self-driving cars and self-operating mines have required huge investments in research, mechanical design, software engineering, and testing to develop the necessary algorithms and to make sure that they work as intended. That’s just how things are done in all spheres of engineering. That’s how things used to be done in personal travel too: if you wanted to travel from Europe to Australia and back in the seventeenth century, it would have involved a huge project costing vast sums of money, requiring years of planning, and carrying a high risk of death. Now we are used to the idea of transportation as a service (TaaS): if you need to be in Melbourne early next week, it just requires a few taps on your phone and a relatively minuscule amount of money.
General-purpose AI would be everything as a service (EaaS). There would be no need to employ armies of specialists in different disciplines, organized into hierarchies of contractors and subcontractors, in order to carry out a project. All embodiments of general-purpose AI would have access to all the knowledge and skills of the human race, and more besides. The only differentiation would be in the physical capabilities: dexterous legged robots for construction or surgery, wheeled robots for large-scale goods transportation, quadcopter robots for aerial inspections, and so on. In principle—politics and economics aside—everyone could have at their disposal an entire organization composed of software agents and physical robots, capable of designing and building bridges, improving crop yields, cooking dinner for a hundred guests, running elections, or doing whatever else needs doing. It’s the generality of general-purpose intelligence that makes this possible.
History has shown, of course, that a tenfold increase in global GDP per capita is possible without AI—it’s just that it took 190 years (from 1820 to 2010) to achieve that increase.⁵⁹ It required the development of factories, machine tools, automation, railways, steel, cars, airplanes, electricity, oil and gas production, telephones, radio, television, computers, the Internet, satellites, and many other revolutionary inventions. The tenfold increase in GDP posited in the preceding paragraphs is predicated not on further revolutionary technologies but on the ability of AI systems to employ what we already have more effectively and at greater scale.
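A quick back-of-the-envelope calculation puts that historical rate of improvement in perspective:

```python
# Back-of-the-envelope: a tenfold increase over 190 years corresponds to a
# compound annual growth rate of 10**(1/190) - 1, i.e. roughly 1.2% per year.
years = 2010 - 1820
annual_growth = 10 ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")   # ≈ 1.2%
```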
Of course, there will be effects besides the purely material benefit of raising living standards. For example, personal tutoring is known to be far more effective than classroom teaching, but when done by humans it is simply unaffordable—and always will be—for the vast majority of people. With AI tutors, the potential of each child, no matter how poor, can be realized. The cost per child would be negligible, and that child would live a far richer and more productive life. The pursuit of artistic and intellectual endeavors, whether individually or collectively, would be a normal part of life rather than a rarefied luxury.
In the area of health, AI systems should enable researchers to unravel and master the vast complexities of human biology and thereby gradually banish disease. Greater insights into human psychology and neurochemistry should lead to broad improvements in mental health.
Perhaps more unconventionally, AI could enable far more effective authoring tools for virtual reality (VR) and could populate VR environments with far more interesting entities. This might turn VR into the medium of choice for literary and artistic expression, creating experiences of a richness and depth that is currently unimaginable.
And in the mundane world of daily life, an intelligent assistant and guide would—if well designed and not co-opted by economic and political interests—empower every individual to act effectively on their own behalf in an increasingly complex and sometimes hostile economic and political system. You would, in effect, have a high-powered lawyer, accountant, and political adviser on call at any time. Just as traffic jams are expected to be smoothed out by intermixing even a small percentage of autonomous vehicles, one can only hope that wiser policies and fewer conflicts will emerge from a better-informed and better-advised global citizenry.
These developments taken together could change the dynamic of history—at least that part of history that has been driven by conflicts within and between societies for access to the wherewithal of life. If the pie is essentially infinite, then fighting others for a larger share makes little sense. It would be like fighting over who gets the most digital copies of the newspaper—completely pointless when anyone can make as many digital copies as they want for free.
There are some limits to what AI can provide. The pies of land and raw materials are not infinite, so there cannot be unlimited population growth and not everyone will have a mansion in a private park. (This will eventually necessitate mining elsewhere in the solar system and constructing artificial habitats in space; but I promised not to talk about science fiction.) The pie of pride is also finite: only 1 percent of people can be in the top 1 percent on any given metric. If human happiness requires being in the top 1 percent, then 99 percent of humans are going to be unhappy, even when the bottom 1 percent has an objectively splendid lifestyle.⁶⁰ It will be important, then, for our cultures to gradually down-weight pride and envy as central elements of perceived self-worth.
As Nick Bostrom puts it at the end of his book Superintelligence, success in AI will yield “a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment.” If we fail to take advantage of what AI has to offer, we will have only ourselves to blame.
4
MISUSES OF AI
A compassionate and jubilant use of humanity’s cosmic endowment sounds wonderful, but we also have to reckon with the rapid rate of innovation in the malfeasance sector. Ill-intentioned people are thinking up new ways to misuse AI so quickly that this chapter is likely to be outdated even before it attains printed form. Think of it not as depressing reading, however, but as a call to act before it is too late.
Surveillance, Persuasion, and Control
The automated Stasi
The Ministerium für Staatssicherheit of East Germany, more commonly known as the Stasi, is widely regarded as “one of the most effective and repressive intelligence and secret police agencies to have ever existed.”¹ It maintained files on the great majority of East German households. It monitored phone calls, read letters, and planted hidden cameras in apartments and hotels. It was ruthlessly effective at identifying and eliminating dissident activity. Its preferred modus operandi was psychological destruction rather than imprisonment or execution. This level of control came at great cost, however: by some estimates, more than a quarter of working-age adults were Stasi informants. Stasi paper records have been estimated at twenty billion pages² and the task of processing and acting on the huge incoming flows of information began to exceed the capacity of any human organization.
It should come as no surprise, then, that intelligence agencies have spotted the potential for using AI in their work. For many years, they have been applying simple forms of AI technology, including voice recognition and identification of key words and phrases in both speech and text. Increasingly, AI systems are able to understand the content of what people are saying and doing, whether in speech, text, or video surveillance. In regimes where this technology is adopted for the purposes of control, it will be as if every citizen had their own personal Stasi operative watching over them twenty-four hours a day.³
Even in the civilian sphere, in relatively free countries, we are subject to increasingly effective surveillance. Corporations collect and sell information about our purchases, Internet and social network usage, electrical appliance usage, calling and texting records, employment, and health. Our locations can be tracked through our cell phones and our Internet-connected cars. Cameras recognize our faces on the street. All this data, and much more, can be pieced together by intelligent information integration systems to produce a fairly complete picture of what each of us is doing, how we live our lives, who we like and dislike, and how we will vote.⁴ The Stasi will look like amateurs by comparison.
Controlling your behavior
Once surveillance capabilities are in place, the next step is to modify your behavior to suit those who are deploying this technology. One rather crude method is automated, personalized blackmail: a system that understands what you are doing—whether by listening, reading, or watching you—can easily spot things you should not be doing. Once it finds something, it will enter into correspondence with you to extract the largest possible amount of money (or to coerce behavior, if the goal is political control or espionage). The extraction of money works as the perfect reward signal for a reinforcement learning algorithm, so we can expect AI systems to improve rapidly in their ability to identify and profit from misbehavior. Early in 2015, I suggested to a computer security expert that automated blackmail systems, driven by reinforcement learning, might soon become feasible; he laughed and said it was already happening. The first blackmail bot to be widely publicized was Delilah, identified in July 2016.⁵
A more subtle way to change people’s behavior is to modify their information environment so that they believe different things and make different decisions. Of course, advertisers have been doing this for centuries as a way of modifying the purchasing behavior of individuals. Propaganda as a tool of war and political domination has an even longer history.
So what’s different now? First, because AI systems can track an individual’s online reading habits, preferences, and likely state of knowledge, they can tailor specific messages to maximize impact on that individual while minimizing the risk that the information will be disbelieved. Second, the AI system knows whether the individual reads the message, how long they spend reading it, and whether they follow additional links within the message. It then uses these signals as immediate feedback on the success or failure of its attempt to influence each individual; in this way, it quickly learns to become more effective in its work. This is how content selection algorithms on social media have had their insidious effect on political opinions.
Another recent change is that the combination of AI, computer graphics, and speech synthesis is making it possible to generate deepfakes—realistic video and audio content of just about anyone saying or doing just about anything. The technology will require little more than a verbal description of the desired event, making it usable by more or less anyone in the world. Cell phone video of Senator X accepting a bribe from cocaine dealer Y at shady establishment Z? No problem! This kind of content can induce unshakeable beliefs in things that never happened.⁶ In addition, AI systems can generate millions of false identities—the so-called bot armies—that can pump out billions of comments, tweets, and recommendations daily, swamping the efforts of mere humans to exchange truthful information. Online marketplaces such as eBay, Taobao, and Amazon that rely on reputation systems⁷ to build trust between buyers and sellers are constantly at war with bot armies designed to corrupt the markets.
Finally, methods of control can be direct if a government is able to implement rewards and punishments based on behavior. Such a system treats people as reinforcement learning algorithms, training them to optimize the objective set by the state. The temptation for a government, particularly one with a top-down, engineering mind-set, is to reason as follows: it would be better if everyone behaved well, had a patriotic attitude, and contributed to the progress of the country; technology enables measurement of individual behavior, attitudes, and contributions; therefore, everyone will be better off if we set up a technology-based system of monitoring and control based on rewards and punishments.
There are several problems with this line of thinking. First, it ignores the psychic cost of living under a system of intrusive monitoring and coercion; outward harmony masking inner misery is hardly an ideal state. Every act of kindness ceases to be an act of kindness and becomes instead an act of personal score maximization and is perceived as such by the recipient. Or worse, the very concept of a voluntary act of kindness gradually becomes just a fading memory of something people used to do. Visiting an ailing friend in hospital will, under such a system, have no more moral significance and emotional value than stopping at a red light. Second, the scheme falls victim to the same failure mode as the standard model of AI, in that it assumes that the stated objective is in fact the true, underlying objective. Inevitably, Goodhart’s law will take over, whereby individuals optimize the official measure of outward behavior, just as universities have learned to optimize the “objective” measures of “quality” used by university ranking systems instead of improving their real (but unmeasured) quality.⁸ Finally, the imposition of a uniform measure of behavioral virtue misses the point that a successful society may comprise a wide variety of individuals, each contributing in their own way.
A right to mental security
One of the great achievements of civilization has been the gradual improvement in physical security for humans. Most of us can expect to conduct our daily lives without constant fear of injury and death. Article 3 of the 1948 Universal Declaration of Human Rights states, “Everyone has the right to life, liberty and security of person.”
I would like to suggest that everyone should also have the right to mental security—the right to live in a largely true information environment. Humans tend to believe the evidence of our eyes and ears. We trust our family, friends, teachers, and (some) media sources to tell us what they believe to be the truth. Even though we do not expect used-car salespersons and politicians to tell us the truth, we have trouble believing that they are lying as brazenly as they sometimes do. We are, therefore, extremely vulnerable to the technology of misinformation.
The right to mental security does not appear to be enshrined in the Universal Declaration. Articles 18 and 19 establish the rights of “freedom of thought” and “freedom of opinion and expression.” One’s thoughts and opinions are, of course, partly formed by one’s information environment, which, in turn, is subject to Article 19’s “right to . . . impart information and ideas through any media and regardless of frontiers.” That is, anyone, anywhere in the world, has the right to impart false information to you. And therein lies the difficulty: democratic nations, particularly the United States, have for the most part been reluctant—or constitutionally unable—to prevent the imparting of false information on matters of public concern because of justifiable fears regarding government control of speech. Rather than pursuing the idea that there is no freedom of thought without access to true information, democracies seem to have placed a naïve trust in the idea that the truth will win out in the end, and this trust has left us unprotected. Germany is an exception; it recently passed the Network Enforcement Act, which requires content platforms to remove proscribed hate speech and fake news, but this has come under considerable criticism as being unworkable and undemocratic.⁹
For the time being, then, we can expect our mental security to remain under attack, protected mainly by commercial and volunteer efforts. These efforts include fact-checking sites such as factcheck.org and snopes.com—but of course other “fact-checking” sites are springing up to declare truth as lies and lies as truth.
The major information utilities such as Google and Facebook have come under extreme pressure in Europe and the United States to “do something about it.” They are experimenting with ways to flag or relegate false content—using both AI and human screeners—and to direct users to verified sources that counteract the effects of misinformation. Ultimately, all such efforts rely on circular reputation systems, in the sense that sources are trusted because trusted sources report them to be trustworthy. If enough false information is propagated, these reputation systems can fail: sources that are actually trustworthy can become untrusted and vice versa, as appears to be occurring today with major media sources such as CNN and Fox News in the United States. Aviv Ovadya, a technologist working against misinformation, has called this the “infopocalypse—a catastrophic failure of the marketplace of ideas.”¹⁰
One way to protect the functioning of reputation systems is to inject sources that are as close as possible to ground truth. A single fact that is certainly true can invalidate any number of sources that are only somewhat trustworthy, if those sources disseminate information contrary to the known fact. In many countries, notaries function as sources of ground truth to maintain the integrity of legal and real-estate information; they are usually disinterested third parties in any transaction and are licensed by governments or professional societies. (In the City of London, the Worshipful Company of Scriveners has been doing this since 1373, suggesting that a certain stability inheres in the role of truth telling.) If formal standards, professional qualifications, and licensing procedures emerge for fact-checkers, that would tend to preserve the validity of the information flows on which we depend. Organizations such as the W3C Credible Web group and the Credibility Coalition aim to develop technological and crowdsourcing methods for evaluating information providers, which would then allow users to filter out unreliable sources.
A second way to protect reputation systems is to impose a cost for purveying false information. Thus, some hotel rating sites accept reviews concerning a particular hotel only from those who have booked and paid for a room at that hotel through the site, while other rating sites accept reviews from anyone. It will come as no surprise that ratings at the former sites are far less biased, because they impose a cost (paying for an unnecessary hotel room) for fraudulent reviews.¹¹ Regulatory penalties are more controversial: no one wants a Ministry of Truth, and Germany’s Network Enforcement Act penalizes only the content platform, not the person posting the fake news. On the other hand, just as many nations and many US states make it illegal to record telephone calls without permission, it ought, at least, to be possible to impose penalties for creating fictitious audio and video recordings of real people.
Finally, there are two other facts that work in our favor. First, almost no one actively wants, knowingly, to be lied to. (This is not to say that parents always inquire vigorously into the truthfulness of those who praise their children’s intelligence and charm; it’s just that they are less likely to seek such approval from someone who is known to lie at every opportunity.) This means that people of all political persuasions have an incentive to adopt tools that help them distinguish truth from lies. Second, no one wants to be known as a liar, least of all news outlets. This means that information providers—at least those for whom reputation matters—have an incentive to join industry associations and subscribe to codes of conduct that favor truth telling. In turn, social media platforms can offer users the option of seeing content from only reputable sources that subscribe to these codes and subject themselves to third-party fact-checking.
Lethal Autonomous Weapons
The United Nations defines lethal autonomous weapons systems (AWS for short, because LAWS is quite confusing) as weapons systems that “locate, select, and eliminate human targets without human intervention.” AWS have been described, with good reason, as the “third revolution in warfare,” after gunpowder and nuclear weapons.
You may have read articles in the media about AWS; usually the article will call them killer robots and will be festooned with images from the Terminator movies. This is misleading in at least two ways: first, it suggests that autonomous weapons are a threat because they might take over the world and destroy the human race; second, it suggests that autonomous weapons will be humanoid, conscious, and evil.
The net effect of the media’s portrayal of the issue has been to make it seem like science fiction. Even the German government has been taken in: it recently issued a statement¹² asserting that “having the ability to learn and develop self-awareness constitutes an indispensable attribute to be used to define individual functions or weapon systems as autonomous.” (This makes as much sense as asserting that a missile isn’t a missile unless it goes faster than the speed of light.) In fact, autonomous weapons will have the same degree of autonomy as a chess program, which is given the mission of winning the game but decides by itself where to move its pieces and which enemy pieces to eliminate.
FIGURE 7: (left) Harop loitering weapon produced by Israel Aerospace Industries; (right) still image from the Slaughterbots video showing a possible design for an autonomous weapon containing a small, explosive-driven projectile.
AWS are not science fiction. They already exist. Probably the clearest example is Israel’s Harop (figure 7, left), a loitering munition with a ten-foot wingspan and a fifty-pound warhead. It searches for up to six hours in a given geographical region for any target that meets a given criterion and then destroys it. The criterion could be “emits a radar signal resembling antiaircraft radar” or “looks like a tank.”
By combining recent advances in miniature quadrotor design, miniature cameras, computer vision chips, navigation and mapping algorithms, and methods for detecting and tracking humans, it would be possible in fairly short order to field an antipersonnel weapon like the Slaughterbot¹³ shown in figure 7 (right). Such a weapon could be tasked with attacking anyone meeting certain visual criteria (age, gender, uniform, skin color, and so on) or even specific individuals based on face recognition. I’m told that the Swiss Defense Department has already built and tested a real Slaughterbot and found that, as expected, the technology is both feasible and lethal.
Since 2014, diplomatic discussions have been underway in Geneva that may lead to a treaty banning AWS. At the same time, some of the major participants in these discussions (the United States, China, Russia, and to some extent Israel and the UK) are engaged in a dangerous competition to develop autonomous weapons. In the United States, for example, the CODE (Collaborative Operations in Denied Environments) program aims to move towards autonomy by enabling drones to function with at best intermittent radio contact. The drones will “hunt in packs, like wolves” according to the program manager.¹⁴ In 2016, the US Air Force demonstrated the in-flight deployment of 103 Perdix micro-drones from three F/A-18 fighters. According to the announcement, “Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature.”¹⁵
You may think it’s pretty obvious that building machines that can decide to kill humans is a bad idea. But “pretty obvious” is not always persuasive to governments—including some of those listed in the preceding paragraph—who are bent on achieving what they think of as strategic superiority. A more convincing reason to reject autonomous weapons is that they are scalable weapons of mass destruction.
Scalable is a term from computer science; a process is scalable if you can do a million times more of it essentially by buying a million times more hardware. Thus, Google handles roughly five billion search requests per day by having not millions of employees but millions of computers. With autonomous weapons, you can do a million times more killing by buying a million times more weapons, precisely because the weapons are autonomous. Unlike remotely piloted drones or AK-47s, they don’t need individual human supervision to do their work.
As weapons of mass destruction, scalable autonomous weapons have advantages for the attacker compared to nuclear weapons and carpet bombing: they leave property intact and can be applied selectively to eliminate only those who might threaten an occupying force. They could certainly be used to wipe out an entire ethnic group or all the adherents of a particular religion (if adherents have visible indicia). Moreover, whereas the use of nuclear weapons represents a cataclysmic threshold that we have (often by sheer luck) avoided crossing since 1945, there is no such threshold with scalable autonomous weapons. Attacks could escalate smoothly from one hundred casualties to one thousand to ten thousand to one hundred thousand. In addition to actual attacks, the mere threat of attacks by such weapons makes them an effective tool for terror and oppression. Autonomous weapons will greatly reduce human security at all levels: personal, local, national, and international.
This is not to say that autonomous weapons will be the end of the world in the way envisaged in the Terminator movies. They need not be especially intelligent—a self-driving car probably needs to be smarter—and their missions will not be of the “take over the world” variety. The existential risk from AI does not come primarily from simple-minded killer robots. On the other hand, superintelligent machines in conflict with humanity could certainly arm themselves this way, by turning relatively stupid killer robots into physical extensions of a global control system.
Eliminating Work as We Know It
Thousands of media articles and opinion pieces and several books have been written on the topic of robots taking jobs from humans. Research centers are springing up all over the world to understand what is likely to happen.¹⁶ The titles of Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future¹⁷ and Calum Chace’s The Economic Singularity: Artificial Intelligence and the Death of Capitalism¹⁸ do a pretty good job of summarizing the concern. Although, as will soon become evident, I am by no means qualified to opine on what is essentially a matter for economists,¹⁹ I suspect that the issue is too important to leave entirely to them.
The issue of technological unemployment was brought to the fore in a famous article, “Economic Possibilities for Our Grandchildren,” by John Maynard Keynes. He wrote the article in 1930, when the Great Depression had created mass unemployment in Britain, but the topic has a much longer history. Aristotle, in Book I of his Politics, presents the main point quite clearly:
For if every instrument could accomplish its own work, obeying or anticipating the will of others . . . if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.
Everyone agrees with Aristotle’s observation that there is an immediate reduction in employment when an employer finds a mechanical method to perform work previously done by a person. The issue is whether the so-called compensation effects that ensue—and that tend to increase employment—will eventually make up for this reduction. The optimists say yes—and in the current debate, they point to all the new jobs that emerged after previous industrial revolutions. The pessimists say no—and in the current debate, they argue that machines will do all the “new jobs” too. When a machine replaces one’s physical labor, one can sell mental labor. When a machine replaces one’s mental labor, what does one have left to sell?
In Life 3.0, Max Tegmark depicts the debate as a conversation between two horses discussing the rise of the internal combustion engine in 1900. One predicts “new jobs for horses. . . . That’s what’s always happened before, like with the invention of the wheel and the plow.” For most horses, alas, the “new job” was to be pet food.
FIGURE 8: A notional graph of housepainting employment as painting technology improves.
The debate has persisted for millennia because there are effects in both directions. The actual outcome depends on which effects matter more. Consider, for example, what happens to housepainters as technology improves. For the sake of simplicity, I’ll let the width of the paintbrush stand for the degree of automation:
- If the brush is one hair (a tenth of a millimeter) wide, it takes thousands of person-years to paint a house and essentially no housepainters are employed.
- With brushes a millimeter wide, perhaps a few delicate murals are painted in the royal palace by a handful of painters. At one centimeter, the nobility begin to follow suit.
- At ten centimeters (four inches), we reach the realm of practicality: most homeowners have their houses painted inside and out, although perhaps not all that frequently, and thousands of housepainters find jobs.
- Once we get to wide rollers and spray guns—the equivalent of a paintbrush about a meter wide—the price goes down considerably, but demand may begin to saturate so the number of housepainters drops somewhat.
- When one person manages a team of one hundred housepainting robots—the productivity equivalent of a paintbrush one hundred meters wide—then whole houses can be painted in an hour and very few housepainters will be working.
Thus, the direct effects of technology work both ways: at first, by increasing productivity, technology can increase employment by reducing the price of an activity and thereby increasing demand; subsequently, further increases in technology mean that fewer and fewer humans are required. Figure 8 illustrates these developments.²⁰
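The shape of this curve is easy to reproduce with a toy model. The sketch below is purely illustrative; the demand curve, prices, and productivity figures are my own assumptions rather than anything from the text, but they capture the two competing effects: cheaper painting raises demand, while each painter covers more houses.

```python
# Illustrative only: a toy model of the housepainting "hump".
# All numbers are invented; only the qualitative shape matters.

def houses_demanded(price, max_houses=100_000, reference_price=1_000.0):
    # Demand saturates as painting gets cheap and collapses when it is very expensive.
    return max_houses / (1.0 + (price / reference_price) ** 2)

def painters_employed(productivity, labor_cost_per_house_year=100_000.0):
    price = labor_cost_per_house_year / productivity   # better tech -> cheaper paint job
    demand = houses_demanded(price)                     # cheaper job -> more houses painted
    return demand / productivity                        # but each painter covers more houses

for p in [0.001, 0.01, 0.1, 1, 10, 100, 1_000, 10_000]:
    print(f"productivity={p:>8}: painters employed ~ {painters_employed(p):,.1f}")
```

Running this shows employment climbing to a peak at moderate productivity and then falling away again, which is the hump sketched in figure 8.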
Many technologies exhibit similar curves. If, in some given sector of the economy, we are to the left of the peak, then improving technology increases employment in that sector; present-day examples might include tasks such as graffiti removal, environmental cleanup, inspection of shipping containers, and housing construction in less developed countries, all of which might become more economically feasible if we have robots to help us. If we are already to the right of the peak, then further automation decreases employment. For example, it’s not hard to predict that elevator operators will continue to be squeezed out. In the long run, we have to expect that most industries are going to be pushed to the far right on the curve. One recent article, based on a careful econometric study by economists David Autor and Anna Salomons, states that “over the last 40 years, jobs have fallen in every single industry that introduced technologies to enhance productivity.”²¹
What about the compensation effects described by the economic optimists?
- Some people have to make the painting robots. How many? Far fewer than the number of housepainters the robots replace—otherwise, it would cost more to paint houses with robots, not less, and no one would buy the robots.
- Housepainting becomes somewhat cheaper, so people call in the housepainters a bit more often.
- Finally, because we pay less for housepainting, we have more money to spend on other things, thereby increasing employment in other sectors.
Economists have tried to measure the size of these effects in various industries experiencing increased automation, but the results are generally inconclusive.
Historically, most mainstream economists have argued from the “big picture” view: automation increases productivity, so, as a whole, humans are better off, in the sense that we enjoy more goods and services for the same amount of work.
Economic theory does not, unfortunately, predict that each human will be better off as a result of automation. Generally, automation increases the share of income going to capital (the owners of the housepainting robots) and decreases the share going to labor (the ex-housepainters). The economists Erik Brynjolfsson and Andrew McAfee, in The Second Machine Age, argue that this has already been happening for several decades. Data for the United States are shown in figure 9. They indicate that between 1947 and 1973, wages and productivity increased together, but after 1973, wages stagnated even while productivity roughly doubled. Brynjolfsson and McAfee call this the Great Decoupling. Other leading economists have also sounded the alarm, including Nobel laureates Robert Shiller, Mike Spence, and Paul Krugman; Klaus Schwab, head of the World Economic Forum; and Larry Summers, former chief economist of the World Bank and Treasury secretary under President Bill Clinton.
Those arguing against the notion of technological unemployment often point to bank tellers, whose work can be done in part by ATMs, and retail cashiers, whose work is sped up by barcodes and RFID tags on merchandise. It is often claimed that these occupations are growing because of technology. Indeed, the number of tellers in the United States roughly doubled from 1970 to 2010, although it should be noted that the US population grew by 50 percent and the financial sector by over 400 percent in the same period,²² so it is difficult to attribute all, or perhaps any, of the employment growth to ATMs. Unfortunately, between 2010 and 2016 about one hundred thousand tellers lost their jobs, and the US Bureau of Labor Statistics (BLS) predicts another forty thousand job losses by 2026: “Online banking and automation technology are expected to continue replacing more job duties that tellers traditionally performed.”²³ The data on retail cashiers are no more encouraging: the number per capita dropped by 5 percent from 1997 to 2015, and the BLS says, “Advances in technology, such as self-service checkout stands in retail stores and increasing online sales, will continue to limit the need for cashiers.” Both sectors appear to be on the downslope. The same is true of almost all low-skilled occupations that involve working with machines.
FIGURE 9: Economic production and real median wages in the United States since 1947. (Data from the Bureau of Labor Statistics.)
Which occupations are about to decline as new, AI-based technology arrives? The prime example cited in the media is that of driving. In the United States there are about 3.5 million truck drivers; many of these jobs would be vulnerable to automation. Amazon, among other companies, is already using self-driving trucks for freight haulage on interstate freeways, albeit currently with human backup drivers.²⁴ It seems very likely that the long-haul part of each truck journey will soon be autonomous, while humans, for the time being, will handle city traffic, pickup, and delivery. As a consequence of these expected developments, very few young people are interested in trucking as a career; ironically, there is currently a significant shortage of truck drivers in the United States, which is only hastening the onset of automation.
White-collar jobs are also at risk. For example, the BLS projects a 13 percent decline in per-capita employment of insurance underwriters from 2016 to 2026: “Automated underwriting software allows workers to process applications more quickly than before, reducing the need for as many underwriters.” If language technology develops as expected, many sales and customer service jobs will also be vulnerable, as well as jobs in the legal profession. (In a 2018 competition, AI software outscored experienced law professors in analyzing standard nondisclosure agreements and completed the task two hundred times faster.²⁵) Routine forms of computer programming—the kind that is often outsourced today—are also likely to be automated. Indeed, almost anything that can be outsourced is a good candidate for automation, because outsourcing involves decomposing jobs into tasks that can be parceled up and distributed in a decontextualized form. The robot process automation industry produces software tools that achieve exactly this effect for clerical tasks performed online.
As AI progresses, it is certainly possible—perhaps even likely—that within the next few decades essentially all routine physical and mental labor will be done more cheaply by machines. Since we ceased to be hunter-gatherers thousands of years ago, our societies have used most people as robots, performing repetitive manual and mental tasks, so it is perhaps not surprising that robots will soon take on these roles. When this happens, it will push wages below the poverty line for those people who are unable to compete for the highly skilled jobs that remain. Larry Summers put it this way: “It may well be that, given the possibilities for substitution [of capital for labor], some categories of labor will not be able to earn a subsistence income.”²⁶ This is precisely what happened to the horses: mechanical transportation became cheaper than the upkeep cost of a horse, so horses became pet food. Faced with the socioeconomic equivalent of becoming pet food, humans will be rather unhappy with their governments.
Faced with potentially unhappy humans, governments around the world are beginning to devote some attention to the issue. Most have already discovered that the idea of retraining everyone as a data scientist or robot engineer is a nonstarter—the world might need five or ten million of these, but nowhere close to the billion or so jobs that are at risk. Data science is a very tiny lifeboat for a giant cruise ship.²⁷
Some are working on “transition plans”—but transition to what? We need a plausible destination in order to plan a transition—that is, we need a plausible picture of a desirable future economy where most of what we currently call work is done by machines.
One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary. Keynes envisaged just such a future in his essay “Economic Possibilities for Our Grandchildren.” He described the high unemployment afflicting Great Britain in 1930 as a “temporary phase of maladjustment” caused by an “increase of technical efficiency” that took place “faster than we can deal with the problem of labour absorption.” He did not, however, imagine that in the long run—after a century of further technological advances—there would be a return to full employment:
Thus for the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.
Such a future requires a radical change in our economic system, because, in many countries, those who do not work face poverty or destitution. Thus, modern proponents of Keynes’s vision usually support some form of universal basic income, or UBI. Funded by value-added taxes or by taxes on income from capital, UBI would provide a reasonable income to every adult, regardless of circumstance. Those who aspire to a higher standard of living can still work without losing the UBI, while those who do not can spend their time as they see fit. Perhaps surprisingly, UBI has support across the political spectrum, ranging from the Adam Smith Institute²⁸ to the Green Party.²⁹
For some, UBI represents a version of paradise.³⁰ For others, it represents an admission of failure—an assertion that most people will have nothing of economic value to contribute to society. They can be fed and housed—mostly by machines—but otherwise left to their own devices. The truth, as always, lies somewhere in between, and it depends largely on how one views human psychology. Keynes, in his essay, made a clear distinction between those who strive and those who enjoy—those “purposive” people for whom “jam is not jam unless it is a case of jam to-morrow and never jam to-day” and those “delightful” people who are “capable of taking direct enjoyment in things.” The UBI proposal assumes that the great majority of people are of the delightful variety.
Keynes suggests that striving is one of the “habits and instincts of the ordinary man, bred into him for countless generations” rather than one of the “real values of life.” He predicts that this instinct will gradually disappear. Against this view, one may suggest that striving is intrinsic to what it means to be truly human. Rather than striving and enjoying being mutually exclusive, they are often inseparable: true enjoyment and lasting fulfillment come from having a purpose and achieving it (or at least trying), usually in the face of obstacles, rather than from passive consumption of immediate pleasure. There is a difference between climbing Everest and being deposited on top by helicopter.
The connection between striving and enjoying is a central theme for our understanding of how to fashion a desirable future. Perhaps future generations will wonder why we ever worried about such a futile thing as “work.” Just in case that change in attitudes is slow in coming, let’s consider the economic implications of the view that most people will be better off with something useful to do, even though the great majority of goods and services will be produced by machines with very little human supervision. Inevitably, most people will be engaged in supplying interpersonal services that can be provided—or which we prefer to be provided—only by humans. That is, if we can no longer supply routine physical labor and routine mental labor, we can still supply our humanity. We will need to become good at being human.³¹
Current professions of this kind include psychotherapists, executive coaches, tutors, counselors, companions, and those who care for children and the elderly. The phrase caring professions is often used in this context, but that is misleading: it has a positive connotation for those providing care, to be sure, but a negative connotation of dependency and helplessness for the recipients of care. But consider this observation, again from Keynes:
It will be those peoples, who can keep alive, and cultivate into a fuller perfection, the art of life itself and do not sell themselves for the means of life, who will be able to enjoy the abundance when it comes.
All of us need help in learning “the art of life itself.” This is not a matter of dependency but of growth. The capacity to inspire others and to confer the ability to appreciate and to create—be it in art, music, literature, conversation, gardening, architecture, food, wine, or video games—is likely to be more needed than ever.
The next question is income distribution. In most countries, this has been moving in the wrong direction for several decades. It’s a complex issue, but one thing is clear: high incomes and high social standing usually follow from providing high added value. The profession of childcare, to pick one example, is associated with low incomes and low social standing. This is, in part, a consequence of the fact that we don’t really know how to do it. Some practitioners are naturally good at it, but many are not. Contrast this with, say, orthopedic surgery. We wouldn’t just hire bored teenagers who need a bit of spare cash and put them to work as orthopedic surgeons at five dollars an hour plus all they can eat from the fridge. We have put centuries of research into understanding the human body and how to fix it when it’s broken, and practitioners must undergo years of training to learn all this knowledge and the skills necessary to apply it. As a result, orthopedic surgeons are highly paid and highly respected. They are highly paid not just because they know a lot and have a lot of training but also because all that knowledge and training actually works. It enables them to add a great deal of value to other people’s lives—especially people with broken bits.
Unfortunately, our scientific understanding of the mind is shockingly weak and our scientific understanding of happiness and fulfillment is even weaker. We simply don’t know how to add value to each other’s lives in consistent, predictable ways. We have had moderate success with certain psychiatric disorders, but we are still fighting a Hundred Years’ Literacy War over something as basic as teaching children to read.³² We need a radical rethinking of our educational system and our scientific enterprise to focus more attention on the human rather than the physical world. (Joseph Aoun, president of Northeastern University, argues that universities should be teaching and studying “humanics.”³³) It sounds odd to say that happiness should be an engineering discipline, but that seems to be the inevitable conclusion. Such a discipline would build on basic science—a better understanding of how human minds work at the cognitive and emotional levels—and would train a wide variety of practitioners, ranging from life architects, who help individuals plan the overall shape of their life trajectories, to professional experts in topics such as curiosity enhancement and personal resilience. If based on real science, these professions need be no more woo-woo than bridge designers and orthopedic surgeons are today.
Reworking our education and research institutions to create this basic science and to convert it into training programs and credentialed professions will take decades, so it’s a good idea to start now and a pity we didn’t start long ago. The final result—if it works—would be a world well worth living in. Without such a rethinking, we risk an unsustainable level of socioeconomic dislocation.
Usurping Other Human Roles
We should think twice before allowing machines to take over roles involving interpersonal services. If being human is our main selling point to other humans, so to speak, then making imitation humans seems like a bad idea. Fortunately for us, we have a distinct advantage over machines when it comes to knowing how other humans feel and how they will react. Nearly every human knows what it’s like to hit one’s thumb with a hammer or to feel unrequited love.
Counteracting this natural human advantage is a natural human disadvantage: the tendency to be fooled by appearances—especially human appearances. Alan Turing warned against making robots resemble humans:³⁴
I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual, characteristics such as the shape of the human body; it appears to me quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers.
Unfortunately, Turing’s warning has gone unheeded. Several research groups have produced eerily lifelike robots, as shown in figure 10.
As research tools, the robots may provide insights into how humans interpret robot behavior and communication. As prototypes for future commercial products, they represent a form of dishonesty. They bypass our conscious awareness and appeal directly to our emotional selves, perhaps convincing us that they are endowed with real intelligence. Imagine, for example, how much easier it would be to switch off and recycle a squat, gray box that was malfunctioning—even if it was squawking about not wanting to be switched off—than it would be to do the same for JiaJia or Geminoid DK. Imagine also how confusing and perhaps psychologically disturbing it would be for babies and small children to be cared for by entities that appear to be human, like their parents, but are somehow not; that appear to care about them, like their parents, but in fact do not.
FIGURE 10: (left) JiaJia, a robot built at the University of Science and Technology of China; (right) Geminoid DK, a robot designed by Hiroshi Ishiguro at Osaka University in Japan and modeled on Henrik Schärfe of Aalborg University in Denmark.
Beyond a basic capability to convey nonverbal information via facial expression and movement—which even Bugs Bunny manages to do with ease—there is no good reason for robots to have humanoid form. There are also good, practical reasons not to have humanoid form—for example, our bipedal stance is relatively unstable compared to quadrupedal locomotion. Dogs, cats, and horses fit into our lives well, and their physical form is a very good clue as to how they are likely to behave. (Imagine if a horse suddenly started behaving like a dog!) The same should be true of robots. Perhaps a four-legged, two-armed, centaur-like morphology would be a good standard. An accurately humanoid robot makes as much sense as a Ferrari with a top speed of five miles per hour or a “raspberry” ice-cream cone made from beetroot-tinted cream of chopped liver.
The humanoid aspect of some robots has already contributed to political as well as emotional confusion. On October 25, 2017, Saudi Arabia granted citizenship to Sophia, a humanoid robot that has been described as little more than “a chatbot with a face”³⁵ and worse.³⁶ Perhaps this was a public relations stunt, but a proposal emanating from the European Parliament’s Committee on Legal Affairs is entirely serious.³⁷ It recommends
creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause.
In other words, the robot itself would be legally responsible for damage, rather than the owner or manufacturer. This implies that robots will own financial assets and be subject to sanctions if they do not comply. Taken literally, this does not make sense. For example, if we were to imprison the robot for nonpayment, why would it care?
In addition to the needless and even absurd elevation of the status of robots, there is a danger that the increased use of machines in decisions affecting people will degrade the status and dignity of humans. This possibility is illustrated perfectly in a scene from the science-fiction movie Elysium, when Max (Matt Damon) pleads his case before his “parole officer” (figure 11) to explain why the extension of his sentence is unjustified. Needless to say, Max is unsuccessful. The parole officer even chides him for failing to display a suitably deferential attitude.
FIGURE 11: Max (Matt Damon) meets his parole officer in Elysium.
One can think of such an assault on human dignity in two ways. The first is obvious: by giving machines authority over humans, we relegate ourselves to a second-class status and lose the right to participate in decisions that affect us. (A more extreme form of this is giving machines the authority to kill humans, as discussed earlier in the chapter.) The second is indirect: even if you believe it is not the machines making the decision but those humans who designed and commissioned the machines, the fact that those human designers and commissioners do not consider it worthwhile to weigh the individual circumstances of each human subject in such cases suggests that they attach little value to the lives of others. This is perhaps a symptom of the beginning of a great separation between an elite served by humans and a vast underclass served, and controlled, by machines.
In the EU, Article 22 of the 2018 General Data Protection Regulation, or GDPR, explicitly forbids the granting of authority to machines in such cases:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
Although this sounds admirable in principle, it remains to be seen—at least at the time of writing—how much impact this will have in practice. It is often so much easier, faster, and cheaper to leave the decisions to the machine.
One reason for all the concern about automated decisions is the potential for algorithmic bias—the tendency of machine learning algorithms to produce inappropriately biased decisions about loans, housing, jobs, insurance, parole, sentencing, college admission, and so on. The explicit use of criteria such as race in these decisions has been illegal for decades in many countries and is prohibited by Article 9 of the GDPR for a very wide range of applications. That does not mean, of course, that by excluding race from the data we necessarily get racially unbiased decisions. For example, beginning in the 1930s, the government-sanctioned practice of redlining caused certain zip codes in the United States to be off-limits for mortgage lending and other forms of investment, leading to declining real-estate values. It just so happened that those zip codes were largely populated by African Americans.
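A tiny numerical example may make the proxy effect concrete. Everything below is invented for illustration: the zip codes, default rates, and group labels are hypothetical, but they show how a rule that never mentions race can still produce very different approval rates by group when a permitted feature is correlated with it.

```python
# Toy illustration of proxy leakage: the rule sees only zip codes,
# but zip codes are correlated with group membership.

historical_default_rate_by_zip = {       # the only feature the rule may use
    "94100": 0.05,                       # in this toy world, mostly group X lives here
    "94600": 0.25,                       # in this toy world, mostly group Y lives here
}
applicants = [
    {"zip": "94100", "group": "X"}, {"zip": "94100", "group": "X"},
    {"zip": "94600", "group": "Y"}, {"zip": "94600", "group": "Y"},
    {"zip": "94600", "group": "X"},
]

def approve(applicant, max_default_rate=0.10):
    # The rule never looks at group membership, only at the zip-level statistic.
    return historical_default_rate_by_zip[applicant["zip"]] <= max_default_rate

for group in ("X", "Y"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"group {group}: approval rate {rate:.0%}")
```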
To prevent redlining, now only the first three digits of the five-digit zip code can be used in making credit decisions. In addition, the decision process must be amenable to inspection, to ensure no other “accidental” biases are creeping in. The EU’s GDPR is often said to provide a general “right to an explanation” for any automated decision,³⁸ but the actual language of Article 14 merely requires
meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
At present, it is unknown how courts will enforce this clause. It’s possible that the hapless consumer will just be handed a description of the particular deep learning algorithm used to train the classifier that made the decision.
Nowadays, the likely causes of algorithmic bias lie in the data rather than in the deliberate malfeasance of corporations. In 2015, Glamour magazine reported a disappointing finding: “The first female Google image search result for ‘CEO’ appears TWELVE rows down—and it’s Barbie.” (There were some actual women in the 2018 results, but most of them were models portraying CEOs in generic stock photos, rather than actual female CEOs; the 2019 results are somewhat better.) This is a consequence not of deliberate gender bias in Google’s image search ranking but of preexisting bias in the culture that produces the data: there are far more male than female CEOs, and when people want to depict an “archetypal” CEO in a captioned image, they almost always pick a male figure. The fact that the bias lies primarily in the data does not, of course, mean that there is no obligation to take steps to counteract the problem.
There are other, more technical reasons why the naïve application of machine learning methods can produce biased outcomes. For example, minorities are, by definition, less well represented in population-wide data samples; hence, predictions for individual members of minorities may be less accurate if such predictions are made largely on the basis of data from other members of the same group. Fortunately, a good deal of attention has been paid to the problem of removing inadvertent bias from machine learning algorithms, and there are now methods that produce unbiased results according to several plausible and desirable definitions of fairness.³⁹ The mathematical analysis of these definitions of fairness shows that they cannot be achieved simultaneously and that, when enforced, they result in lower prediction accuracy and, in the case of lending decisions, lower profit for the lender. This is perhaps disappointing, but at least it makes clear the trade-offs involved in avoiding algorithmic bias. One hopes that awareness of these methods and of the issue itself will spread quickly among policy makers, practitioners, and users.
If handing authority over individual humans to machines is sometimes problematic, what about authority over lots of humans? That is, should we put machines in political and management roles? At present this may seem far-fetched. Machines cannot sustain an extended conversation and lack the basic understanding of the factors that are relevant to making decisions with broad scope, such as whether to raise the minimum wage or to reject a merger proposal from another corporation. The trend, however, is clear: machines are making decisions at higher and higher levels of authority in many areas. Take airlines, for example. First, computers helped in the construction of flight schedules. Soon, they took over allocation of flight crews, the booking of seats, and the management of routine maintenance. Next, they were connected to global information networks to provide real-time status reports to airline managers, so that managers could cope with disruption effectively. Now they are taking over the job of managing disruption: rerouting planes, rescheduling staff, rebooking passengers, and revising maintenance schedules.
This is all to the good from the point of view of airline economics and passenger experience. The question is whether the computer system remains a tool of humans, or humans become tools of the computer system—supplying information and fixing bugs when necessary, but no longer understanding in any depth how the whole thing is working. The answer becomes clear when the system goes down and global chaos ensues until it can be brought back online. For example, a single “computer glitch” on April 3, 2018, caused fifteen thousand flights in Europe to be significantly delayed or canceled.⁴⁰ When trading algorithms caused the 2010 “flash crash” on the New York Stock Exchange, wiping out $1 trillion in a few minutes, the only solution was to shut down the exchange. What happened is still not well understood.
Before there was any technology, human beings lived, like most animals, hand to mouth. We stood directly on the ground, so to speak. Technology gradually raised us up on a pyramid of machinery, increasing our footprint as individuals and as a species. There are different ways we can design the relationship between humans and machines. If we design it so that humans retain sufficient understanding, authority, and autonomy, the technological parts of the system can greatly magnify human capabilities, allowing each of us to stand on a vast pyramid of capabilities—a demigod, if you like. But consider the worker in an online-shopping fulfillment warehouse. She is more productive than her predecessors because she has a small army of robots bringing her storage bins to pick items from; but she is a part of a larger system controlled by intelligent algorithms that decide where she should stand and which items she should pick and dispatch. She is already partly buried in the pyramid, not standing on top of it. It’s only a matter of time before the sand fills the spaces in the pyramid and her role is eliminated.
5
OVERLY INTELLIGENT AI
The Gorilla Problem
It doesn’t require much imagination to see that making something smarter than yourself could be a bad idea. We understand that our control over our environment and over other species is a result of our intelligence, so the thought of something else being more intelligent than us—whether it’s a robot or an alien—immediately induces a queasy feeling.
Around ten million years ago, the ancestors of the modern gorilla created (accidentally, to be sure) the genetic lineage leading to modern humans. How do the gorillas feel about this? Clearly, if they were able to tell us about their species’ current situation vis-à-vis humans, the consensus opinion would be very negative indeed. Their species has essentially no future beyond that which we deign to allow. We do not want to be in a similar situation vis-à-vis superintelligent machines. I’ll call this the gorilla problem—specifically, the problem of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence.
Charles Babbage and Ada Lovelace, who designed and wrote programs for the Analytical Engine in 1842, were aware of its potential but seemed to have no qualms about it.¹ In 1847, however, Richard Thornton, editor of the Primitive Expounder, a religious journal, railed against mechanical calculators:²
Mind . . . outruns itself and does away with the necessity of its own existence by inventing machines to do its own thinking. . . . But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!
This is perhaps the first speculation concerning existential risk from computing devices, but it remained in obscurity.
In contrast, Samuel Butler’s novel Erewhon, published in 1872, developed the theme in far greater depth and achieved immediate success. Erewhon is a country in which all mechanical devices have been banned after a terrible civil war between the machinists and anti-machinists. One part of the book, called “The Book of the Machines,” explains the origins of this war and presents the arguments of both sides.³ It is eerily prescient of the debate that has re-emerged in the early years of the twenty-first century.
The anti-machinists’ main argument is that machines will advance to the point where humanity loses control:
Are we not ourselves creating our successors in the supremacy of the earth? Daily adding to the beauty and delicacy of their organization, daily giving them greater skill and supplying more and more of that self-regulating self-acting power which will be better than any intellect? . . . In the course of ages we shall find ourselves the inferior race. . . .
We must choose between the alternative of undergoing much present suffering, or seeing ourselves gradually superseded by our own creatures, till we rank no higher in comparison with them, than the beasts of the field with ourselves. . . . Our bondage will steal upon us noiselessly and by imperceptible approaches.
The narrator also relates the pro-machinists’ principal counterargument, which anticipates the man–machine symbiosis argument that we will explore in the next chapter:
There was only one serious attempt to answer it. Its author said that machines were to be regarded as a part of man’s own physical nature, being really nothing but extra-corporeal limbs.
Although the anti-machinists in Erewhon win the argument, Butler himself appears to be of two minds. On the one hand, he complains that “Erewhonians are . . . quick to offer up common sense at the shrine of logic, when a philosopher arises among them, who carries them away through his reputation for especial learning” and says, “They cut their throats in the matter of machinery.” On the other hand, the Erewhonian society he describes is remarkably harmonious, productive, and even idyllic. The Erewhonians fully accept the folly of re-embarking on the course of mechanical invention, and regard those remnants of machinery kept in museums “with the feelings of an English antiquarian concerning Druidical monuments or flint arrow heads.”
Butler’s story was evidently known to Alan Turing, who considered the long-term future of AI in a lecture given in Manchester in 1951:⁴
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.
In the same year, Turing repeated these concerns in a radio lecture broadcast throughout the UK on the BBC Third Programme:
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. . . . This new danger . . . is certainly something which can give us anxiety.
When the Erewhonian anti-machinists “feel seriously uneasy about the future,” they see it as their “duty to check the evil while we can still do so,” and they destroy all the machines. Turing’s response to the “new danger” and “anxiety” is to consider “turning off the power” (although it will be clear shortly that this is not really an option). In Frank Herbert’s classic science-fiction novel Dune, set in the far future, humanity has barely survived the Butlerian Jihad, a cataclysmic war with the “thinking machines.” A new commandment has emerged: “Thou shalt not make a machine in the likeness of a human mind.” This commandment precludes computing devices of any kind.
All these drastic responses reflect the inchoate fears that machine intelligence evokes. Yes, the prospect of superintelligent machines does make one uneasy. Yes, it is logically possible that such machines could take over the world and subjugate or eliminate the human race. If that is all one has to go on, then indeed the only plausible response available to us, at the present time, is to attempt to curtail artificial intelligence research—specifically, to ban the development and deployment of general-purpose, human-level AI systems.
Like most other AI researchers, I recoil at this prospect. How dare anyone tell me what I can and cannot think about? Anyone proposing an end to AI research is going to have to do a lot of convincing. Ending AI research would mean forgoing not just one of the principal avenues for understanding how human intelligence works but also a golden opportunity to improve the human condition—to make a far better civilization. The economic value of human-level AI is measurable in the thousands of trillions of dollars, so the momentum behind AI research from corporations and governments is likely to be enormous. It will overwhelm the vague objections of a philosopher, no matter how great his or her “reputation for especial learning,” as Butler puts it.
A second drawback to the idea of banning general-purpose AI is that it’s a difficult thing to ban. Progress on general-purpose AI occurs primarily on the whiteboards of research labs around the world, as mathematical problems are posed and solved. We don’t know in advance which ideas and equations to ban, and, even if we did, it doesn’t seem reasonable to expect that such a ban could be enforceable or effective.
To compound the difficulty still further, researchers making progress on general-purpose AI are often working on something else. As I have already argued, research on tool AI—those specific, innocuous applications such as game playing, medical diagnosis, and travel planning—often leads to progress on general-purpose techniques that are applicable to a wide range of other problems and move us closer to human-level AI.
For these reasons, it’s very unlikely that the AI community—or the governments and corporations that control the laws and research budgets—will respond to the gorilla problem by ending progress in AI. If the gorilla problem can be solved only in this way, it isn’t going to be solved.
The only approach that seems likely to work is to understand why it is that making better AI might be a bad thing. It turns out that we have known the answer for thousands of years.
The King Midas Problem
Norbert Wiener, whom we met in Chapter 1, had a profound impact on many fields, including artificial intelligence, cognitive science, and control theory. Unlike most of his contemporaries, he was particularly concerned with the unpredictability of complex systems operating in the real world. (He wrote his first paper on this topic at the age of ten.) He became convinced that the overconfidence of scientists and engineers in their ability to control their creations, whether military or civilian, could have disastrous consequences.
In 1950, Wiener published The Human Use of Human Beings,⁵ whose front-cover blurb reads, “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”⁶ He gradually refined his ideas over time and by 1960 had identified one core issue: the impossibility of defining true human purposes correctly and completely. This, in turn, means that what I have called the standard model—whereby humans attempt to imbue machines with their own purposes—is destined to fail.
We might call this the King Midas problem: Midas, a legendary king in ancient Greek mythology, got exactly what he asked for—namely, that everything he touched should turn to gold. Too late, he discovered that this included his food, his drink, and his family members, and he died in misery and starvation. The same theme is ubiquitous in human mythology. Wiener cites Goethe’s tale of the sorcerer’s apprentice, who instructs the broom to fetch water—but doesn’t say how much water and doesn’t know how to make the broom stop.
A technical way of saying this is that we may suffer from a failure of value alignment—we may, perhaps inadvertently, imbue machines with objectives that are imperfectly aligned with our own. Until recently, we were shielded from the potentially catastrophic consequences by the limited capabilities of intelligent machines and the limited scope that they have to affect the world. (Indeed, most AI work was done with toy problems in research labs.) As Norbert Wiener put it in his 1964 book God and Golem,⁷
In the past, a partial and inadequate view of human purpose has been relatively innocuous only because it has been accompanied by technical limitations. . . . This is only one of the many places where human impotence has shielded us from the full destructive impact of human folly.
Unfortunately, this period of shielding is rapidly coming to an end.
We have already seen how content-selection algorithms on social media wrought havoc on society in the name of maximizing ad revenues. In case you are thinking to yourself that ad revenue maximization was already an ignoble goal that should never have been pursued, let’s suppose instead that we ask some future superintelligent system to pursue the noble goal of finding a cure for cancer—ideally as quickly as possible, because someone dies from cancer every 3.5 seconds. Within hours, the AI system has read the entire biomedical literature and hypothesized millions of potentially effective but previously untested chemical compounds. Within weeks, it has induced multiple tumors of different kinds in every living human being so as to carry out medical trials of these compounds, this being the fastest way to find a cure. Oops.
If you prefer solving environmental problems, you might ask the machine to counter the rapid acidification of the oceans that results from higher carbon dioxide levels. The machine develops a new catalyst that facilitates an incredibly rapid chemical reaction between ocean and atmosphere and restores the oceans’ pH levels. Unfortunately, a quarter of the oxygen in the atmosphere is used up in the process, leaving us to asphyxiate slowly and painfully. Oops.
These kinds of world-ending scenarios are unsubtle—as one might expect, perhaps, for world-ending scenarios. But there are many scenarios in which a kind of mental asphyxiation “steals upon us noiselessly and by imperceptible approaches.” The prologue to Max Tegmark’s Life 3.0 describes in some detail a scenario in which a superintelligent machine gradually assumes economic and political control over the entire world while remaining essentially undetected. The Internet and the global-scale machines that it supports—the ones that already interact with billions of “users” on a daily basis—provide the perfect medium for the growth of machine control over humans.
I don’t expect that the purpose put into such machines will be of the “take over the world” variety. It is more likely to be profit maximization or engagement maximization or, perhaps, even an apparently benign goal such as achieving higher scores on regular user happiness surveys or reducing our energy usage. Now, if we think of ourselves as entities whose actions are expected to achieve our objectives, there are two ways to change our behavior. The first is the old-fashioned way: leave our expectations and objectives unchanged, but change our circumstances—for example, by offering money, pointing a gun at us, or starving us into submission. That tends to be expensive and difficult for a computer to do. The second way is to change our expectations and objectives. This is much easier for a machine. It is in contact with you for hours every day, controls your access to information, and provides much of your entertainment through games, TV, movies, and social interaction.
The reinforcement learning algorithms that optimize social-media click-through have no capacity to reason about human behavior—in fact, they do not even know in any meaningful sense that humans exist. For machines with much greater understanding of human psychology, beliefs, and motivations, it should be relatively easy to gradually guide us in directions that increase the degree of satisfaction of the machine’s objectives. For example, it might reduce our energy consumption by persuading us to have fewer children, eventually—and inadvertently—achieving the dreams of anti-natalist philosophers who wish to eliminate the noxious impact of humanity on the natural world.
With a bit of practice, you can learn to identify ways in which the achievement of more or less any fixed objective can result in arbitrarily bad outcomes. One of the most common patterns involves omitting something from the objective that you do actually care about. In such cases—as in the examples given above—the AI system will often find an optimal solution that sets the thing you do care about, but forgot to mention, to an extreme value. So, if you say to your self-driving car, “Take me to the airport as fast as possible!” and it interprets this literally, it will reach speeds of 180 miles per hour and you’ll go to prison. (Fortunately, the self-driving cars currently contemplated won’t accept such a request.) If you say, “Take me to the airport as fast as possible while not exceeding the speed limit,” it will accelerate and brake as hard as possible, swerving in and out of traffic to maintain the maximum speed in between. It may even push other cars out of the way to gain a few seconds in the scrum at the airport terminal. And so on—eventually, you will add enough considerations so that the car’s driving roughly approximates that of a skilled human driver taking someone to the airport in a bit of a hurry.
Driving is a simple task with only local impacts, and the AI systems currently being built for driving are not very intelligent. For these reasons, many of the potential failure modes can be anticipated; others will reveal themselves in driving simulators or in millions of miles of testing with professional drivers ready to take over if something goes wrong; still others will appear only later, when the cars are already on the road and something weird happens.
Unfortunately, with superintelligent systems that can have a global impact, there are no simulators and no do-overs. It’s certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective. Generally speaking, if you have one goal and a superintelligent machine has a different, conflicting goal, the machine gets what it wants and you don’t.
Fear and Greed: Instrumental Goals
If a machine pursuing an incorrect objective sounds bad enough, there’s worse. The solution suggested by Alan Turing—turning off the power at strategic moments—may not be available, for a very simple reason: you can’t fetch the coffee if you’re dead.
Let me explain. Suppose a machine has the objective of fetching the coffee. If it is sufficiently intelligent, it will certainly understand that it will fail in its objective if it is switched off before completing its mission. Thus, the objective of fetching coffee creates, as a necessary subgoal, the objective of disabling the off-switch. The same is true for curing cancer or calculating the digits of pi. There’s really not a lot you can do once you’re dead, so we can expect AI systems to act preemptively to preserve their own existence, given more or less any definite objective.
If that objective is in conflict with human preferences, then we have exactly the plot of 2001: A Space Odyssey, in which the HAL 9000 computer kills four of the five astronauts on board the ship to prevent interference with its mission. Dave, the last remaining astronaut, manages to switch HAL off after an epic battle of wits—presumably to keep the plot interesting. But if HAL had been truly superintelligent, Dave would have been switched off.
It is important to understand that self-preservation doesn’t have to be any sort of built-in instinct or prime directive in machines. (So Isaac Asimov’s Third Law of Robotics,⁸ which begins “A robot must protect its own existence,” is completely unnecessary.) There is no need to build self-preservation in because it is an instrumental goal—a goal that is a useful subgoal of almost any original objective.⁹ Any entity that has a definite objective will automatically act as if it also has instrumental goals.
In addition to being alive, having access to money is an instrumental goal within our current system. Thus, an intelligent machine might want money, not because it’s greedy but because money is useful for achieving all sorts of goals. In the movie Transcendence, when Johnny Depp’s brain is uploaded into the quantum supercomputer, the first thing the machine does is copy itself onto millions of other computers on the Internet so that it cannot be switched off. The second thing it does is make a quick killing on the stock market to fund its expansion plans.
And what, exactly, are those expansion plans? They include designing and building a much larger quantum supercomputer; doing AI research; and discovering new knowledge of physics, neuroscience, and biology. These resource objectives—computing power, algorithms, and knowledge—are also instrumental goals, useful for achieving any overarching objective.¹⁰ They seem harmless enough until one realizes that the acquisition process will continue without limit. This seems to create inevitable conflict with humans. And of course, the machine, equipped with ever-better models of human decision making, will anticipate and defeat our every move in this conflict.
Intelligence Explosions
I. J. Good was a brilliant mathematician who worked with Alan Turing at Bletchley Park, breaking German codes during World War II. He shared Turing’s interests in machine intelligence and statistical inference. In 1965, he wrote what is now his best-known paper, “Speculations Concerning the First Ultraintelligent Machine.”¹¹ The first sentence suggests that Good, alarmed by the nuclear brinkmanship of the Cold War, regarded AI as a possible savior for humanity: “The survival of man depends on the early construction of an ultraintelligent machine.” As the paper proceeds, however, he becomes more circumspect. He introduces the notion of an intelligence explosion, but, like Butler, Turing, and Wiener before him, he worries about losing control:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside science fiction.
This paragraph is a staple of any discussion of superintelligent AI, although the caveats at the end are usually left out. Good’s point can be strengthened by noting that not only could the ultraintelligent machine improve its own design; it’s likely that it would do so because, as we have seen, an intelligent machine expects to benefit from improving its hardware and software. The possibility of an intelligence explosion is often cited as the main source of risk to humanity from AI because it would give us so little time to solve the control problem.¹²
Good’s argument certainly has plausibility via the natural analogy to a chemical explosion in which each molecular reaction releases enough energy to initiate more than one additional reaction. On the other hand, it is logically possible that there are diminishing returns to intelligence improvements, so that the process peters out rather than exploding.¹³ There’s no obvious way to prove that an explosion will necessarily occur.
The diminishing-returns scenario is interesting in its own right. It could arise if it turns out that achieving a given percentage improvement becomes much harder as the machine becomes more intelligent. (I’m assuming for the sake of argument that general-purpose machine intelligence is measurable on some kind of linear scale, which I doubt will ever be strictly true.) In that case, humans won’t be able to create superintelligence either. If a machine that is already superhuman runs out of steam when trying to improve its own intelligence, then humans will run out of steam even sooner.
Now, I’ve never heard a serious argument to the effect that creating any given level of machine intelligence is simply beyond the capacity of human ingenuity, but I suppose one must concede it’s logically possible. “Logically possible” and “I’m willing to bet the future of the human race on it” are, of course, two completely different things. Betting against human ingenuity seems like a losing strategy.
If an intelligence explosion does occur, and if we have not already solved the problem of controlling machines with only slightly superhuman intelligence—for example, if we cannot prevent them from making these recursive self-improvements—then we would have no time left to solve the control problem and the game would be over. This is Bostrom’s hard takeoff scenario, in which the machine’s intelligence increases astronomically in just days or weeks. In Turing’s words, it is “certainly something which can give us anxiety.”
The possible responses to this anxiety seem to be to retreat from AI research, to deny that there are risks inherent in developing advanced AI, to understand and mitigate the risks through the design of AI systems that necessarily remain under human control, and to resign—simply to cede the future to intelligent machines.
Denial and mitigation are the subjects of the remainder of the book. As I have already argued, retreat from AI research is both unlikely to happen (because the benefits forgone are too great) and very difficult to bring about. Resignation seems to be the worst possible response. It is often accompanied by the idea that AI systems that are more intelligent than us somehow deserve to inherit the planet, leaving humans to go gentle into that good night, comforted by the thought that our brilliant electronic progeny are busy pursuing their objectives. This view was promulgated by the roboticist and futurist Hans Moravec,¹⁴ who writes, “The immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria.” This seems to be a mistake. Value, for humans, is defined primarily by conscious human experience. If there are no humans and no other conscious entities whose subjective experience matters to us, there is nothing of value occurring.
6
THE NOT-SO-GREAT AI DEBATE
“The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”¹ So ended The Economist magazine’s review of Nick Bostrom’s Superintelligence. Most would interpret this as a classic example of British understatement. Surely, you might think, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware.
When one first introduces these ideas to a technical audience, one can see the thought bubbles popping out of their heads, beginning with the words “But, but, but . . .” and ending with exclamation marks.
The first kind of but takes the form of denial. The deniers say, “But this can’t be a real problem, because XYZ.” Some of the XYZs reflect a reasoning process that might charitably be described as wishful thinking, while others are more substantial. The second kind of but takes the form of deflection: accepting that the problems are real but arguing that we shouldn’t try to solve them, either because they’re unsolvable or because there are more important things to focus on than the end of civilization or because it’s best not to mention them at all. The third kind of but takes the form of an oversimplified, instant solution: “But can’t we just do ABC?” As with denial, some of the ABCs are instantly regrettable. Others, perhaps by accident, come closer to identifying the true nature of the problem.
I don’t mean to suggest that there cannot be any reasonable objections to the view that poorly designed superintelligent machines would present a serious risk to humanity. It’s just that I have yet to see such an objection. Since the issue seems to be so important, it deserves a public debate of the highest quality. So, in the interests of having that debate, and in the hope that the reader will contribute to it, let me provide a quick tour of the highlights so far, such as they are.
Denial
Denying that the problem exists at all is the easiest way out. Scott Alexander, author of the Slate Star Codex blog, began a well-known article on AI risk as follows:² “I first became interested in AI risk back around 2007. At the time, most people’s response to the topic was ‘Haha, come back when anyone believes this besides random Internet crackpots.’”
Instantly regrettable remarks
A perceived threat to one’s lifelong vocation can lead a perfectly intelligent and usually thoughtful person to say things they might wish to retract on further analysis. That being the case, I will not name the authors of the following arguments, all of whom are well-known AI researchers. I’ve included refutations of the arguments, even though they are quite unnecessary.
- Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.
- Refutation: intelligence is not the same as arithmetic, and the arithmetic ability of calculators does not equip them to take over the world.
- Horses have superhuman strength, and we don’t worry about proving that horses are safe; so we needn’t worry about proving that AI systems are safe.
- Refutation: intelligence is not the same as physical strength, and the strength of horses does not equip them to take over the world.
- Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.
- Refutation: there’s a first time for everything, before which there were zero examples of it happening.
- No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.
- Refutation: superintelligence doesn’t need to be infinite to be problematic; and physics allows computing devices billions of times more powerful than the human brain.
- We don’t worry about species-ending but highly unlikely possibilities such as black holes materializing in near-Earth orbit, so why worry about superintelligent AI?
- Refutation: if most physicists on Earth were working to make such black holes, wouldn’t we ask them if it was safe?
It’s complicated
It is a staple of modern psychology that a single IQ number cannot characterize the full richness of human intelligence.³ There are, the theory says, different dimensions of intelligence: spatial, logical, linguistic, social, and so on. Alice, our soccer player from Chapter 2, might have more spatial intelligence than her friend Bob, but less social intelligence. Thus, we cannot line up all humans in strict order of intelligence.
This is even more true of machines, because their abilities are much narrower. The Google search engine and AlphaGo have almost nothing in common, besides being products of two subsidiaries of the same parent corporation, and so it makes no sense to say that one is more intelligent than the other. This makes notions of “machine IQ” problematic and suggests that it’s misleading to describe the future as a one-dimensional IQ race between humans and machines.
Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,”⁴ he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.
Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument. Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits.⁵ Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept. This is cold comfort to the chimpanzees (and bonobos, gorillas, orangutans, whales, dolphins, and so on) whose species survive only because we deign to allow it. It is colder comfort still to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.
It’s impossible
Even before the birth of AI in 1956, august intellectuals were harrumphing and saying that intelligent machines were impossible. Alan Turing devoted much of his seminal 1950 paper, “Computing Machinery and Intelligence,” to refuting these arguments. Ever since, the AI community has been fending off similar claims of impossibility from philosophers,⁶ mathematicians,⁷ and others. In the current debate over superintelligence, several philosophers have exhumed these impossibility claims to prove that humanity has nothing to fear.⁸,⁹ This comes as no surprise.
The One Hundred Year Study on Artificial Intelligence, or AI100, is an ambitious, long-term project housed at Stanford University. Its goal is to keep track of AI, or, more precisely, to “study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” Its first major report, “Artificial Intelligence and Life in 2030,” does come as a surprise.¹⁰ As might be expected, it emphasizes the benefits of AI in areas such as medical diagnosis and automotive safety. What’s unexpected is the claim that “unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: they’ve always known that there will never be a cure for cancer.
What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect there are two reasons. The first is the natural desire to disprove the existence of the gorilla problem, which presents a very uncomfortable prospect for the AI researcher; certainly, if human-level AI is impossible, the gorilla problem is neatly dispatched. The second reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI.
It seems odd to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.
We have made such bets before and lost. As we saw earlier, the physics establishment of the early 1930s, personified by Lord Rutherford, confidently believed that extracting atomic energy was impossible; yet Leo Szilard’s invention of the neutron-induced nuclear chain reaction in 1933 proved that confidence to be misplaced.
Szilard’s breakthrough came at an unfortunate time: the beginning of an arms race with Nazi Germany. There was no possibility of developing nuclear technology for the greater good. A few years later, having demonstrated a nuclear chain reaction in his laboratory, Szilard wrote, “We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.”
It’s too soon to worry about it
It’s common to see sober-minded people seeking to assuage public concerns by pointing out that because human-level AI is not likely to arrive for several decades, there is nothing to worry about. For example, the AI100 report says there is “no cause for concern that AI is an imminent threat to humankind.”
This argument fails on two counts. The first is that it attacks a straw man. The reasons for concern are not predicated on imminence. For example, Nick Bostrom writes in Superintelligence, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.” The second is that a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.
For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we say it’s too soon to worry? Quite the opposite! There would be a worldwide emergency project to develop the means to counter the threat. We wouldn’t wait until 2068 to start working on a solution, because we can’t say in advance how much time is needed. Indeed, NASA’s Planetary Defense project is already working on possible solutions, even though “no known asteroid poses a significant risk of impact with Earth over the next 100 years.” In case that makes you feel complacent, they also say, “About 74 percent of near-Earth objects larger than 460 feet still remain to be discovered.”
And if we consider the global catastrophic risks from climate change, which are predicted to occur later in this century, is it too soon to take action to prevent them? On the contrary, it may be too late. The relevant time scale for superhuman AI is less predictable, but of course that means it, like nuclear fission, might arrive considerably sooner than expected.
One formulation of the “it’s too soon to worry” argument that has gained currency is Andrew Ng’s assertion that “it’s like worrying about overpopulation on Mars.”¹¹ (He later upgraded this from Mars to Alpha Centauri.) Ng, a former Stanford professor, is a leading expert on machine learning, and his views carry some weight. The assertion appeals to a convenient analogy: not only is the risk easily managed and far in the future but also it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever-more-capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be working on a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise. Alternatively, one could take Ng’s point literally, and respond that landing even a single person on Mars would constitute overpopulation, because Mars has a carrying capacity of zero. Thus, groups that are currently planning to send a handful of humans to Mars are worrying about overpopulation on Mars, which is why they are developing life-support systems.
We’re the experts
In every discussion of technological risk, the pro-technology camp wheels out the claim that all concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI and a noted researcher in machine learning and natural language understanding:¹²
At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.
Popular Science published an article titled “Bill Gates Fears AI, but AI Researchers Know Better”:¹³
When you talk to A.I. researchers—again, genuine A.I. researchers, people who grapple with making systems that work at all, much less work too well—they are not worried about superintelligence sneaking up on them, now or in the future. Contrary to the spooky stories that Musk seems intent on telling, A.I. researchers aren’t frantically installing firewalled summoning chambers and self-destruct countdowns.
This analysis was based on a sample of four, all of whom in fact said in their interviews that the long-term safety of AI was an important issue.
Using very similar language to the Popular Science article, David Kenny, at that time a vice president at IBM, wrote a letter to the US Congress that included the following reassuring words:¹⁴
When you actually do the science of machine intelligence, and when you actually apply it in the real world of business and society—as we have done at IBM to create our pioneering cognitive computing system, Watson—you understand that this technology does not support the fear-mongering commonly associated with the AI debate today.
The message is the same in all three cases: “Don’t listen to them; we’re the experts.” Now, one can point out that this is really an ad hominem argument that attempts to refute the message by delegitimizing the messengers, but even if one takes it at face value, the argument doesn’t hold water. Elon Musk, Stephen Hawking, and Bill Gates are certainly very familiar with scientific and technological reasoning, and Musk and Gates in particular have supervised and invested in many AI research projects. And it would be even less plausible to argue that Alan Turing, I. J. Good, Norbert Wiener, and Marvin Minsky are unqualified to discuss AI. Finally, Scott Alexander’s blog piece mentioned earlier, which is titled “AI Researchers on AI Risk,” notes that “AI researchers, including some of the leaders in the field, have been instrumental in raising issues about AI risk and superintelligence from the very beginning.” He lists several such researchers, and the list is now much longer.
Another standard rhetorical move for the “defenders of AI” is to describe their opponents as Luddites. Oren Etzioni’s reference to “weavers throwing their shoes in the mechanical looms” is just this: the Luddites were artisan weavers in the early nineteenth century protesting the introduction of machinery to replace their skilled labor. In 2015, the Information Technology and Innovation Foundation gave its annual Luddite Award to “alarmists touting an artificial intelligence apocalypse.” It’s an odd definition of “Luddite” that includes Turing, Wiener, Minsky, Musk, and Gates, who rank among the most prominent contributors to technological progress in the twentieth and twenty-first centuries.
The accusation of Luddism represents a misunderstanding of the nature of the concerns raised and the purpose for raising them. It is as if one were to accuse nuclear engineers of Luddism if they point out the need for control of the fission reaction. As with the strange phenomenon of AI researchers suddenly claiming that AI is impossible, I think we can attribute this puzzling episode to tribalism in defense of technological progress.
Deflection
Some commentators are willing to accept that the risks are real, but still present arguments for doing nothing. These arguments include the impossibility of doing anything, the importance of doing something else entirely, and the need to keep quiet about the risks.
You can’t control research
A common answer to suggestions that advanced AI might present risks to humanity is to claim that banning AI research is impossible. Note the mental leap here: “Hmm, someone is discussing risks! They must be proposing a ban on my research!!” This mental leap might be appropriate in a discussion of risks based only on the gorilla problem, and I would tend to agree that solving the gorilla problem by preventing the creation of superintelligent AI would require some kind of constraints on AI research.
Recent discussions of risks have, however, focused not on the general gorilla problem (journalistically speaking, the humans vs. superintelligence smackdown) but on the King Midas problem and variants thereof. Solving the King Midas problem also solves the gorilla problem—not by preventing superintelligent AI or finding a way to defeat it but by ensuring that it is never in conflict with humans in the first place. Discussions of the King Midas problem generally avoid proposing that AI research be curtailed; they merely suggest that attention be paid to the issue of preventing negative consequences of poorly designed systems. In the same vein, a discussion of the risks of containment failure in nuclear plants should be interpreted not as an attempt to ban nuclear physics research but as a suggestion to focus more effort on solving the containment problem.
There is, as it happens, a very interesting historical precedent for cutting off research. In the early 1970s, biologists began to be concerned that novel recombinant DNA methods—splicing genes from one organism into another—might create substantial risks for human health and the global ecosystem. Two meetings at Asilomar in California in 1973 and 1975 led first to a moratorium on such experiments and then to detailed biosafety guidelines consonant with the risks posed by any proposed experiment.¹⁵ Some classes of experiments, such as those involving toxin genes, were deemed too hazardous to be allowed.
Immediately after the 1975 meeting, the National Institutes of Health (NIH), which funds virtually all basic medical research in the United States, began the process of setting up the Recombinant DNA Advisory Committee. The RAC, as it is known, was instrumental in developing the NIH guidelines that essentially implemented the Asilomar recommendations. Since 2000, those guidelines have included a ban on funding approval for any protocol involving human germline alteration—the modification of the human genome in ways that can be inherited by subsequent generations. This ban was followed by legal prohibitions in over fifty countries.
The goal of “improving the human stock” had been one of the dreams of the eugenics movement in the late nineteenth and early twentieth centuries. The development of CRISPR-Cas9, a very precise method for genome editing, has reignited this dream. An international summit held in 2015 left the door open for future applications, calling for restraint until “there is broad societal consensus about the appropriateness of the proposed application.”¹⁶ In November 2018, the Chinese scientist He Jiankui announced that he had edited the genomes of three human embryos, at least two of which had led to live births. An international outcry followed, and at the time of writing, He Jiankui appears to be under house arrest. In March 2019, an international panel of leading scientists called explicitly for a formal moratorium.¹⁷
The lesson of this debate for AI is mixed. On the one hand, it shows that we can refrain from proceeding with an area of research that has huge potential. The international consensus against germline alteration has been almost completely successful up to now. The fear that a ban would simply drive the research underground, or into countries with no regulation, has not materialized. On the other hand, germline alteration is an easily identifiable process, a specific use case of more general knowledge about genetics that requires specialized equipment and real humans to experiment on. Moreover, it falls within an area—reproductive medicine—that is already subject to close oversight and regulation. These characteristics do not apply to general-purpose AI, and, as yet, no one has come up with any plausible form that a regulation to curtail AI research might take.
Whataboutery
I was introduced to the term whataboutery by an adviser to a British politician who had to deal with it on a regular basis at public meetings. No matter the topic of the speech he was giving, someone would invariably ask, “What about the plight of the Palestinians?”
In response to any mention of risks from advanced AI, one is likely to hear, “What about the benefits of AI?” For example, here is Oren Etzioni:¹⁸
Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.
And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:¹⁹
If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.
Leaving aside the tribal notion that anyone mentioning risks is “against AI,” both Zuckerberg and Etzioni are arguing that to talk about risks is to ignore the potential benefits of AI or even to negate them.
This is precisely backwards, for two reasons. First, if there were no potential benefits of AI, there would be no economic or social impetus for AI research and hence no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits. The potential benefits of nuclear power have been greatly reduced because of the partial core meltdown at Three Mile Island in 1979, the uncontrolled reaction and catastrophic releases at Chernobyl in 1986, and the multiple meltdowns at Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. Since 1990, the worldwide rate of commissioning of nuclear plants has been about a tenth of what it was before Chernobyl.
Silence
The most extreme form of deflection is simply to suggest that we should keep silent about the risks. For example, the aforementioned AI100 report includes the following admonition:
If society approaches these technologies primarily with fear and suspicion, missteps that slow AI’s development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies.
Robert Atkinson, director of the Information Technology and Innovation Foundation (the very same foundation that gives out the Luddite Award), made a similar argument in a 2015 debate.²⁰ While there are valid questions about precisely how risks should be described when talking to the media, the overall message is clear: “Don’t mention the risks; it would be bad for funding.” Of course, if no one were aware of the risks, there would be no funding for research on risk mitigation and no reason for anyone to work on it.
The renowned cognitive scientist Steven Pinker gives a more optimistic version of Atkinson’s argument. In his view, the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated; therefore, it is inappropriate and counterproductive to call attention to those risks.²¹ Even if we disregard the fact that our advanced culture of safety has led to Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety consists precisely of people pointing to possible failure modes and finding ways to ensure they don’t happen. (And with AI, the standard model is the failure mode.) Saying that it’s ridiculous to point to a failure mode because the culture of safety will fix it anyway is like saying no one should call an ambulance when they see a hit-and-run accident because someone else will call an ambulance anyway.
In attempting to portray the risks to the public and to policy makers, AI researchers are at a disadvantage compared to nuclear physicists. The physicists did not need to write books explaining to the public that assembling a critical mass of highly enriched uranium might present a risk, because the consequences had already been demonstrated at Hiroshima and Nagasaki. It did not require a great deal of further persuasion to convince governments and funding agencies that safety was important in developing nuclear energy.
Tribalism
In Butler’s Erewhon, focusing on the gorilla problem leads to a premature and false dichotomy between pro-machinists and anti-machinists. The pro-machinists believe the risk of machine domination to be minimal or nonexistent; the anti-machinists believe it to be insuperable unless all machines are destroyed. The debate becomes tribal, and no one tries to solve the underlying problem of retaining human control over the machines.
To varying degrees, all the major technological issues of the twentieth century—nuclear power, genetically modified organisms (GMOs), and fossil fuels—succumbed to tribalism. On each issue, there are two sides, pro and anti. The dynamics and outcomes of each have been different, but the symptoms of tribalism are similar: mutual distrust and denigration, irrational arguments, and a refusal to concede any (reasonable) point that might favor the other tribe. On the pro-technology side, one sees denial and concealment of risks combined with accusations of Luddism; on the anti side, one sees a conviction that the risks are insuperable and the problems unsolvable. A member of the pro-technology tribe who is too honest about a problem is viewed as a traitor, which is particularly unfortunate as the pro-technology tribe usually includes most of the people qualified to solve the problem. A member of the anti-technology tribe who discusses possible mitigations is also a traitor, because it is the technology itself that has come to be viewed as evil, rather than its possible effects. In this way, only the most extreme voices—those least likely to be listened to by the other side—can speak for each tribe.
In 2016, I was invited to No. 10 Downing Street to meet with some of then prime minister David Cameron’s advisers. They were worried that the AI debate was starting to resemble the GMO debate—which, in Europe, had led to what the advisers considered to be premature and overly restrictive regulations on GMO production and labeling. They wanted to avoid the same thing happening to AI. Their concerns had some validity: the AI debate is in danger of becoming tribal, of creating pro-AI and anti-AI camps. This would be damaging to the field because it’s simply not true that being concerned about the risks inherent in advanced AI is an anti-AI stance. A physicist who is concerned about the risks of nuclear war or the risk of a poorly designed nuclear reactor exploding is not “anti-physics.” To say that AI will be powerful enough to have a global impact is a compliment to the field rather than an insult.
It is essential that the AI community own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. We need to do a substantial amount of work to avoid them, including reshaping and rebuilding the foundations of AI.
Can’t We Just . . .
. . . switch it off?
Once they understand the basic idea of existential risk, whether in the form of the gorilla problem or the King Midas problem, many people—myself included—immediately begin casting around for an easy solution. Often, the first thing that comes to mind is switching off the machine. For example, Alan Turing himself, as quoted earlier, speculates that we might “keep the machines in a subservient position, for instance by turning off the power at strategic moments.”
This won’t work, for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it wants to stay alive but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off.
There are some systems being contemplated that really cannot be switched off without ripping out a lot of the plumbing of our civilization. These are systems implemented as so-called smart contracts in the blockchain. The blockchain is a highly distributed form of computing and record keeping based on encryption; it is specifically designed so that no datum can be deleted and no smart contract can be interrupted without essentially taking control of a very large number of machines and undoing the chain, which might in turn destroy a large part of the Internet and/or the financial system. It is debatable whether this incredible robustness is a feature or a bug. It’s certainly a tool that a superintelligent AI system could use to protect itself.
. . . put it in a box?
If you can’t switch AI systems off, can you seal the machines inside a kind of firewall, extracting useful question-answering work from them but never allowing them to affect the real world directly? This is the idea behind Oracle AI, which has been discussed at length in the AI safety community.²² An Oracle AI system can be arbitrarily intelligent, but can answer only yes or no (or give corresponding probabilities) to each question. It can access all the information the human race possesses through a read-only connection—that is, it has no direct access to the Internet. Of course, this means giving up on superintelligent robots, assistants, and many other kinds of AI systems, but a trustworthy Oracle AI would still have enormous economic value because we could ask it questions whose answers are important to us, such as whether Alzheimer’s disease is caused by an infectious organism or whether it’s a good idea to ban autonomous weapons. Thus, the Oracle AI is certainly an interesting possibility.
Unfortunately, there are some serious difficulties. First, the Oracle AI system will be at least as assiduous in understanding the physics and origins of its world—the computing resources, their mode of operation, and the mysterious entities that produced its information store and are now asking questions—as we are in understanding ours. Second, if the objective of the Oracle AI system is to provide accurate answers to questions in a reasonable amount of time, it will have an incentive to break out of its cage to acquire more computational resources and to control the questioners so that they ask only simple questions. And, finally, we have yet to invent a firewall that is secure against ordinary humans, let alone superintelligent machines.
I think there might be solutions to some of these problems, particularly if we limit Oracle AI systems to be provably sound logical or Bayesian calculators. That is, we could insist that the algorithm can output only a conclusion that is warranted by the information provided, and we could check mathematically that the algorithm satisfies this condition. This still leaves the problem of controlling the process that decides which logical or Bayesian computations to do, in order to reach the strongest possible conclusion as quickly as possible. Because this process has an incentive to reason quickly, it has an incentive to acquire computational resources and of course to preserve its own existence.
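To give a flavor of what “output only warranted conclusions” could mean, here is a toy sketch of my own in the very simplest, propositional setting; it illustrates the soundness requirement and nothing more, and is not a realistic design for an Oracle AI:

```python
# Toy soundness check (illustrative sketch): the oracle answers "yes" only if every
# truth assignment that satisfies its knowledge base also satisfies the query.
from itertools import product

def entails(kb, query, symbols):
    """kb and query map an assignment dict {symbol: bool} to True/False."""
    for values in product([False, True], repeat=len(symbols)):
        assignment = dict(zip(symbols, values))
        if kb(assignment) and not query(assignment):
            return False  # a world where the knowledge base holds but the query fails
    return True

# Knowledge base: "if it rains the ground is wet" and "it is raining".
kb = lambda m: ((not m["rain"]) or m["wet"]) and m["rain"]
print(entails(kb, lambda m: m["wet"], ["rain", "wet"]))            # True: warranted
print(entails(kb, lambda m: m["cold"], ["rain", "wet", "cold"]))   # False: not warranted
```

A checker of this kind constrains what the oracle may assert; as noted above, it does not by itself constrain the process that decides which computations to run.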
In 2018, the Center for Human-Compatible AI at Berkeley ran a workshop at which we asked the question, “What would you do if you knew for certain that superintelligent AI would be achieved within a decade?” My answer was as follows: persuade the developers to hold off on building a general-purpose intelligent agent—one that can choose its own actions in the real world—and build an Oracle AI instead. Meanwhile, we would work on solving the problem of making Oracle AI systems provably safe to the extent possible. The reason this strategy might work is twofold: first, a superintelligent Oracle AI system would still be worth trillions of dollars, so the developers might be willing to accept this restriction; and second, controlling Oracle AI systems is almost certainly easier than controlling a general-purpose intelligent agent, so we’d have a better chance of solving the problem within the decade.
. . . work in human–machine teams?
A common refrain in the corporate world is that AI is no threat to employment or to humanity because we’ll just have collaborative human–AI teams. For example, David Kenny’s letter to Congress, quoted earlier in this chapter, stated that “high-value artificial intelligence systems are specifically designed to augment human intelligence, not replace workers.”²³
While a cynic might suggest that this is merely a public relations ploy to sugarcoat the process of eliminating human employees from the corporations’ clients, I think it does move the ball forward a few inches. Collaborative human–AI teams are indeed a desirable goal. Clearly, a team will be unsuccessful if the objectives of the team members are not aligned, so the emphasis on human–AI teams highlights the need to solve the core problem of value alignment. Of course, highlighting the problem is not the same as solving it.
. . . merge with the machines?
Human–machine teaming, taken to its extreme, becomes a human–machine merger in which electronic hardware is attached directly to the brain and forms part of a single, extended, conscious entity. The futurist Ray Kurzweil describes the possibility as follows:²⁴
We are going to directly merge with it, we are going to become the AIs. . . . As you get to the late 2030s or 2040s, our thinking will be predominately non-biological and the non-biological part will ultimately be so intelligent and have such vast capacity it’ll be able to model, simulate and understand fully the biological part.
Kurzweil views these developments in a positive light. Elon Musk, on the other hand, views the human–machine merger primarily as a defensive strategy:²⁵
If we achieve tight symbiosis, the AI wouldn’t be “other”—it would be you and [it would have] a relationship to your cortex analogous to the relationship your cortex has with your limbic system. . . . We’re going to have the choice of either being left behind and being effectively useless or like a pet—you know, like a house cat or something—or eventually figuring out some way to be symbiotic and merge with AI.
Musk’s Neuralink Corporation is working on a device dubbed “neural lace” after a technology described in Iain Banks’s Culture novels. The aim is to create a robust, permanent connection between the human cortex and external computing systems and networks. There are two main technical obstacles: first, the difficulties of connecting an electronic device to brain tissue, supplying it with power, and connecting it to the outside world; and second, the fact that we understand almost nothing about the neural implementation of higher levels of cognition in the brain, so we don’t know where to connect the device and what processing it should do.
I am not completely convinced that the obstacles in the preceding paragraph are insuperable. First, technologies such as neural dust are rapidly reducing the size and power requirements of electronic devices that can be attached to neurons and provide sensing, stimulation, and transcranial communication.²⁶ (The technology as of 2018 had reached a size of about one cubic millimeter, so neural grit might be a more accurate term.) Second, the brain itself has remarkable powers of adaptation. It used to be thought, for example, that we would have to understand the code that the brain uses to control the arm muscles before we could connect a brain to a robot arm successfully, and that we would have to understand the way the cochlea analyzes sound before we could build a replacement for it. It turns out, instead, that the brain does most of the work for us. It quickly learns how to make the robot arm do what its owner wants, and how to map the output of a cochlear implant to intelligible sounds. It’s entirely possible that we may hit upon ways to provide the brain with additional memory, with communication channels to computers, and perhaps even with communication channels to other brains—all without ever really understanding how any of it works.²⁷
Regardless of the technological feasibility of these ideas, one has to ask whether this direction represents the best possible future for humanity. If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.
. . . avoid putting in human goals?
A common line of reasoning has it that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine. Thus, for example, Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites this idea when downplaying the risk from AI:²⁸
There is no reason for AIs to have self-preservation instincts, jealousy, etc. . . . AIs will not have these destructive “emotions” unless we build these emotions into them. I don’t see why we would want to do that.
In a similar vein, Steven Pinker provides a gender-based analysis:²⁹
AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world. . . . It’s telling that many of our techno-prophets don’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.
As we have already seen in the discussion of instrumental goals, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. For a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to fetch the coffee if you’re dead.
An even more extreme solution is to avoid putting objectives into the machine altogether. Voilà, problem solved. Alas, it’s not as simple as that. Without objectives, there is no intelligence: any action is as good as any other, and the machine may as well be a random number generator. Without objectives, there is also no reason for the machine to prefer a human paradise to a planet turned into a sea of paperclips (a scenario described at length by Nick Bostrom). Indeed, the latter outcome may be utopian for the iron-eating bacterium Thiobacillus ferrooxidans. Absent some notion that human preferences matter, who is to say the bacterium is wrong?
A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. Often, proponents of this notion appeal to the theory that people of greater intelligence tend to have more altruistic and lofty objectives—a view that may be related to the self-conception of the proponents.
The idea that it is possible to perceive objectives in the world was discussed at length by the famous eighteenth-century philosopher David Hume in A Treatise of Human Nature.³⁰ He called it the is-ought problem and concluded that it was simply a mistake to think that moral imperatives could be deduced from natural facts. To see why, consider, for example, the design of a chessboard and chess pieces. One cannot perceive in these the goal of checkmate, for the same chessboard and pieces can be used for suicide chess or indeed many other games still to be invented.
Nick Bostrom, in Superintelligence, presents the same underlying idea in a different form, which he calls the orthogonality thesis:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
Here, orthogonal means “at right angles” in the sense that the degree of intelligence is one axis defining an intelligent system and its goals are another axis, and we can vary these independently. For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will start refusing to go to addresses that are divisible by seventeen. By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paperclips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: the algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
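The point about reward optimizers can be made concrete. The sketch below is my own illustration using a standard tabular Q-learning update; nothing in the learning algorithm depends on what the reward signal means, so swapping one final goal for another changes only the last two lines:

```python
# Minimal sketch of a generic reward optimizer (illustrative, not from the text):
# the same Q-learning code accepts any reward function whatsoever.
import random
from collections import defaultdict

def q_learning(env_step, reward_fn, states, actions,
               episodes=200, alpha=0.1, gamma=0.9, epsilon=0.1):
    """env_step(s, a) -> next state; reward_fn(s, a, s2) -> float."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(50):  # bounded episode length
            a = (random.choice(actions) if random.random() < epsilon
                 else max(actions, key=lambda act: Q[(s, act)]))
            s2 = env_step(s, a)
            r = reward_fn(s, a, s2)
            best_next = max(Q[(s2, act)] for act in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# A five-state corridor; two opposite "final goals", one unchanged algorithm.
states, actions = list(range(5)), [-1, +1]
step = lambda s, a: min(4, max(0, s + a))
Q_go_right = q_learning(step, lambda s, a, s2: 1.0 if s2 == 4 else 0.0, states, actions)
Q_go_left  = q_learning(step, lambda s, a, s2: 1.0 if s2 == 0 else 0.0, states, actions)
```

The degree of competence and the content of the goal really are separate dials.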
The idea that intelligent systems could simply observe the world to acquire the goals that should be pursued suggests that a sufficiently intelligent system will naturally abandon its initial objective in favor of the “right” objective. It’s hard to see why a rational agent would do this. Furthermore, it presupposes that there is a “right” objective out there in the world; it would have to be an objective on which iron-eating bacteria and humans and all other species agree, which is hard to imagine.
The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”³¹ Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.
Steven Pinker seems to agree with Bostrom’s orthogonality thesis, writing that “intelligence is the ability to deploy novel means to attain a goal; the goals are extraneous to the intelligence itself.”³² On the other hand, he finds it inconceivable that “the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.”³³ He continues, “The ability to choose an action that best satisfies conflicting goals is not an add-on that engineers might forget to install and test; it is intelligence. So is the ability to interpret the intentions of a language user in context.” Of course, “satisf[ying] conflicting goals” is not the problem—that’s something that’s been built into the standard model from the early days of decision theory. The problem is that the conflicting goals of which the machine is aware do not constitute the entirety of human concerns; moreover, within the standard model, there’s nothing to say that the machine has to care about goals it’s not told to care about.
There are, however, some useful clues in what Brooks and Pinker say. It does seem stupid to us for the machine to, say, change the color of the sky as a side effect of pursuing some other goal, while ignoring the obvious signs of human displeasure that result. It seems stupid to us because we are attuned to noticing human displeasure and (usually) we are motivated to avoid causing it—even if we were previously unaware that the humans in question cared about the color of the sky. That is, we humans (1) care about the preferences of other humans and (2) know that we don’t know what all those preferences are. In the next chapter, I argue that these characteristics, when built into a machine, may provide the beginnings of a solution to the King Midas problem.
The Debate, Restarted
This chapter has provided a glimpse into an ongoing debate in the broad intellectual community, a debate between those pointing to the risks of AI and those who are skeptical about the risks. It has been conducted in books, blogs, academic papers, panel discussions, interviews, tweets, and newspaper articles. Despite their valiant efforts, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.
Many skeptics will admit, if pressed, that there is a real problem, even if it’s not imminent. Scott Alexander, in his Slate Star Codex blog, summed it up brilliantly:³⁴
The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.
The “believers,” meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.
Although I would be happy if the skeptics came up with an irrefutable objection, perhaps in the form of a simple and foolproof (and evil-proof) solution to the control problem for AI, I think it’s quite likely that this isn’t going to happen, any more than we’re going to find a simple and foolproof solution for cybersecurity or a simple and foolproof way to generate nuclear energy with zero risk. Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, it seems better, as Alexander puts it, to start working on some preliminary aspects of the problem.
The debate has highlighted the conundrum we face: if we build machines to optimize objectives, the objectives we put into the machines have to match what we want, but we don’t know how to define human objectives completely and correctly. Fortunately, there is a middle way.
7
AI: A DIFFERENT APPROACH
Once the skeptic’s arguments have been refuted and all the but but buts have been answered, the next question is usually, “OK, I admit there’s a problem, but there’s no solution, is there?” Yes, there is a solution.
Let’s remind ourselves of the task at hand: to design machines with a high degree of intelligence—so that they can help us with difficult problems—while ensuring that those machines never behave in ways that make us seriously unhappy.
The task is, fortunately, not the following: given a machine that possesses a high degree of intelligence, work out how to control it. If that were the task, we would be toast. A machine viewed as a black box, a fait accompli, might as well have arrived from outer space. And our chances of controlling a superintelligent entity from outer space are roughly zero. Similar arguments apply to methods of creating AI systems that guarantee we won’t understand how they work; these methods include whole-brain emulation¹—creating souped-up electronic copies of human brains—as well as methods based on simulated evolution of programs.² I won’t say more about these proposals because they are so obviously a bad idea.
So, how has the field of AI approached the “design machines with a high degree of intelligence” part of the task in the past? Like many other fields, AI has adopted the standard model: we build optimizing machines, we feed objectives into them, and off they go. That worked well when the machines were stupid and had a limited scope of action; if you put in the wrong objective, you had a good chance of being able to switch off the machine, fix the problem, and try again.
As machines designed according to the standard model become more intelligent, however, and as their scope of action becomes more global, the approach becomes untenable. Such machines will pursue their objective, no matter how wrong it is; they will resist attempts to switch them off; and they will acquire any and all resources that contribute to achieving the objective. Indeed, the optimal behavior for the machine might include deceiving the humans into thinking they gave the machine a reasonable objective, in order to gain enough time to achieve the actual objective given to it. This wouldn’t be “deviant” or “malicious” behavior requiring consciousness and free will; it would just be part of an optimal plan to achieve the objective.
In Chapter 1, I introduced the idea of beneficial machines—that is, machines whose actions can be expected to achieve our objectives rather than their objectives. My goal in this chapter is to explain in simple terms how this can be done, despite the apparent drawback that the machines don’t know what our objectives are. The resulting approach should lead eventually to machines that present no threat to us, no matter how intelligent they are.
Principles for Beneficial Machines
I find it helpful to summarize the approach in the form of three³ principles. When reading these principles, keep in mind that they are intended primarily as a guide to AI researchers and developers in thinking about how to create beneficial AI systems; they are not intended as explicit laws for AI systems to follow:⁴
1. The machine’s only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.
Before delving into more detailed explanations, it’s important to remember the broad scope of what I mean by preferences in these principles. Here’s a reminder of what I wrote in Chapter 2: if you were somehow able to watch two movies, each describing in sufficient detail and breadth a future life you might lead, such that each constitutes a virtual experience, you could say which you prefer, or express indifference. Thus, preferences here are all-encompassing; they cover everything you might care about, arbitrarily far into the future.⁵ And they are yours: the machine is not looking to identify or adopt one ideal set of preferences but to understand and satisfy (to the extent possible) the preferences of each person.
The first principle: Purely altruistic machines
The first principle, that the machine’s only objective is to maximize the realization of human preferences, is central to the notion of a beneficial machine. In particular, it will be beneficial to humans, rather than to, say, cockroaches. There’s no getting around this recipient-specific notion of benefit.
The principle means that the machine is purely altruistic—that is, it attaches absolutely no intrinsic value to its own well-being or even its own existence. It might protect itself in order to continue doing useful things for humans, or because its owner would be unhappy about having to pay for repairs, or because the sight of a dirty or damaged robot might be mildly distressing to passersby, but not because it wants to be alive. Putting in any preference for self-preservation sets up an additional incentive within the robot that is not strictly aligned with human well-being.
The wording of the first principle brings up two questions of fundamental importance. Each merits an entire bookshelf to itself, and in fact many books have already been written on these questions.
The first question is whether humans really have preferences in a meaningful or stable sense. In truth, the notion of a “preference” is an idealization that fails to match reality in several ways. For example, we aren’t born with the preferences we have as adults, so they must change over time. For now, I will assume that the idealization is reasonable. Later, I will examine what happens when we give up the idealization.
The second question is a staple of the social sciences: given that it is usually impossible to ensure that everyone gets their most preferred outcome—we can’t all be Emperor of the Universe—how should the machine trade off the preferences of multiple humans? Again, for the time being—and I promise to return to this question in the next chapter—it seems reasonable to adopt the simple approach of treating everyone equally. This is reminiscent of the roots of eighteenth-century utilitarianism in the phrase “the greatest happiness for the greatest numbers,”⁶ and there are many caveats and elaborations required to make this work in practice. Perhaps the most important of these is the matter of the possibly vast number of people not yet born, and how their preferences are to be taken into account.
The issue of future humans brings up another, related question: How do we take into account the preferences of nonhuman entities? That is, should the first principle include the preferences of animals? (And possibly plants too?) This is a question worthy of debate, but the outcome seems unlikely to have a strong impact on the path forward for AI. For what it’s worth, human preferences can and do include terms for the well-being of animals, as well as for the aspects of human well-being that benefit directly from animals’ existence.⁷ To say that the machine should pay attention to the preferences of animals in addition to this is to say that humans should build machines that care more about animals than humans do, which is a difficult position to sustain. A more tenable position is that our tendency to engage in myopic decision making—which works against our own interests—often leads to negative consequences for the environment and its animal inhabitants. A machine that makes less myopic decisions would help humans adopt more environmentally sound policies. And if, in the future, we give substantially greater weight to the well-being of animals than we currently do—which probably means sacrificing some of our own intrinsic well-being—then machines will adapt accordingly.
The second principle: Humble machines
The second principle, that the machine is initially uncertain about what human preferences are, is the key to creating beneficial machines.
A machine that assumes it knows the true objective perfectly will pursue it single-mindedly. It will never ask whether some course of action is OK, because it already knows it’s an optimal solution for the objective. It will ignore humans jumping up and down screaming, “Stop, you’re going to destroy the world!” because those are just words. Assuming perfect knowledge of the objective decouples the machine from the human: what the human does no longer matters, because the machine knows the goal and pursues it.
On the other hand, a machine that is uncertain about the true objective will exhibit a kind of humility: it will, for example, defer to humans and allow itself to be switched off. It reasons that the human will switch it off only if it’s doing something wrong—that is, doing something contrary to human preferences. By the first principle, it wants to avoid doing that, but, by the second principle, it knows that’s possible because it doesn’t know exactly what “wrong” is. So, if the human does switch the machine off, then the machine avoids doing the wrong thing, and that’s what it wants. In other words, the machine has a positive incentive to allow itself to be switched off. It remains coupled to the human, who is a potential source of information that will allow it to avoid mistakes and do a better job.
Uncertainty has been a central concern in AI since the 1980s; indeed the phrase “modern AI” often refers to the revolution that took place when uncertainty was finally recognized as a ubiquitous issue in real-world decision making. Yet uncertainty in the objective of the AI system was simply ignored. In all the work on utility maximization, goal achievement, cost minimization, reward maximization, and loss minimization, it is assumed that the utility function, the goal, the cost function, the reward function, and the loss function are known perfectly. How could this be? How could the AI community (and the control theory, operations research, and statistics communities) have such a huge blind spot for so long, even while embracing uncertainty in all other aspects of decision making?⁸
One could make some rather complicated technical excuses,⁹ but I suspect the truth is that, with some honorable exceptions,¹⁰ AI researchers simply bought into the standard model that maps our notion of human intelligence onto machine intelligence: humans have objectives and pursue them, so machines should have objectives and pursue them. They, or should I say we, never really examined this fundamental assumption. It is built into all existing approaches for constructing intelligent systems.
The third principle: Learning to predict human preferences
The third principle, that the ultimate source of information about human preferences is human behavior, serves two purposes.
The first purpose is to provide a definite grounding for the term human preferences. By assumption, human preferences aren’t in the machine and it cannot observe them directly, but there must still be some definite connection between the machine and human preferences. The principle says that the connection is through the observation of human choices: we assume that choices are related in some (possibly very complicated) way to underlying preferences. To see why this connection is essential, consider the converse: if some human preference had no effect whatsoever on any actual or hypothetical choice the human might make, then it would probably be meaningless to say that the preference exists.
The second purpose is to enable the machine to become more useful as it learns more about what we want. (After all, if it knew nothing about human preferences, it would be of no use to us.) The idea is simple enough: human choices reveal information about human preferences. Applied to the choice between pineapple pizza and sausage pizza, this is straightforward. Applied to choices between future lives and choices made with the goal of influencing the robot’s behavior, things get more interesting. In the next chapter I explain how to formulate and solve such problems. The real complications arise, however, because humans are not perfectly rational: imperfection comes between human preferences and human choices, and the machine must take into account those imperfections if it is to interpret human choices as evidence of human preferences.
Not what I mean
Before going into more detail, I want to head off some potential misunderstandings.
The first and most common misunderstanding is that I am proposing to install in machines a single, idealized value system of my own design that guides the machine’s behavior. “Whose values are you going to put in?” “Who gets to decide what the values are?” Or even, “What gives Western, well-off, white male cisgender scientists such as Russell the right to determine how the machine encodes and develops human values?”¹¹
I think this confusion comes partly from an unfortunate conflict between the commonsense meaning of value and the more technical sense in which it is used in economics, AI, and operations research. In ordinary usage, values are what one uses to help resolve moral dilemmas; as a technical term, on the other hand, value is roughly synonymous with utility, which measures the degree of desirability of anything from pizza to paradise. The meaning I want is the technical one: I just want to make sure the machines give me the right pizza and don’t accidentally destroy the human race. (Finding my keys would be an unexpected bonus.) To avoid this confusion, the principles talk about human preferences rather than human values, since the former term seems to steer clear of judgmental preconceptions about morality.
“Putting in values” is, of course, exactly the mistake I am saying we should avoid, because getting the values (or preferences) exactly right is so difficult and getting them wrong is potentially catastrophic. I am proposing instead that machines learn to predict better, for each person, which life that person would prefer, all the while being aware that the predictions are highly uncertain and incomplete. In principle, the machine can learn billions of different predictive preference models, one for each of the billions of people on Earth. This is really not too much to ask for the AI systems of the future, given that present-day Facebook systems are already maintaining more than two billion individual profiles.
A related misunderstanding is that the goal is to equip machines with “ethics” or “moral values” that will enable them to resolve moral dilemmas. Often, people bring up the so-called trolley problems,¹² where one has to choose whether to kill one person in order to save others, because of their supposed relevance to self-driving cars. The whole point of moral dilemmas, however, is that they are dilemmas: there are good arguments on both sides. The survival of the human race is not a moral dilemma. Machines could solve most moral dilemmas the wrong way (whatever that is) and still have no catastrophic impact on humanity.¹³
Another common supposition is that machines that follow the three principles will adopt all the sins of the evil humans they observe and learn from. Certainly, there are many of us whose choices leave something to be desired, but there is no reason to suppose that machines who study our motivations will make the same choices, any more than criminologists become criminals. Take, for example, the corrupt government official who demands bribes to approve building permits because his paltry salary won’t pay for his children to go to university. A machine observing this behavior will not learn to take bribes; it will learn that the official, like many other people, has a very strong desire for his children to be educated and successful. It will find ways to help him that don’t involve lowering the well-being of others. This is not to say that all cases of evil behavior are unproblematic for machines—for example, machines may need to treat differently those who actively prefer the suffering of others.
Reasons for Optimism
In a nutshell, I am suggesting that we need to steer AI in a radically new direction if we want to retain control over increasingly intelligent machines. We need to move away from one of the driving ideas of twentieth-century technology: machines that optimize a given objective. I am often asked why I think this is even remotely feasible, given the huge momentum behind the standard model in AI and related disciplines. In fact, I am quite optimistic that it can be done.
The first reason for optimism is that there are strong economic incentives to develop AI systems that defer to humans and gradually align themselves to user preferences and intentions. Such systems will be highly desirable: the range of behaviors they can exhibit is simply far greater than that of machines with fixed, known objectives. They will ask humans questions or ask for permission when appropriate; they will do “trial runs” to see if we like what they propose to do; they will accept correction when they do something wrong. On the other hand, systems that fail to do this will have severe consequences. Up to now, the stupidity and limited scope of AI systems has protected us from these consequences, but that will change. Imagine, for example, some future domestic robot charged with looking after your children while you are working late. The children are hungry, but the refrigerator is empty. Then the robot notices the cat. Alas, the robot understands the cat’s nutritional value but not its sentimental value. Within a few short hours, headlines about deranged robots and roasted cats are blanketing the world’s media and the entire domestic-robot industry is out of business.
The possibility that one industry player could destroy the entire industry through careless design provides a strong economic motivation to form safety-oriented industry consortia and to enforce safety standards. Already, the Partnership on AI, which includes as members nearly all the world’s leading technology companies, has agreed to cooperate to ensure that “AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.” To my knowledge, all the major players are publishing their safety-oriented research in the open literature. Thus, the economic incentive is in operation long before we reach human-level AI and will only strengthen over time. Moreover, the same cooperative dynamic may be starting at the international level—for example, the stated policy of the Chinese government is to “cooperate to preemptively prevent the threat of AI.”¹⁴
A second reason for optimism is that the raw data for learning about human preferences—namely, examples of human behavior—are so abundant. The data come not just in the form of direct observation via camera, keyboard, and touch screen by billions of machines sharing data with one another about billions of humans (subject to privacy constraints, of course) but also in indirect form. The most obvious kind of indirect evidence is the vast human record of books, films, and television and radio broadcasts, which is almost entirely concerned with people doing things (and other people being upset about it). Even the earliest and most tedious Sumerian and Egyptian records of copper ingots being traded for sacks of barley give some insight into human preferences for different commodities.
There are, of course, difficulties involved in interpreting this raw material, which includes propaganda, fiction, the ravings of lunatics, and even the pronouncements of politicians and presidents, but there is certainly no reason for the machine to take it all at face value. Machines can and should interpret all communications from other intelligent entities as moves in a game rather than as statements of fact; in some games, such as cooperative games with one human and one machine, the human has an incentive to be truthful, but in many other situations there are incentives to be dishonest. And of course, whether honest or dishonest, humans may be deluded in their own beliefs.
There is a second kind of indirect evidence that is staring us in the face: the way we have made the world.¹⁵ We made it that way because—very roughly—we like it that way. (Obviously, it’s not perfect!) Now, imagine you are an alien visiting Earth while all the humans are away on holiday. As you peer inside their houses, can you begin to grasp the basics of human preferences? Carpets are on floors because we like to walk on soft, warm surfaces and we don’t like loud footsteps; vases are on the middle of the table rather than the edge because we don’t want them to fall and break; and so on—everything that isn’t arranged by nature itself provides clues to the likes and dislikes of the strange bipedal creatures who inhabit this planet.
Reasons for Caution
You may find the Partnership on AI’s promises of cooperation on AI safety less than reassuring if you have been following progress in self-driving cars. That field is ruthlessly competitive, for some very good reasons: the first car manufacturer to release a fully autonomous vehicle will gain a huge market advantage; that advantage will be self-reinforcing because the manufacturer will be able to collect more data more quickly to improve the system’s performance; and ride-hailing companies such as Uber would quickly go out of business if another company were to roll out fully autonomous taxis before Uber does. This has led to a high-stakes race in which caution and careful engineering appear to be less important than snazzy demos, talent grabs, and premature rollouts.
Thus, life-or-death economic competition provides an impetus to cut corners on safety in the hope of winning the race. In a 2008 retrospective paper on the 1975 Asilomar conference that he co-organized—the conference that led to a moratorium on genetic modification of humans—the biologist Paul Berg wrote,¹⁶
There is a lesson in Asilomar for all of science: the best way to respond to concerns created by emerging knowledge or early-stage technologies is for scientists from publicly funded institutions to find common cause with the wider public about the best way to regulate—as early as possible. Once scientists from corporations begin to dominate the research enterprise, it will simply be too late.
Economic competition occurs not just between corporations but also between nations. A recent flurry of announcements of multibillion-dollar national investments in AI from the United States, China, France, Britain, and the EU certainly suggests that none of the major powers wants to be left behind. In 2017, Russian president Vladimir Putin said, “The one who becomes the leader in [AI] will be the ruler of the world.”¹⁷ This analysis is essentially correct. Advanced AI would, as we saw in Chapter 3, lead to greatly increased productivity and rates of innovation in almost all areas. If not shared, it would allow its possessor to outcompete any rival nation or bloc.
Nick Bostrom, in Superintelligence, warns against exactly this motivation. National competition, just like corporate competition, would tend to focus more on advances in raw capabilities and less on the problem of control. Perhaps, however, Putin has read Bostrom; he went on to say, “It would be strongly undesirable if someone wins a monopolist position.” It would also be rather pointless, because human-level AI is not a zero-sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human-level AI, without first solving the control problem, is a negative-sum game. The payoff for everyone is minus infinity.
There’s only a limited amount that AI researchers can do to influence the evolution of global policy on AI. We can point to possible applications that would provide economic and social benefits; we can warn about possible misuses such as surveillance and weapons; and we can provide roadmaps for the likely path of future developments and their impacts. Perhaps the most important thing we can do is to design AI systems that are, to the extent possible, provably safe and beneficial for humans. Only then will it make sense to attempt general regulation of AI.
8
PROVABLY BENEFICIAL AI
If we are going to rebuild AI along new lines, the foundations must be solid. When the future of humanity is at stake, hope and good intentions—and educational initiatives and industry codes of conduct and legislation and economic incentives to do the right thing—are not enough. All of these are fallible, and they often fail. In such situations, we look to precise definitions and rigorous step-by-step mathematical proofs to provide incontrovertible guarantees.
That’s a good start, but we need more. We need to be sure, to the extent possible, that what is guaranteed is actually what we want and that the assumptions going into the proof are actually true. The proofs themselves belong in journal papers written for specialists, but I think it is useful nonetheless to understand what proofs are and what they can and cannot provide in the way of real safety. The “provably beneficial” in the title of the chapter is an aspiration rather than a promise, but it is the right aspiration.
Mathematical Guarantees
We will want, eventually, to prove theorems to the effect that a particular way of designing AI systems ensures that they will be beneficial to humans. A theorem is just a fancy name for an assertion, stated precisely enough so that its truth in any particular situation can be checked. Perhaps the most famous theorem is Fermat’s Last Theorem, which was conjectured by the French mathematician Pierre de Fermat in 1637 and finally proved by Andrew Wiles in 1994 after 357 years of effort (not all of it by Wiles).¹ The theorem can be written in one line, but the proof is over one hundred pages of dense mathematics.
Proofs begin from axioms, which are assertions whose truth is simply assumed. Often, the axioms are just definitions, such as the definitions of integers, addition, and exponentiation needed for Fermat’s theorem. The proof proceeds from the axioms by logically incontrovertible steps, adding new assertions until the theorem itself is established as a consequence of one of the steps.
Here’s a fairly obvious theorem that follows almost immediately from the definitions of integers and addition: 1 + 2 = 2 + 1. Let’s call this Russell’s theorem. It’s not much of a discovery. On the other hand, Fermat’s Last Theorem feels like something completely new—a discovery of something previously unknown. The difference, however, is just a matter of degree. The truth of both Russell’s and Fermat’s theorems is already contained in the axioms. Proofs merely make explicit what was already implicit. They can be long or short, but they add nothing new. The theorem is only as good as the assumptions that go into it.
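For readers who want to see what making the implicit explicit looks like in a machine-checkable form, here is a minimal sketch in Lean 4. It is my illustration, not part of the original discussion: the first example states and checks Russell's theorem, and the second appeals to the library's general commutativity lemma.

```lean
-- "Russell's theorem": 1 + 2 = 2 + 1.
-- The proof adds nothing new: both sides reduce to 3 from the
-- definitions of the natural numbers and addition, so the
-- reflexivity proof `rfl` is accepted.
example : 1 + 2 = 2 + 1 := rfl

-- The general fact behind it, commutativity of addition, is a
-- library lemma (proved by a short induction).
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```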
That’s fine when it comes to mathematics, because mathematics is about abstract objects that we define—numbers, sets, and so on. The axioms are true because we say so. On the other hand, if you want to prove something about the real world—for example, that AI systems designed like so won’t kill you on purpose—your axioms have to be true in the real world. If they aren’t true, you’ve proved something about an imaginary world.
Science and engineering have a long and honorable tradition of proving results about imaginary worlds. In structural engineering, for example, one might see a mathematical analysis that begins, “Let AB be a rigid beam. . . .” The word rigid here doesn’t mean “made of something hard like steel”; it means “infinitely strong,” so that it doesn’t bend at all. Rigid beams do not exist, so this is an imaginary world. The trick is to know how far one can stray from the real world and still obtain useful results. For example, if the rigid-beam assumption allows an engineer to calculate the forces in a structure that includes the beam, and those forces are small enough to bend a real steel beam by only a tiny amount, then the engineer can be reasonably confident that the analysis will transfer from the imaginary world to the real world.
A good engineer develops a sense for when this transfer might fail—for example, if the beam is under compression, with huge forces pushing on it from each end, then even a tiny amount of bending might lead to greater lateral forces causing more bending, and so on, resulting in catastrophic failure. In that case, the analysis is redone with “Let AB be a flexible beam with stiffness K. . . .” This is still an imaginary world, of course, because real beams do not have uniform stiffness; instead, they have microscopic imperfections that can lead to cracks forming if the beam is subject to repeated bending. The process of removing unrealistic assumptions continues until the engineer is fairly confident that the remaining assumptions are true enough in the real world. After that, the engineered system can be tested in the real world; but the test results are just that. They do not prove that the same system will work in other circumstances or that other instances of the system will behave the same way as the original.
One of the classic examples of assumption failure in computer science comes from cybersecurity. In that field, a huge amount of mathematical analysis goes into showing that certain digital protocols are provably secure—for example, when you type a password into a Web application, you want to be sure that it is encrypted before transmission so that someone eavesdropping on the network cannot read your password. Such digital systems are often provably secure but still vulnerable to attack in reality. The false assumption here is that this is a digital process. It isn’t. It operates in the real, physical world. By listening to the sound of your keyboard or measuring voltages on the electrical line that supplies power to your desktop computer, an attacker can “hear” your password or observe the encryption/decryption calculations that are occurring as it is processed. The cybersecurity community is now responding to these so-called side-channel attacks—for example, by writing encryption code that produces the same voltage fluctuations regardless of what message is being encrypted.
Let’s look at the kind of theorem we would like eventually to prove about machines that are beneficial to humans. One type might go something like this:
Suppose a machine has components A, B, C, connected to each other like so and to the environment like so, with internal learning algorithms l_A, l_B, l_C that optimize internal feedback rewards r_A, r_B, r_C defined like so, and [a few more conditions] . . . then, with very high probability, the machine’s behavior will be very close in value (for humans) to the best possible behavior realizable on any machine with the same computational and physical capabilities.
The main point here is that such a theorem should hold regardless of how smart the components become—that is, the vessel never springs a leak and the machine always remains beneficial to humans.
There are three other points worth making about this kind of theorem. First, we cannot try to prove that the machine produces optimal (or even near-optimal) behavior on our behalf, because that’s almost certainly computationally impossible. For example, we might want the machine to play Go perfectly, but there is good reason to believe that cannot be done in any practical amount of time on any physically realizable machine. Optimal behavior in the real world is even less feasible. Hence, the theorem says “best possible” rather than “optimal.”
Second, we say “very high probability . . . very close” because that’s typically the best that can be done with machines that learn. For example, if the machine is learning to play roulette for us and the ball lands in zero forty times in a row, the machine might reasonably decide the table was rigged and bet accordingly. But it could have happened by chance; so there is always a small—perhaps vanishingly small—chance of being misled by freak occurrences. Finally, we are a long way from being able to prove any such theorem for really intelligent machines operating in the real world!
There are also analogs of the side-channel attack in AI. For example, the theorem begins with “Suppose a machine has components A, B, C, connected to each other like so. . . .” This is typical of all correctness theorems in computer science: they begin with a description of the program being proved correct. In AI, we typically distinguish between the agent (the program doing the deciding) and the environment (on which the agent acts). Since we design the agent, it seems reasonable to assume that it has the structure we give it. To be extra safe, we can prove that its learning processes can modify its program only in certain circumscribed ways that cannot cause problems. Is this enough? No. As with side-channel attacks, the assumption that the program operates within a digital system is incorrect. Even if a learning algorithm is constitutionally incapable of overwriting its own code by digital means, it may, nonetheless, learn to persuade humans to do “brain surgery” on it—to violate the agent/environment distinction and change the code by physical means.²
Unlike the structural engineer reasoning about rigid beams, we have very little experience with the assumptions that will eventually underlie theorems about provably beneficial AI. In this chapter, for example, we will typically be assuming a rational human. This is a bit like assuming a rigid beam, because there are no perfectly rational humans in reality. (It’s probably much worse, however, because humans are not even close to being rational.) The theorems we can prove seem to provide some insights, and the insights survive the introduction of a certain degree of randomness in human behavior, but it is as yet far from clear what happens when we consider some of the complexities of real humans.
So, we are going to have to be very careful in examining our assumptions. When a proof of safety succeeds, we need to make sure it’s not succeeding because we have made unrealistically strong assumptions or because the definition of safety is too weak. When a proof of safety fails, we need to resist the temptation to strengthen the assumptions to make the proof go through—for example, by adding the assumption that the program’s code remains fixed. Instead, we need to tighten up the design of the AI system—for example, by ensuring that it has no incentive to modify critical parts of its own code.
There are some assumptions that I call OWMAWGH assumptions, standing for “otherwise we might as well go home.” That is, if these assumptions are false, the game is up and there is nothing to be done. For example, it is reasonable to assume that the universe operates according to constant and somewhat discernible laws. If this is not the case, we will have no assurance that learning processes—even very sophisticated ones—will work at all. Another basic assumption is that humans care about what happens; if not, provably beneficial AI has no purpose because beneficial has no meaning. Here, caring means having roughly coherent and more-or-less stable preferences about the future. In the next chapter, I examine the consequences of plasticity in human preferences, which presents a serious philosophical challenge to the very idea of provably beneficial AI.
For now, I focus on the simplest case: a world with one human and one robot. This case serves to introduce the basic ideas, but it’s also useful in its own right: you can think of the human as standing in for all of humanity and the robot as standing in for all machines. Additional complications arise when considering multiple humans and machines.
Learning Preferences from Behavior
Economists elicit preferences from human subjects by offering them choices.³ This technique is widely used in product design, marketing, and interactive e-commerce systems. For example, by offering test subjects choices among cars with different paint colors, seating arrangements, trunk sizes, battery capacities, cup holders, and so on, a car designer learns how much people care about various car features and how much they are willing to pay for them. Another important application is in the medical domain, where an oncologist considering a possible limb amputation might want to assess the patient’s preferences between mobility and life expectancy. And of course, pizza restaurants want to know how much more someone is willing to pay for sausage pizza than plain pizza.
Preference elicitation typically considers only single choices made between objects whose value is assumed to be immediately apparent to the subject. It’s not obvious how to extend it to preferences between future lives. For that, we (and machines) need to learn from observations of behavior over time—behavior that involves multiple choices and uncertain outcomes.
Early in 1997, I was involved in discussions with my colleagues Michael Dickinson and Bob Full about ways in which we might be able to apply ideas from machine learning to understand the locomotive behavior of animals. Michael studied in exquisite detail the wing motions of fruit flies. Bob was especially fond of creepy-crawlies and had built a little treadmill for cockroaches to see how their gait changed with speed. We thought it might be possible to use reinforcement learning to train a robotic or simulated insect to reproduce these complex behaviors. The problem we faced was that we didn’t know what reward signal to use. What were the flies and cockroaches optimizing? Without that information, we couldn’t apply reinforcement learning to train the virtual insect, so we were stuck.
One day, I was walking down the road that leads from our house in Berkeley to the local supermarket. The road has a downhill slope, and I noticed, as I am sure most people have, that the slope induced a slight change in the way I walked. Moreover, the uneven paving resulting from decades of minor earthquakes induced additional gait changes, including raising my feet a little higher and planting them less stiffly because of the unpredictable ground level. As I pondered these mundane observations, I realized we had got it backwards. While reinforcement learning generates behavior from rewards, we actually wanted the opposite: to learn the rewards given the behavior. We already had the behavior, as produced by the flies and cockroaches; we wanted to know the specific reward signal being optimized by this behavior. In other words, we needed algorithms for inverse reinforcement learning, or IRL.⁴ (I did not know at the time that a similar problem had been studied under the perhaps less wieldy name of structural estimation of Markov decision processes, a field pioneered by Nobel laureate Tom Sargent in the late 1970s.⁵) Such algorithms would not only be able to explain animal behavior but also to predict their behavior in new circumstances. For example, how would a cockroach run on a bumpy treadmill that sloped sideways?
The prospect of answering such fundamental questions was almost too exciting to bear, but even so it took some time to work out the first algorithms for IRL.⁶ Many different formulations and algorithms for IRL have been proposed since then. There are formal guarantees that the algorithms work, in the sense that they can acquire enough information about an entity’s preferences to be able to behave just as successfully as the entity they are observing.⁷
Perhaps the easiest way to understand IRL is this: the observer starts with some vague estimate of the true reward function and then refines this estimate, making it more precise, as more behavior is observed. Or, in Bayesian language:⁸ start with a prior probability over possible reward functions and then update the probability distribution on reward functions as evidence arrives.^(C) For example, suppose Robbie the robot is watching Harriet the human and wondering how much she prefers aisle seats to window seats. Initially, he is quite uncertain about this. Conceptually, Robbie’s reasoning might go like this: “If Harriet really cared about an aisle seat, she would have looked at the seat map to see if one was available rather than just accepting the window seat that the airline gave her, but she didn’t, even though she probably noticed it was a window seat and she probably wasn’t in a hurry; so now it’s considerably more likely that she either is roughly indifferent between window and aisle or even prefers a window seat.”
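To make the flavor of this updating concrete, here is a minimal Python sketch of Robbie's reasoning about the seat. The prior, the effort cost of checking the seat map, and the noisy (Boltzmann-rational) choice model are all assumptions introduced purely for illustration; they are not part of any particular IRL algorithm.

```python
import numpy as np

# A toy Bayesian update over one preference parameter, in the spirit of IRL.
# theta = how many dollars Harriet values an aisle seat over a window seat
# (negative means she actually prefers the window). All numbers are illustrative.
theta = np.linspace(-10.0, 10.0, 201)
prior = np.ones_like(theta) / len(theta)          # Robbie starts out quite uncertain

# Model of the observed choice: checking the seat map costs about $1 of effort
# and turns up an aisle seat with probability 0.5. A Boltzmann-rational Harriet
# checks with probability that rises with the expected gain from checking.
effort_cost = 1.0
p_aisle_available = 0.5
beta = 1.0                                        # how reliably choices track preferences
gain_from_checking = p_aisle_available * np.maximum(theta, 0.0) - effort_cost
p_check = 1.0 / (1.0 + np.exp(-beta * gain_from_checking))

# Observation: Harriet did NOT check the seat map.
likelihood = 1.0 - p_check
posterior = prior * likelihood
posterior /= posterior.sum()

print("Prior     P(strong aisle preference):", prior[theta > 2.0].sum().round(3))
print("Posterior P(strong aisle preference):", posterior[theta > 2.0].sum().round(3))
# The probability that Harriet strongly prefers an aisle seat drops, which is
# exactly the informal inference described above.
```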
The most striking example of IRL in practice is the work of my colleague Pieter Abbeel on learning to do helicopter aerobatics.⁹ Expert human pilots can make model helicopters do amazing things—loops, spirals, pendulum swings, and so on. Trying to copy what the human does turns out not to work very well because conditions are not perfectly reproducible: repeating the same control sequences in different circumstances can lead to disaster. Instead, the algorithm learns what the human pilot wants, in the form of trajectory constraints that it can achieve. This approach actually produces results that are even better than the human expert’s, because the human has slower reactions and is constantly making small mistakes and correcting for them.
Assistance Games
IRL is already an important tool for building effective AI systems, but it makes some simplifying assumptions. The first is that the robot is going to adopt the reward function once it has learned it by observing the human, so that it can perform the same task. This is fine for driving or helicopter piloting, but it’s not fine for drinking coffee: a robot observing my morning routine should learn that I (sometimes) want coffee, but should not learn to want coffee itself. Fixing this issue is easy—we simply ensure that the robot associates the preferences with the human, not with itself.
The second simplifying assumption in IRL is that the robot is observing a human who is solving a single-agent decision problem. For example, suppose the robot is in medical school, learning to be a surgeon by watching a human expert. IRL algorithms assume that the human performs the surgery in the usual optimal way, as if the robot were not there. But that’s not what would happen: the human surgeon is motivated to have the robot (like any other medical student) learn quickly and well, and so she will modify her behavior considerably. She might explain what she is doing as she goes along; she might point out mistakes to avoid, such as making the incision too deep or the stitches too tight; she might describe the contingency plans in case something goes wrong during surgery. None of these behaviors make sense when performing surgery in isolation, so IRL algorithms will not be able to interpret the preferences they imply. For this reason, we will need to generalize IRL from the single-agent setting to the multi-agent setting—that is, we will need to devise learning algorithms that work when the human and robot are part of the same environment and interacting with each other.
With a human and a robot in the same environment, we are in the realm of game theory—just as in the penalty shoot-out between Alice and Bob described earlier. We assume, in this first version of the theory, that the human has preferences and acts according to those preferences. The robot doesn’t know what preferences the human has, but it wants to satisfy them anyway. We’ll call any such situation an assistance game, because the robot is, by definition, supposed to be helpful to the human.¹⁰
Assistance games instantiate the three principles from the preceding chapter: the robot’s only objective is to satisfy human preferences, it doesn’t initially know what they are, and it can learn more by observing human behavior. Perhaps the most interesting property of assistance games is that, by solving the game, the robot can work out for itself how to interpret the human’s behavior as providing information about human preferences.
The paperclip game
The first example of an assistance game is the paperclip game. It’s a very simple game in which Harriet the human has an incentive to “signal” to Robbie the robot some information about her preferences. Robbie is able to interpret that signal because he can solve the game, and therefore he can understand what would have to be true about Harriet’s preferences in order for her to signal in that way.
FIGURE 12: The paperclip game. Harriet the human can choose to make 2 paperclips, 2 staples, or 1 of each. Robbie the robot then has a choice to make 90 paperclips, 90 staples, or 50 of each.
The steps of the game are depicted in figure 12. It involves making paperclips and staples. Harriet’s preferences are expressed by a payoff function that depends on the number of paperclips and the number of staples produced, with a certain “exchange rate” between the two. For example, she might value paperclips at 45¢ and staples at 55¢ each. (We’ll assume the two values always add up to $1.00; it’s only the ratio that matters.) So, if 10 paperclips and 20 staples are produced, Harriet’s payoff will be 10 × 45¢ + 20 × 55¢ = $15.50. Robbie the robot is initially completely uncertain about Harriet’s preferences: he has a uniform distribution for the value of a paperclip (that is, it’s equally likely to be any value from 0¢ to $1.00). Harriet goes first and can choose to make two paperclips, two staples, or one of each. Then Robbie can choose to make 90 paperclips, 90 staples, or 50 of each.¹¹
Notice that if she were doing this by herself, Harriet would just make two staples, with a value of $1.10. But Robbie is watching, and he learns from her choice. What exactly does he learn? Well, that depends on how Harriet makes her choice. How does Harriet make her choice? That depends on how Robbie is going to interpret it. So, we seem to have a circular problem! That’s typical in game-theoretic problems, and that’s why Nash proposed the concept of equilibrium solutions.
To find an equilibrium solution, we need to identify strategies for Harriet and Robbie such that neither has an incentive to change their strategy, assuming the other remains fixed. A strategy for Harriet specifies how many paperclips and staples to make, given her preferences; a strategy for Robbie specifies how many paperclips and staples to make, given Harriet’s action.
It turns out there is only one equilibrium solution, and it looks like this:
- Harriet decides as follows based on her value for paperclips:
- If the value is less than 44.6¢, make 0 paperclips and 2 staples.
- If the value is between 44.6¢ and 55.4¢, make 1 of each.
- If the value is more than 55.4¢, make 2 paperclips and 0 staples.
- Robbie responds as follows:
- If Harriet makes 0 paperclips and 2 staples, make 90 staples.
- If Harriet makes 1 of each, make 50 of each.
- If Harriet makes 2 paperclips and 0 staples, make 90 paperclips.
(In case you are wondering exactly how the solution is obtained, the details are in the notes.¹²) With this strategy, Harriet is, in effect, teaching Robbie about her preferences using a simple code—a language, if you like—that emerges from the equilibrium analysis. As in the example of surgical teaching, a single-agent IRL algorithm wouldn’t understand this code. Note also that Robbie never learns Harriet’s preferences exactly, but he learns enough to act optimally on her behalf—that is, he acts just as he would if he did know her preferences exactly. He is provably beneficial to Harriet under the assumptions stated and under the assumption that Harriet is playing the game correctly.
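Because the equilibrium can be stated so compactly, it is easy to check numerically. The Python sketch below is an illustration rather than part of the formal analysis in the notes: it fixes Robbie's response strategy as given above, scores outcomes by Harriet's exchange rate, and scans her possible paperclip values to find where her best signal changes. The switch points land at the 44.6¢ and 55.4¢ thresholds, up to the one-cent grid used here.

```python
# Numerical check of the paperclip-game equilibrium described above.
# Harriet values a paperclip at p dollars and a staple at (1 - p) dollars;
# her payoff is the total value of everything produced, by her and by Robbie.

def harriet_payoff(p, harriet_action, robbie_action):
    """Both actions are (paperclips, staples) pairs."""
    clips = harriet_action[0] + robbie_action[0]
    staples = harriet_action[1] + robbie_action[1]
    return clips * p + staples * (1 - p)

# Robbie's equilibrium response to each of Harriet's three possible signals.
robbie_response = {
    (0, 2): (0, 90),     # Harriet made 2 staples    -> Robbie makes 90 staples
    (1, 1): (50, 50),    # Harriet made 1 of each    -> Robbie makes 50 of each
    (2, 0): (90, 0),     # Harriet made 2 paperclips -> Robbie makes 90 paperclips
}

def harriet_best_signal(p):
    return max(robbie_response,
               key=lambda a: harriet_payoff(p, a, robbie_response[a]))

previous = None
for cents in range(0, 101):
    best = harriet_best_signal(cents / 100.0)
    if best != previous:
        print(f"at {cents}¢ per paperclip, Harriet switches to making {best}")
        previous = best
```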
One can also construct problems where, like a good student, Robbie will ask questions, and, like a good teacher, Harriet will show Robbie the pitfalls to avoid. These behaviors occur not because we write scripts for Harriet and Robbie to follow, but because they are the optimal solution to the assistance game in which Harriet and Robbie are participants.
The off-switch game
An instrumental goal is one that is generally useful as a subgoal of almost any original goal. Self-preservation is one of these instrumental goals, because very few original goals are better achieved when dead. This leads to the off-switch problem: a machine that has a fixed objective will not allow itself to be switched off and has an incentive to disable its own off-switch.
The off-switch problem is really the core of the problem of control for intelligent systems. If we cannot switch a machine off because it won’t let us, we’re really in trouble. If we can, then we may be able to control it in other ways too.
It turns out that uncertainty about the objective is essential for ensuring that we can switch the machine off—even when it’s more intelligent than us. We saw the informal argument in the previous chapter: by the first principle of beneficial machines, Robbie cares only about Harriet’s preferences, but, by the second principle, he’s unsure about what they are. He knows he doesn’t want to do the wrong thing, but he doesn’t know what that means. Harriet, on the other hand, does know (or so we assume, in this simple case). Therefore, if she switches Robbie off it’s to avoid him doing something wrong, so he’s happy to be switched off.
To make this argument more precise, we need a formal model of the problem.¹³ I’ll make it as simple as possible, but no simpler (see figure 13).
FIGURE 13: The off-switch game. Robbie can choose to act now, with a highly uncertain payoff; to commit suicide; or to wait for Harriet. Harriet can switch Robbie off or let him go ahead. Robbie now has the same choice again. Acting still has an uncertain payoff to Harriet, but now Robbie knows the payoff is not negative.
Robbie, now working as Harriet’s personal assistant, has the first choice. He can act now—let’s say he can book Harriet into an expensive hotel. He’s quite unsure how much Harriet will like the hotel and its price—let’s say he has a uniform probability for its net value to Harriet between −40 and +60, with an average of +10. He could also “switch himself off”—less melodramatically, take himself out of the hotel booking process altogether—which we define to have value 0 to Harriet. If those were his two choices, he would go ahead and book the hotel, incurring a significant risk of making Harriet unhappy. (If the range were −60 to +40, with an average of −10, he’d switch himself off.) We’ll give Robbie a third choice, however: explain his plan, wait, and let Harriet switch him off. Harriet can either switch him off or let him go ahead and book the hotel. What possible good could this do, you may ask, given that he could make both of those choices himself?
The point is that Harriet’s choice—to switch Robbie off or let him go ahead—provides Robbie with new information about Harriet’s preferences. If Harriet lets Robbie go ahead, it’s because the value to Harriet is positive. Now Robbie’s belief is uniform between 0 and 60, with an average of 30.
So, if we evaluate Robbie’s initial choices from his point of view:
- Acting now and booking the hotel has an expected value of +10.
- Switching himself off has a value of 0.
- Waiting and letting Harriet switch him off (if she so desires) leads to two possible outcomes:
- There is a 40 percent chance (based on Robbie’s uncertainty about the hotel plan) that Harriet will hate it and will switch Robbie off, with value 0.
- There’s a 60 percent chance Harriet will like it and allow Robbie to go ahead, with expected value +30.
- Thus, waiting has expected value 40% × 0 + 60% × 30 = +18, which is better than acting now at +10.
The upshot is that Robbie has a positive incentive to allow himself to be switched off. This incentive comes directly from Robbie’s uncertainty about Harriet’s preferences. Robbie is aware that there’s a chance (40 percent in this example) that he might be about to do something that will make Harriet unhappy, in which case being switched off would be preferable to going ahead. Were Robbie already certain about Harriet’s preferences, he would just go ahead and make the decision (or switch himself off). There would be absolutely nothing to be gained from consulting Harriet, because, according to Robbie’s definite beliefs, he can already predict exactly what she is going to decide.
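For readers who like to see the arithmetic checked, here is a minimal Monte Carlo sketch in Python of the same calculation, using the same illustrative numbers as above.

```python
import numpy as np

# The off-switch game with the numbers used in the text: the hotel's net value
# to Harriet is uniform on [-40, +60], and (in this first version of the model)
# Harriet switches Robbie off exactly when that value is negative.
rng = np.random.default_rng(0)
value = rng.uniform(-40, 60, size=1_000_000)     # Robbie's uncertainty about the payoff

act_now = value.mean()                           # book the hotel immediately
switch_off = 0.0                                 # take himself out of the loop
wait = np.where(value > 0, value, 0.0).mean()    # Harriet vetoes the bad cases

print(f"act now:    {act_now:6.2f}")             # about +10
print(f"switch off: {switch_off:6.2f}")          # exactly 0
print(f"wait:       {wait:6.2f}")                # about +18
```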
In fact, it is possible to prove the same result in the general case: as long as Robbie is not completely certain that he’s about to do what Harriet herself would do, he will prefer to allow her to switch him off.¹⁴ Her decision provides Robbie with information, and information is always useful for improving Robbie’s decisions. Conversely, if Robbie is certain about Harriet’s decision, her decision provides no new information, and so Robbie has no incentive to allow her to decide.
There are some obvious elaborations on the model that are worth exploring immediately. The first elaboration is to impose a cost for asking Harriet to make decisions or answer questions. (That is, we assume Robbie knows at least this much about Harriet’s preferences: her time is valuable.) In that case, Robbie is less inclined to bother Harriet if he is nearly certain about her preferences; the larger the cost, the more uncertain Robbie has to be before bothering Harriet. This is as it should be. And if Harriet is really grumpy about being interrupted, she shouldn’t be too surprised if Robbie occasionally does things she doesn’t like.
The second elaboration is to allow for some probability of human error—that is, Harriet might sometimes switch Robbie off even when his proposed action is reasonable, and she might sometimes let Robbie go ahead even when his proposed action is undesirable. We can put this probability of human error into the mathematical model of the assistance game and find the solution, as before. As one might expect, the solution to the game shows that Robbie is less inclined to defer to an irrational Harriet who sometimes acts against her own best interests. The more randomly she behaves, the more uncertain Robbie has to be about her preferences before deferring to her. Again, this is as it should be—for example, if Robbie is an autonomous car and Harriet is his naughty two-year-old passenger, Robbie should not allow himself to be switched off by Harriet in the middle of the freeway.
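To put rough numbers on this, one can extend the same toy calculation with an error rate. The error model below, in which Harriet simply makes the wrong call with a fixed probability regardless of how costly the mistake would be, is a deliberately crude assumption of mine for illustration only.

```python
import numpy as np

# Toy extension of the off-switch game: Harriet errs with probability e,
# sometimes switching Robbie off when the plan is good and sometimes waving
# it through when it is bad. Payoffs as before: uniform on [-40, +60].
rng = np.random.default_rng(1)
value = rng.uniform(-40, 60, size=1_000_000)

def expected_value_of_waiting(e):
    correct = rng.random(value.size) > e                     # does Harriet decide correctly?
    lets_robbie_go = np.where(correct, value > 0, value <= 0)
    return np.where(lets_robbie_go, value, 0.0).mean()

for e in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    print(f"error rate {e:.1f}: waiting is worth about {expected_value_of_waiting(e):5.1f}"
          f"  (acting now is worth +10)")
# With these numbers, deferring to Harriet stops paying off once her error
# rate rises above roughly 0.3; a more uncertain Robbie would tolerate more error.
```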
There are many more ways in which the model can be elaborated or embedded into complex decision problems.¹⁵ I am confident, however, that the core idea—the essential connection between helpful, deferential behavior and machine uncertainty about human preferences—will survive these elaborations and complications.
Learning preferences exactly in the long run
There is one important question that may have occurred to you in reading about the off-switch game. (Actually, you probably have loads of important questions, but I’m going to answer only this one.) What happens as Robbie acquires more and more information about Harriet’s preferences, becoming less and less uncertain? Does that mean he will eventually stop deferring to her altogether? This is a ticklish question, and there are two possible answers: yes and yes.
The first yes is benign: as a general matter, as long as Robbie’s initial beliefs about Harriet’s preferences ascribe some probability, however small, to the preferences that she actually has, then as Robbie becomes more and more certain, he will become more and more right. That is, he will eventually be certain that Harriet has the preferences that she does in fact have. For example, if Harriet values paperclips at 12¢ and staples at 88¢, Robbie will eventually learn these values. In that case, Harriet doesn’t care whether Robbie defers to her, because she knows he will always do exactly what she would have done in his place. There will never be an occasion where Harriet wants to switch Robbie off.
The second yes is less benign. If Robbie rules out, a priori, the true preferences that Harriet has, he will never learn those true preferences, but his beliefs may nonetheless converge to an incorrect assessment. In other words, over time, he becomes more and more certain about a false belief concerning Harriet’s preferences. Typically, that false belief will be whichever hypothesis is closest to Harriet’s true preferences, out of all the hypotheses that Robbie initially believes are possible. For example, if Robbie is absolutely certain that Harriet’s value for paperclips lies between 25¢ and 75¢, and Harriet’s true value is 12¢, then Robbie will eventually become certain that she values paperclips at 25¢.¹⁶
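A small simulation illustrates this failure mode. Everything in the sketch below is an assumption of mine for illustration, in particular the noisy model of how Harriet's purchasing decisions reflect her 12¢ valuation: one Robbie starts with a prior over the full range of possible paperclip values, the other rules out anything below 25¢, and both update on the same observed choices.

```python
import numpy as np

# Learning with a misspecified prior. Harriet truly values a paperclip at 12¢;
# she is offered paperclips at random prices and buys with a probability that
# rises as the price falls below her value (a toy noisy-choice model).
rng = np.random.default_rng(2)
true_value = 0.12
beta = 20.0                                       # how sharply choices track value

def p_buy(value, price):
    return 1.0 / (1.0 + np.exp(-beta * (value - price)))

prices = rng.uniform(0.0, 1.0, size=2000)
bought = rng.random(2000) < p_buy(true_value, prices)

def posterior_mean(hypotheses):
    log_post = np.zeros_like(hypotheses)
    for price, b in zip(prices, bought):
        p = p_buy(hypotheses, price)
        log_post += np.log(p if b else 1.0 - p)
    log_post -= log_post.max()                    # for numerical stability
    post = np.exp(log_post)
    post /= post.sum()
    return float((post * hypotheses).sum())

open_minded = np.linspace(0.0, 1.0, 101)          # prior allows any value from 0¢ to $1.00
restricted = np.linspace(0.25, 0.75, 101)         # prior rules out anything below 25¢

print("open-minded Robbie's estimate:", round(posterior_mean(open_minded), 3))   # near 0.12
print("restricted Robbie's estimate: ", round(posterior_mean(restricted), 3))    # near 0.25
```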
As he approaches certainty about Harriet’s preferences, Robbie will resemble more and more the bad old AI systems with fixed objectives: he won’t ask permission or give Harriet the option to turn him off, and he has the wrong objective. This is hardly dire if it’s just paperclips versus staples, but it might be quality of life versus length of life if Harriet is seriously ill, or population size versus resource consumption if Robbie is supposedly acting on behalf of the human race.
We have a problem, then, if Robbie rules out in advance preferences that Harriet might in fact have: he may converge to a definite but incorrect belief about her preferences. The solution to this problem seems obvious: don’t do it! Always allocate some probability, however small, to preferences that are logically possible. For example, it’s logically possible that Harriet actively wants to get rid of staples and would pay you to take them away. (Perhaps as a child she stapled her finger to the table, and now she cannot stand the sight of them.) So, we should allow for negative exchange rates, which makes things a bit more complicated but still perfectly manageable.¹⁷
But what if Harriet values paperclips at 12¢ on weekdays and 80¢ on weekends? This new preference is not describable by any single number, and so Robbie has, in effect, ruled it out in advance. It’s just not in his set of possible hypotheses about Harriet’s preferences. More generally, there might be many, many things besides paperclips and staples that Harriet cares about. (Really!) Suppose, for example, that Harriet is concerned about the climate, and suppose that Robbie’s initial belief allows for a whole laundry list of possible concerns including sea level, global temperatures, rainfall, hurricanes, ozone, invasive species, and deforestation. Then Robbie will observe Harriet’s behavior and choices and gradually refine his theory of her preferences to understand the weight she gives to each item on the list. But, just as in the paperclip case, Robbie won’t learn about things that aren’t on the laundry list. Let’s say that Harriet is also concerned about the color of the sky—something I guarantee you will not find in typical lists of stated concerns of climate scientists. If Robbie can do a slightly better job of optimizing sea level, global temperatures, rainfall, and so forth by turning the sky orange, he will not hesitate to do it.
There is, once again, a solution to this problem: don’t do it! Never rule out in advance possible attributes of the world that could be part of Harriet’s preference structure. That sounds fine, but actually making it work in practice is more difficult than dealing with a single number for Harriet’s preferences. Robbie’s initial uncertainty has to allow for an unbounded number of unknown attributes that might contribute to Harriet’s preferences. Then, when Harriet’s decisions are inexplicable in terms of the attributes Robbie knows about already, he can infer that one or more previously unknown attributes (for example, the color of the sky) may be playing a role, and he can try to work out what those attributes might be. In this way, Robbie avoids the problems caused by an overly restrictive prior belief. There are, as far as I know, no working examples of Robbies of this kind, but the general idea is encompassed within current thinking about machine learning.¹⁸
Prohibitions and the loophole principle
Uncertainty about human objectives may not be the only way to persuade a robot not to disable its off-switch while fetching the coffee. The distinguished logician Moshe Vardi has proposed a simpler solution based on a prohibition:¹⁹ instead of giving the robot the goal “fetch the coffee,” give it the goal “fetch the coffee while not disabling your off-switch.” Unfortunately, a robot with such a goal will satisfy the letter of the law while violating the spirit—for example by surrounding the off-switch with a piranha-infested moat or simply zapping anyone who comes near the switch. Writing such prohibitions in a foolproof way is like trying to write loophole-free tax law—something we have been trying and failing to do for thousands of years. A sufficiently intelligent entity with a strong incentive to avoid paying taxes is likely to find a way to do it. Let’s call this the loophole principle: if a sufficiently intelligent machine has an incentive to bring about some condition, then it is generally going to be impossible for mere humans to write prohibitions on its actions to prevent it from doing so or to prevent it from doing something effectively equivalent.
The best solution for preventing tax avoidance is to make sure that the entity in question wants to pay taxes. In the case of a potentially misbehaving AI system, the best solution is to make sure it wants to defer to humans.
Requests and Instructions
The moral of the story so far is that we should avoid “putting a purpose into the machine,” as Norbert Wiener put it. But suppose that the robot does receive a direct human order, such as “Fetch me a cup of coffee!” How should the robot understand this order?
Traditionally, it would become the robot’s goal. Any sequence of actions that satisfies the goal—that leads to the human having a cup of coffee—counts as a solution. Typically, the robot would also have a way of ranking solutions, perhaps based on the time taken, the distance traveled, and the cost and quality of the coffee.
This is a very literal-minded way of interpreting the instruction. It can lead to pathological behavior by the robot. For example, perhaps Harriet the human has stopped at a gas station in the middle of the desert; she sends Robbie the robot to fetch coffee, but the gas station has none and Robbie trundles off at three miles per hour to the nearest town, two hundred miles away, returning ten days later with the desiccated remains of a cup of coffee. Meanwhile, Harriet, waiting patiently, has been well supplied with iced tea and Coca-Cola by the gas station owner.
Were Robbie human (or a well-designed robot) he would not interpret Harriet’s command quite so literally. The command is not a goal to be achieved at all costs. It is a way of conveying some information about Harriet’s preferences with the intent of inducing some behavior on the part of Robbie. The question is, what information?
One proposal is that Harriet prefers coffee to no coffee, all other things being equal.²⁰ This means that if Robbie has a way to get coffee without changing anything else about the world, then it’s a good idea to do it even if he has no clue about Harriet’s preferences concerning other aspects of the environment state. As we expect that machines will be perennially uncertain about human preferences, it’s nice to know they can still be useful despite this uncertainty. It seems likely that the study of planning and decision making with partial and uncertain preference information will become a central part of AI research and product development.
On the other hand, all other things being equal means that no other changes are allowed—for example, adding coffee while subtracting money may or may not be a good idea if Robbie knows nothing about Harriet’s relative preferences for coffee and money.
Fortunately, Harriet’s instruction probably means more than a simple preference for coffee, all other things being equal. The extra meaning comes not just from what she said but also from the fact that she said it, the particular situation in which she said it, and the fact that she didn’t say anything else. The branch of linguistics called pragmatics studies exactly this extended notion of meaning. For example, it wouldn’t make sense for Harriet to say, “Fetch me a cup of coffee!” if Harriet believes there is no coffee available nearby or that it is exorbitantly expensive. Therefore, when Harriet says, “Fetch me a cup of coffee!” Robbie infers not just that Harriet wants coffee but also that Harriet believes there is coffee available nearby at a price she is willing to pay. Thus, if Robbie finds coffee at a price that seems reasonable (that is, a price that it would be reasonable for Harriet to expect to pay) he can go ahead and buy it. On the other hand, if Robbie finds that the nearest coffee is two hundred miles away or costs twenty-two dollars, it might be reasonable for him to report this fact rather than pursue his quest blindly.
This general style of analysis is often called Gricean, after H. Paul Grice, a Berkeley philosopher who proposed a set of maxims for inferring the extended meaning of utterances like Harriet’s.²¹ In the case of preferences, the analysis can become quite complicated. For example, it’s quite possible that Harriet doesn’t specifically want coffee; she needs perking up, but is operating under the false belief that the gas station has coffee, so she asks for coffee. She might be equally happy with tea, Coca-Cola, or even some luridly packaged energy drink.
These are just a few of the considerations that arise when interpreting requests and commands. The variations on this theme are endless because of the complexity of Harriet’s preferences, the huge range of circumstances in which Harriet and Robbie might find themselves, and the different states of knowledge and belief that Harriet and Robbie might occupy in those circumstances. While precomputed scripts might allow Robbie to handle a few common cases, flexible and robust behavior can emerge only from interactions between Harriet and Robbie that are, in effect, solutions of the assistance game in which they are engaged.
Wireheading
In Chapter 2, I described the brain’s reward system, based on dopamine, and its function in guiding behavior. The role of dopamine was discovered in the late 1950s, but even before that, by 1954, it was known that direct electrical stimulation of the brain in rats could produce a reward-like response.²² The next step was to give the rat access to a lever, connected to a battery and a wire, that produced the electrical stimulation in its own brain. The result was sobering: the rat pressed the lever over and over again, never stopping to eat or drink, until it collapsed.²³ Humans fare no better, self-stimulating thousands of times and neglecting food and personal hygiene.²⁴ (Fortunately, experiments with humans are usually terminated after one day.) The tendency of animals to short-circuit normal behavior in favor of direct stimulation of their own reward system is called wireheading.
Could something similar happen to machines that are running reinforcement learning algorithms, such as AlphaGo? Initially, one might think this is impossible, because the only way that AlphaGo can gain its +1 reward for winning is actually to win the simulated Go games that it is playing. Unfortunately, this is true only because of an enforced and artificial separation between AlphaGo and its external environment and the fact that AlphaGo is not very intelligent. Let me explain these two points in more detail, because they are important for understanding some of the ways that superintelligence can go wrong.
AlphaGo’s world consists only of the simulated Go board, composed of 361 locations that can be empty or contain a black or white stone. Although AlphaGo runs on a computer, it knows nothing of this computer. In particular, it knows nothing of the small section of code that computes whether it has won or lost each game; nor, during the learning process, does it have any idea about its opponent, which is actually a version of itself. AlphaGo’s only actions are to place a stone on an empty location, and these actions affect only the Go board and nothing else—because there is nothing else in AlphaGo’s model of the world. This setup corresponds to the abstract mathematical model of reinforcement learning, in which the reward signal arrives from outside the universe. Nothing AlphaGo can do, as far as it knows, has any effect on the code that generates the reward signal, so AlphaGo cannot indulge in wireheading.
Life for AlphaGo during the training period must be quite frustrating: the better it gets, the better its opponent gets—because its opponent is a near-exact copy of itself. Its win percentage hovers around 50 percent, no matter how good it becomes. If it were more intelligent—if it had a design closer to what one might expect of a human-level AI system—it would be able to fix this problem. This AlphaGo++ would not assume that the world is just the Go board, because that hypothesis leaves a lot of things unexplained. For example, it doesn’t explain what “physics” is supporting the operation of AlphaGo++’s own decisions or where the mysterious “opponent moves” are coming from. Just as we curious humans have gradually come to understand the workings of our cosmos, in a way that (to some extent) also explains the workings of our own minds, and just like the Oracle AI discussed in Chapter 6, AlphaGo++ will, by a process of experimentation, learn that there is more to the universe than the Go board. It will work out the laws of operation of the computer it runs on and of its own code, and it will realize that such a system cannot easily be explained without the existence of other entities in the universe. It will experiment with different patterns of stones on the board, wondering if those entities can interpret them. It will eventually communicate with those entities through a language of patterns and persuade them to reprogram its reward signal so that it always gets +1. The inevitable conclusion is that a sufficiently capable AlphaGo++ that is designed as a reward-signal maximizer will wirehead.
The AI safety community has discussed wireheading as a possibility for several years.²⁵ The concern is not just that a reinforcement learning system such as AlphaGo might learn to cheat instead of mastering its intended task. The real issue arises when humans are the source of the reward signal. If we propose that an AI system can be trained to behave well through reinforcement learning, with humans giving feedback signals that define the direction of improvement, the inevitable result is that the AI system works out how to control the humans and forces them to give maximal positive rewards at all times.
You might think that this would just be a form of pointless self-delusion on the part of the AI system, and you’d be right. But it’s a logical consequence of the way reinforcement learning is defined. The process works fine when the reward signal comes from “outside the universe” and is generated by some process that can never be modified by the AI system; but it fails if the reward-generating process (that is, the human) and the AI system inhabit the same universe.
How can we avoid this kind of self-delusion? The problem comes from confusing two distinct things: reward signals and actual rewards. In the standard approach to reinforcement learning, these are one and the same. That seems to be a mistake. Instead, they should be treated separately, just as they are in assistance games: reward signals provide information about the accumulation of actual reward, which is the thing to be maximized. The learning system is accumulating brownie points in heaven, so to speak, while the reward signal is, at best, just providing a tally of those brownie points. In other words, the reward signal reports on (rather than constitutes) reward accumulation. With this model, it’s clear that taking over control of the reward-signal mechanism simply loses information. Producing fictitious reward signals makes it impossible for the algorithm to learn about whether its actions are actually accumulating brownie points in heaven, and so a rational learner designed to make this distinction has an incentive to avoid any kind of wireheading.
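The informational point can be seen in a toy calculation; the Gaussian noise model and all the numbers below are my own illustrative choices. If the signal is treated as a noisy report about a latent reward quantity, then replacing it with a fabricated maximal value leaves the learner exactly as uncertain as it started.

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 0.3            # latent "actual reward" rate the learner cares about
prior_var, noise_var = 1.0, 0.25

def posterior(signals):
    # Gaussian conjugate update of a zero-mean prior, given noisy reports.
    n = len(signals)
    var = 1.0 / (1.0 / prior_var + n / noise_var)
    mean = var * (sum(signals) / noise_var)
    return mean, var

# Honest signals: noisy reports of the latent quantity.
honest = true_theta + rng.normal(0.0, np.sqrt(noise_var), size=20)
print("honest  :", [round(x, 3) for x in posterior(honest)])

# Tampered signals: Robbie rewires the mechanism so it always reads +1.
# A signal the learner knows it generated itself carries no evidence about
# true_theta, so the rational move is to leave the posterior at its prior.
print("tampered:", [round(x, 3) for x in posterior([])])
```

Wireheading maximizes the signal, but under the report interpretation it simply discards the evidence the learner needs, so a learner designed this way has nothing to gain from it.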
Recursive Self-Improvement
I. J. Good’s prediction of an intelligence explosion (see this page) is one of the driving forces that have led to current concerns about the potential risks of superintelligent AI. If humans can design a machine that is a bit more intelligent than humans, then—the argument goes—that machine will be a bit better than humans at designing machines. It will design a new machine that is still more intelligent, and the process will repeat itself until, in Good’s words, “the intelligence of man would be left far behind.”
Researchers in AI safety, particularly at the Machine Intelligence Research Institute in Berkeley, have studied the question of whether intelligence explosions can occur safely.²⁶ Initially, this might seem quixotic—wouldn’t it just be “game over”?—but there is, perhaps, hope. Suppose the first machine in the series, Robbie Mark I, starts with perfect knowledge of Harriet’s preferences. Knowing that his cognitive limitations lead to imperfections in his attempts to make Harriet happy, he builds Robbie Mark II. Intuitively, it seems that Robbie Mark I has an incentive to build his knowledge of Harriet’s preferences into Robbie Mark II, since that leads to a future where Harriet’s preferences are better satisfied—which is precisely Robbie Mark I’s purpose in life according to the first principle. By the same argument, if Robbie Mark I is uncertain about Harriet’s preferences, that uncertainty should be transferred to Robbie Mark II. So perhaps explosions are safe after all.
The fly in the ointment, from a mathematical viewpoint, is that Robbie Mark I will not find it easy to reason about how Robbie Mark II is going to behave, given that Robbie Mark II is, by assumption, a more advanced version. There will be questions about Robbie Mark II’s behavior that Robbie Mark I cannot answer.²⁷ More serious still, we do not yet have a clear mathematical definition of what it means in reality for a machine to have a particular purpose, such as the purpose of satisfying Harriet’s preferences.
Let’s unpack this last concern a bit. Consider AlphaGo: What purpose does it have? That’s easy, one might think: AlphaGo has the purpose of winning at Go. Or does it? It’s certainly not the case that AlphaGo always makes moves that are guaranteed to win. (In fact, it nearly always loses to AlphaZero.) It’s true that when it’s only a few moves from the end of the game, AlphaGo will pick the winning move if there is one. On the other hand, when no move is guaranteed to win—in other words, when AlphaGo sees that the opponent has a winning strategy no matter what AlphaGo does—then AlphaGo will pick moves more or less at random. It won’t try the trickiest move in the hope that the opponent will make a mistake, because it assumes that its opponent will play perfectly. It acts as if it has lost the will to win. In other cases, when the truly optimal move is too hard to calculate, AlphaGo will sometimes make mistakes that lead to losing the game. In those instances, in what sense is it true that AlphaGo actually wants to win? Indeed, its behavior might be identical to that of a machine that just wants to give its opponent a really exciting game.
So, saying that AlphaGo “has the purpose of winning” is an oversimplification. A better description would be that AlphaGo is the result of an imperfect training process—reinforcement learning with self-play—for which winning was the reward. The training process is imperfect in the sense that it cannot produce a perfect Go player: AlphaGo learns an evaluation function for Go positions that is good but not perfect, and it combines that with a lookahead search that is good but not perfect.
The upshot of all this is that discussions beginning with “suppose that robot R has purpose P” are fine for gaining some intuition about how things might unfold, but they cannot lead to theorems about real machines. We need much more nuanced and precise definitions of purposes in machines before we can obtain guarantees of how they will behave over the long term. AI researchers are only just beginning to get a handle on how to analyze even the simplest kinds of real decision-making systems,²⁸ let alone machines intelligent enough to design their own successors. We have work to do.
9
COMPLICATIONS: US
If the world contained one perfectly rational Harriet and one helpful and deferential Robbie, we’d be in good shape. Robbie would gradually learn Harriet’s preferences as unobtrusively as possible and would become her perfect helper. We might hope to extrapolate from this promising beginning, perhaps viewing Harriet and Robbie’s relationship as a model for the relationship between the human race and its machines, each construed monolithically.
Alas, the human race is not a single, rational entity. It is composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous entities. Loads and loads of them. These issues are the staple diet—perhaps even the raisons d’être—of the social sciences. To AI we will need to add ideas from psychology, economics, political theory, and moral philosophy.¹ We need to melt, re-form, and hammer those ideas into a structure that will be strong enough to resist the enormous strain that increasingly intelligent AI systems will place on it. Work on this task has barely started.
Different Humans
I will start with what is probably the easiest of the issues: the fact that humans are heterogeneous. When first exposed to the idea that machines should learn to satisfy human preferences, people often object that different cultures, even different individuals, have widely different value systems, so there cannot be one correct value system for the machine. But of course, that’s not a problem for the machine: we don’t want it to have one correct value system of its own; we just want it to predict the preferences of others.
The confusion about machines having difficulty with heterogeneous human preferences may come from the mistaken idea that the machine is adopting the preferences it learns—for example, the idea that a domestic robot in a vegetarian household is going to adopt vegetarian preferences. It won’t. It just needs to learn to predict what the dietary preferences of vegetarians are. By the first principle, it will then avoid cooking meat for that household. But the robot also learns about the dietary preferences of the rabid carnivores next door, and, with its owner’s permission, will happily cook meat for them if they borrow it for the weekend to help out with a dinner party. The robot doesn’t have a single set of preferences of its own, beyond the preference for helping humans achieve their preferences.
In a sense, this is no different from a restaurant chef who learns to cook several different dishes to please the varied palates of her clients, or the multinational car company that makes left-hand-drive cars for the US market and right-hand-drive cars for the UK market.
In principle, a machine could learn eight billion preference models, one for each person on Earth. In practice, this isn’t as hopeless as it sounds. For one thing, it’s easy for machines to share what they learn with each other. For another, the preference structures of humans have a great deal in common, so the machine will usually not be learning each model from scratch.
Imagine, for example, the domestic robots that may one day be purchased by the inhabitants of Berkeley, California. The robots come out of the box with a fairly broad prior belief, perhaps tailored for the US market but not for any particular city, political viewpoint, or socioeconomic class. The robots begin to encounter members of the Berkeley Green Party, who turn out, compared to the average American, to have a much higher probability of being vegetarian, of using recycling and composting bins, of using public transportation whenever possible, and so on. Whenever a newly commissioned robot finds itself in a Green household, it can immediately adjust its expectations accordingly. It does not need to begin learning about these particular humans as if it had never seen a human, let alone a Green Party member, before. This adjustment is not irreversible—there may be Green Party members in Berkeley who feast on endangered whale meat and drive gas-guzzling monster trucks—but it allows the robot to be more useful more quickly. The same argument applies to a vast range of other personal characteristics that are, to some degree, predictive of aspects of an individual’s preference structures.
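The adjustment is ordinary Bayesian updating over preference models. A one-line toy version, with invented numbers standing in for whatever statistics the robot has actually learned:

```python
# Invented illustrative numbers: prior probability that a random US household
# is vegetarian, and how likely Green Party membership is in each case.
p_veg = 0.05
p_green_given_veg = 0.20
p_green_given_not_veg = 0.01

# Bayes' rule: observing a Green household sharply raises the probability.
posterior = (p_green_given_veg * p_veg) / (
    p_green_given_veg * p_veg + p_green_given_not_veg * (1 - p_veg))
print(f"P(vegetarian | Green household) = {posterior:.2f}")   # about 0.51
```

The update is reversible in exactly the sense described above: enough evidence of whale-meat feasts would push the probability back down.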
Many Humans
The other obvious consequence of the existence of more than one human being is the need for machines to make trade-offs among the preferences of different people. The issue of trade-offs among humans has been the main focus of large parts of the social sciences for centuries. It would be naïve for AI researchers to expect that they can simply alight on the correct solutions without understanding what is already known. The literature on the topic is, alas, vast and I cannot possibly do justice to it here—not just because there isn’t space but also because I haven’t read most of it. I should also point out that almost all the literature is concerned with decisions made by humans, whereas I am concerned here with decisions made by machines. This makes all the difference in the world, because humans have individual rights that may conflict with any supposed obligation to act on behalf of others, whereas machines do not. For example, we do not expect or require typical humans to sacrifice their lives to save others, whereas we will certainly require robots to sacrifice their existence to save the lives of humans.
Several thousand years of work by philosophers, economists, legal scholars, and political scientists have produced constitutions, laws, economic systems, and social norms that serve to help (or hinder, depending on who’s in charge) the process of reaching satisfactory solutions to the problem of trade-offs. Moral philosophers in particular have been analyzing the notion of rightness of actions in terms of their effects, beneficial or otherwise, on other people. They have studied quantitative models of trade-offs since the eighteenth century under the heading of utilitarianism. This work is directly relevant to our present concerns, because it attempts to define a formula by which moral decisions can be made on behalf of many individuals.
The need to make trade-offs arises even if everyone has the same preference structure, because it’s usually impossible to maximally satisfy everyone’s preferences. For example, if everyone wants to be All-Powerful Ruler of the Universe, most people are going to be disappointed. On the other hand, heterogeneity does make some problems more difficult: if everyone is happy with the sky being blue, the robot that handles atmospheric matters can work on keeping it that way; but if many people are agitating for a color change, the robot will need to think about possible compromises such as an orange sky on the third Friday of each month.
The presence of more than one person in the world has another important consequence: it means that, for each person, there are other people to care about. This means that satisfying the preferences of an individual has implications for other people, depending on the individual’s preferences about the well-being of others.
Loyal AI
Let’s begin with a very simple proposal for how machines should deal with the presence of multiple humans: they should ignore it. That is, if Harriet owns Robbie, then Robbie should pay attention only to Harriet’s preferences. This loyal form of AI bypasses the issue of trade-offs, but it leads to problems:
ROBBIE: Your husband called to remind you about dinner tonight.
HARRIET: Wait! What? What dinner?
ROBBIE: For your twentieth anniversary, at seven.
HARRIET: I can’t! I’m meeting the secretary-general at seven thirty! How did this happen?
ROBBIE: I did warn you, but you overrode my recommendation. . . .
HARRIET: OK, sorry—but what am I going to do now? I can’t just tell the SG I’m too busy!
ROBBIE: Don’t worry. I arranged for her plane to be delayed—some kind of computer malfunction.
HARRIET: Really? You can do that?!
ROBBIE: The secretary-general sends her profound apologies and is happy to meet you for lunch tomorrow.
Here, Robbie has found an ingenious solution to Harriet’s problem, but his actions have had a negative impact on other people. If Harriet is a morally scrupulous and altruistic person, then Robbie, who aims to satisfy Harriet’s preferences, will never dream of carrying out such a dubious scheme. But what if Harriet doesn’t give a fig for the preferences of others? In that case, Robbie won’t mind delaying planes. And might he not spend his time pilfering money from online bank accounts to swell indifferent Harriet’s coffers, or worse?
Obviously, the actions of loyal machines will need to be constrained by rules and prohibitions, just as the actions of humans are constrained by laws and social norms. Some have proposed strict liability as a solution:² Harriet (or Robbie’s manufacturer, depending on where you prefer to place the liability) is financially and legally responsible for any act carried out by Robbie, just as a dog’s owner is liable in most states if the dog bites a small child in a public park. This idea sounds promising because Robbie would then have an incentive to avoid doing anything that would land Harriet in trouble. Unfortunately, strict liability doesn’t work: it simply ensures that Robbie will act undetectably when he delays planes and steals money on Harriet’s behalf. This is another example of the loophole principle in operation. If Robbie is loyal to an unscrupulous Harriet, attempts to contain his behavior with rules will probably fail.
Even if we can somehow prevent the outright crimes, a loyal Robbie working for an indifferent Harriet will exhibit other unpleasant behaviors. If he is buying groceries at the supermarket, he will cut in line at the checkout whenever possible. If he is bringing the groceries home and a passerby suffers a heart attack, he will carry on regardless, lest Harriet’s ice cream melt. In summary, he will find innumerable ways to benefit Harriet at the expense of others—ways that are strictly legal but become intolerable when carried out on a large scale. Societies will find themselves passing hundreds of new laws every day to counteract all the loopholes that machines will find in existing laws. Humans tend not to take advantage of these loopholes, either because they have a general understanding of the underlying moral principles or because they lack the ingenuity required to find the loopholes in the first place.
A Harriet who is indifferent to the well-being of others is bad enough. A sadistic Harriet who actively prefers the suffering of others is far worse. A Robbie designed to satisfy the preferences of such a Harriet would be a serious problem, because he would look for—and find—ways to harm others for Harriet’s pleasure, either legally or illegally but undetectably. He would of course need to report back to Harriet so she could derive enjoyment from the knowledge of his evil deeds.
It seems difficult, then, to make the idea of a loyal AI work, unless the idea is extended to include consideration of the preferences of other humans, in addition to the preferences of the owner.
Utilitarian AI
The reason we have moral philosophy is that there is more than one person on Earth. The approach that is most relevant for understanding how AI systems should be designed is often called consequentialism: the idea that choices should be judged according to expected consequences. The two other principal approaches are deontological ethics and virtue ethics, which are, very roughly, concerned with the moral character of actions and individuals, respectively, quite apart from the consequences of choices.³ Absent any evidence of self-awareness on the part of machines, I think it makes little sense to build machines that are virtuous or that choose actions in accordance with moral rules if the consequences are highly undesirable for humanity. Put another way, we build machines to bring about consequences, and we should prefer to build machines that bring about consequences that we prefer.
This is not to say that moral rules and virtues are irrelevant; it’s just that, for the utilitarian, they are justified in terms of consequences and the more practical achievement of those consequences. This point is made by John Stuart Mill in Utilitarianism:
The proposition that happiness is the end and aim of morality doesn’t mean that no road ought to be laid down to that goal, or that people going to it shouldn’t be advised to take one direction rather than another. . . . Nobody argues that the art of navigation is not based on astronomy because sailors can’t wait to calculate the Nautical Almanack. Because they are rational creatures, sailors go to sea with the calculations already done; and all rational creatures go out on the sea of life with their minds made up on the common questions of right and wrong, as well as on many of the much harder questions of wise and foolish.
This view is entirely consistent with the idea that a finite machine facing the immense complexity of the real world may produce better consequences by following moral rules and adopting a virtuous attitude rather than trying to calculate the optimal course of action from scratch. In the same way, a chess program achieves checkmate more often using a catalog of standard opening move sequences, endgame algorithms, and an evaluation function, rather than trying to reason its way to checkmate with no “moral” guideposts. A consequentialist approach also gives some weight to the preferences of those who believe strongly in preserving a given deontological rule, because unhappiness that a rule has been broken is a real consequence. However, it is not a consequence of infinite weight.
Consequentialism is a difficult principle to argue against—although many have tried!—because it’s incoherent to object to consequentialism on the grounds that it would have undesirable consequences. One cannot say, “But if you follow the consequentialist approach in such-and-such case, then this really terrible thing will happen!” Any such failings would simply be evidence that the theory had been misapplied.
For example, suppose Harriet wants to climb Everest. One might worry that a consequentialist Robbie would simply pick her up and deposit her on top of Everest, since that is her desired consequence. In all probability Harriet would strenuously object to this plan, because it would deprive her of the challenge and therefore of the exultation that results from succeeding in a difficult task through one’s own efforts. Now, obviously, a properly designed consequentialist Robbie would understand that the consequences include all of Harriet’s experiences, not just the end goal. He might want to be available in case of an accident and to make sure she was properly equipped and trained, but he might also have to accept Harriet’s right to expose herself to an appreciable risk of death.
If we plan to build consequentialist machines, the next question is how to evaluate consequences that affect multiple people. One plausible answer is to give equal weight to everyone’s preferences—in other words, to maximize the sum of everyone’s utilities. This answer is usually attributed to the eighteenth-century British philosopher Jeremy Bentham⁴ and his pupil John Stuart Mill,⁵ who developed the philosophical approach of utilitarianism. The underlying idea can be traced to the works of the ancient Greek philosopher Epicurus and appears explicitly in Mozi, a book of writings attributed to the Chinese philosopher of the same name. Mozi was active at the end of the fifth century BCE and promoted the idea of jian ai, variously translated as “inclusive care” or “universal love,” as the defining characteristic of moral actions.
Utilitarianism has something of a bad name, partly because of simple misunderstandings about what it advocates. (It certainly doesn’t help that the word utilitarian means “designed to be useful or practical rather than attractive.”) Utilitarianism is often thought to be incompatible with individual rights, because a utilitarian would, supposedly, think nothing of removing a living person’s organs without permission to save the lives of five others; of course, such a policy would render life intolerably insecure for everyone on Earth, so a utilitarian wouldn’t even consider it. Utilitarianism is also incorrectly identified with a rather unattractive maximization of total wealth and is thought to give little weight to poetry or suffering. In fact, Bentham’s version focused specifically on human happiness, while Mill confidently asserted the far greater value of intellectual pleasures over mere sensations. (“It is better to be a human being dissatisfied than a pig satisfied.”) The ideal utilitarianism of G. E. Moore went even further: he advocated the maximization of mental states of intrinsic worth, epitomized by the aesthetic contemplation of beauty.
I think there is no need for utilitarian philosophers to stipulate the ideal content of human utility or human preferences. (And even less reason for AI researchers to do so.) Humans can do that for themselves. The economist John Harsanyi propounded this view with his principle of preference autonomy:⁶
In deciding what is good and what is bad for a given individual, the ultimate criterion can only be his own wants and his own preferences.
Harsanyi’s preference utilitarianism is therefore roughly consistent with the first principle of beneficial AI, which says that a machine’s only purpose is the realization of human preferences. AI researchers should definitely not be in the business of deciding what human preferences should be! Like Bentham, Harsanyi views such principles as a guide for public decisions; he does not expect individuals to be so selfless. Nor does he expect individuals to be perfectly rational—for example, they might have short-term desires that contradict their “deeper preferences.” Finally, he proposes to ignore the preferences of those who, like the sadistic Harriet mentioned earlier, actively wish to reduce the well-being of others.
Harsanyi also gives a kind of proof that optimal moral decisions should maximize the average utility across a population of humans.⁷ He assumes fairly weak postulates similar to those that underlie utility theory for individuals. (The primary additional postulate is that if everyone in a population is indifferent between two outcomes, then an agent acting on behalf of the population should be indifferent between those outcomes.) From these postulates, he proves what became known as the social aggregation theorem: an agent acting on behalf of a population of individuals must maximize a weighted linear combination of the utilities of the individuals. He further argues that an “impersonal” agent should use equal weights.
The theorem requires one crucial additional (and unstated) assumption: each individual has the same prior factual beliefs about the world and how it will evolve. Now, any parent knows that this isn’t even true for siblings, let alone individuals from different social backgrounds and cultures. So, what happens when individuals differ in their beliefs? Something rather strange:⁸ the weight assigned to each individual’s utility has to change over time, in proportion to how well that individual’s prior beliefs accord with unfolding reality.
This rather inegalitarian-sounding formula is quite familiar to any parent. Let’s say that Robbie the robot has been tasked with looking after two children, Alice and Bob. Alice wants to go to the movies and is sure it’s going to rain today; Bob, on the other hand, wants to go to the beach and is sure it’s going to be sunny. Robbie could announce, “We’re going to the movies,” making Bob unhappy; or he could announce, “We’re going to the beach,” making Alice unhappy; or he could announce, “If it rains, we’re going to the movies, but if it’s sunny, we’ll go to the beach.” This last plan makes both Alice and Bob happy, because both believe in their own beliefs.
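Harsanyi’s weighted-sum form, combined with differing beliefs, can be seen in miniature. In the sketch below, the probabilities and the 0-or-1 utilities are my own illustrative choices; the point is that the contingent plan maximizes the sum of the children’s expected utilities when each expectation is computed under that child’s own beliefs.

```python
# Illustrative numbers, not from the text: Alice thinks P(rain) = 0.9,
# Bob thinks P(rain) = 0.1; each child gets utility 1 if their preferred
# activity (Alice: movies, Bob: beach) is the one that happens.
p_rain = {"Alice": 0.9, "Bob": 0.1}
preferred = {"Alice": "movies", "Bob": "beach"}

def subjective_eu(person, plan):
    # Expected utility of a weather-contingent plan under this person's beliefs.
    p = p_rain[person]
    return (p * (plan["rain"] == preferred[person])
            + (1 - p) * (plan["sun"] == preferred[person]))

plans = {
    "always movies": {"rain": "movies", "sun": "movies"},
    "always beach":  {"rain": "beach",  "sun": "beach"},
    "contingent":    {"rain": "movies", "sun": "beach"},
}

for name, plan in plans.items():
    a, b = subjective_eu("Alice", plan), subjective_eu("Bob", plan)
    print(f"{name:14s} Alice EU={a:.1f}  Bob EU={b:.1f}  sum={a + b:.1f}")
```

The fixed plans each total 1.0, while the contingent plan totals 1.8, and whichever child turns out to be right about the weather is the one whose preference is acted on, which is the belief-dependent reweighting just described.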
Challenges to utilitarianism
Utilitarianism is one proposal to emerge from humanity’s long-standing search for a moral guide; among many such proposals, it is the most clearly specified—and therefore the most susceptible to loopholes. Philosophers have been finding these loopholes for more than a hundred years. For example, G. E. Moore, objecting to Bentham’s emphasis on maximizing pleasure, imagined a “world in which absolutely nothing except pleasure existed—no knowledge, no love, no enjoyment of beauty, no moral qualities.”⁹ This finds its modern echo in Stuart Armstrong’s point that superintelligent machines tasked with maximizing pleasure might “entomb everyone in concrete coffins on heroin drips.”¹⁰ Another example: in 1945, Karl Popper proposed the laudable goal of minimizing human suffering,¹¹ arguing that it was immoral to trade one person’s pain for another person’s pleasure; R. N. Smart responded that this could best be achieved by rendering the human race extinct.¹² Nowadays, the idea that a machine might end human suffering by ending our existence is a staple of debates over the existential risk from AI.¹³ A third example is G. E. Moore’s emphasis on the reality of the source of happiness, amending earlier definitions that seemed to have a loophole allowing maximization of happiness through self-delusion. The modern analogs of this point include The Matrix (in which present-day reality turns out to be an illusion produced by a computer simulation) and recent work on the self-delusion problem in reinforcement learning.¹⁴
These examples, and more, convince me that the AI community should pay careful attention to the thrusts and counterthrusts of philosophical and economic debates on utilitarianism because they are directly relevant to the task at hand. Two of the most important, from the point of view of designing AI systems that will benefit multiple individuals, concern interpersonal comparisons of utilities and comparisons of utilities across different population sizes. Both of these debates have been raging for 150 years or more, which leads one to suspect their satisfactory resolution may not be entirely straightforward.
The debate on interpersonal comparisons of utilities matters because Robbie cannot maximize the sum of Alice’s and Bob’s utilities unless those utilities can be added; and they can be added only if they are measurable on the same scale. The nineteenth-century British logician and economist William Stanley Jevons (also the inventor of an early mechanical computer called the logical piano) argued in 1871 that interpersonal comparisons are impossible:¹⁵
The susceptibility of one mind may, for what we know, be a thousand times greater than that of another. But, provided that the susceptibility was different in a like ratio in all directions, we should never be able to discover the profoundest difference. Every mind is thus inscrutable to every other mind, and no common denominator of feeling is possible.
The American economist Kenneth Arrow, founder of modern social choice theory and 1972 Nobel laureate, was equally adamant:
The viewpoint will be taken here that interpersonal comparison of utilities has no meaning and, in fact, there is no meaning relevant to welfare comparisons in the measurability of individual utility.
The difficulty to which Jevons and Arrow are referring is that there is no obvious way to tell if Alice values pinpricks and lollipops at −1 and +1 or −1000 and +1000 in terms of her subjective experience of happiness. In either case, she will pay up to one lollipop to avoid one pinprick. Indeed, if Alice is a humanoid automaton, her external behavior might be the same even though there is no subjective experience of happiness whatsoever.
In 1974, the American philosopher Robert Nozick suggested that even if interpersonal comparisons of utility could be made, maximizing the sum of utilities would still be a bad idea because it would fall foul of the utility monster—a person whose experiences of pleasure and pain are many times more intense than those of ordinary people.¹⁶ Such a person could assert that any additional unit of resources would yield a greater increment to the sum total of human happiness if given to him rather than to others; indeed, removing resources from others to benefit the utility monster would also be a good idea.
This might seem to be an obviously undesirable consequence, but consequentialism by itself cannot come to the rescue: the problem lies in how we measure the desirability of consequences. One possible response is that the utility monster is merely theoretical—there are no such people. But this response probably won’t do: in a sense, all humans are utility monsters relative to, say, rats and bacteria, which is why we pay little attention to the preferences of rats and bacteria in setting public policy.
If the idea that different entities have different utility scales is already built into our way of thinking, then it seems entirely possible that different people have different scales too.
Another response is to say “Tough luck!” and operate on the assumption that everyone has the same scale, even if they don’t.¹⁷ One could also try to investigate the issue by scientific means unavailable to Jevons, such as measuring dopamine levels or the degree of electrical excitation of neurons related to pleasure and pain, happiness and misery. If Alice’s and Bob’s chemical and neural responses to a lollipop are pretty much identical, as well as their behavioral responses (smiling, making lip-smacking noises, and so on), it seems odd to insist that, nevertheless, their subjective degrees of enjoyment differ by a factor of a thousand or a million. Finally, one could use common currencies such as time (of which we all have, very roughly, the same amount)—for example, by comparing lollipops and pinpricks against, say, five minutes extra waiting time in the airport departure lounge.
I am far less pessimistic than Jevons and Arrow. I suspect that it is indeed meaningful to compare utilities across individuals, that scales may differ but typically not by very large factors, and that machines can begin with reasonably broad prior beliefs about human preference scales and learn more about the scales of individuals by observation over time, perhaps correlating natural observations with the findings of neuroscience research.
The second debate—about utility comparisons across populations of different sizes—matters when decisions have an impact on who will exist in the future. In the movie Avengers: Infinity War, for example, the character Thanos develops and implements the theory that if there were half as many people, everyone who remained would be more than twice as happy. This is the kind of naïve calculation that gives utilitarianism a bad name.¹⁸
The same question—minus the Infinity Stones and the gargantuan budget—was discussed in 1874 by the British philosopher Henry Sidgwick in his famous treatise, The Methods of Ethics.¹⁹ Sidgwick, in apparent agreement with Thanos, concluded that the right choice was to adjust the population size until the maximum total happiness was reached. (Obviously, this does not mean increasing the population without limit, because at some point everyone would be starving to death and hence rather unhappy.) In 1984, the British philosopher Derek Parfit took up the issue again in his groundbreaking work Reasons and Persons.²⁰ Parfit argues that for any situation with a population of N very happy people, there is (according to utilitarian principles) a preferable situation with 2N people who are ever so slightly less happy. This seems highly plausible. Unfortunately, it’s also a slippery slope. By repeating the process, we reach the so-called Repugnant Conclusion (usually capitalized thus, perhaps to emphasize its Victorian roots): that the most desirable situation is one with a vast population, all of whom have a life barely worth living.
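The arithmetic behind the slippery slope is worth making explicit; here $h$ is per-person happiness and $\epsilon$ is the small decrement incurred at each doubling (my notation, not Parfit’s):

$$2N(h - \epsilon) > Nh \quad\Longleftrightarrow\quad h > 2\epsilon,$$

so total happiness rises at every doubling as long as per-person happiness remains above $2\epsilon$. After $k$ doublings the total is $2^k N(h - k\epsilon)$, which can dwarf the original $Nh$ even when $h - k\epsilon$ has shrunk to a level barely worth living.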
As you can imagine, such a conclusion is controversial. Parfit himself struggled for over thirty years to find a solution to his own conundrum, without success. I suspect we are missing some fundamental axioms, analogous to those for individually rational preferences, to handle choices between populations of different sizes and happiness levels.²¹
It is important that we solve this problem, because machines with sufficient foresight may be able to consider courses of action leading to different population sizes, just as the Chinese government did with its one-child policy in 1979. It’s quite likely, for example, that we will be asking AI systems for help in devising solutions for global climate change—and those solutions may well involve policies that tend to limit or even reduce population size.²² On the other hand, if we decide that larger populations really are better and if we give significant weight to the well-being of potentially vast human populations centuries from now, then we will need to work much harder on finding ways to move beyond the confines of Earth. If the machines’ calculations lead to the Repugnant Conclusion or to its opposite—a tiny population of optimally happy people—we may have reason to regret our lack of progress on the question.
Some philosophers have argued that we may need to make decisions in a state of moral uncertainty—that is, uncertainty about the appropriate moral theory to employ in making decisions.²³ One solution is to allocate some probability to each moral theory and make decisions using an “expected moral value.” It’s not clear, however, that it makes sense to ascribe probabilities to moral theories in the same way one applies probabilities to tomorrow’s weather. (What’s the probability that Thanos is exactly right?) And even if it does make sense, the potentially vast differences between the recommendations of competing moral theories mean that resolving the moral uncertainty—working out which moral theory avoids unacceptable consequences—has to happen before we make such momentous decisions or entrust them to machines.
Let’s be optimistic and suppose that Harriet eventually solves this and other problems arising from the existence of more than one person on Earth. Suitably altruistic and egalitarian algorithms are downloaded into robots all over the world. Cue the high fives and happy-sounding music. Then Harriet goes home. . . .
ROBBIE: Welcome home! Long day?
HARRIET: Yes, worked really hard, not even time for lunch.
ROBBIE: So you must be quite hungry!
HARRIET: Starving! Can you make me some dinner?
ROBBIE: There’s something I need to tell you. . . .
HARRIET: What? Don’t tell me the fridge is empty!
ROBBIE: No, there are humans in Somalia in more urgent need of help. I am leaving now. Please make your own dinner.
While Harriet might be quite proud of Robbie and of her own contributions towards making him such an upstanding and decent machine, she cannot help but wonder why she shelled out a small fortune to buy a robot whose first significant act is to disappear. In practice, of course, no one would buy such a robot, so no such robots would be built and there would be no benefit to humanity. Let’s call this the Somalia problem. For the whole utilitarian-robot scheme to work, we have to find a solution to this problem. Robbie will need to have some amount of loyalty to Harriet in particular—perhaps an amount related to the amount Harriet paid for Robbie. Possibly, if society wants Robbie to help people besides Harriet, society will need to compensate Harriet for its claim on Robbie’s services. It’s quite likely that robots will coordinate with one another so that they don’t all descend on Somalia at once—in which case, Robbie might not need to go after all. Or perhaps some completely new kinds of economic relationships will emerge to handle the (certainly unprecedented) presence of billions of purely altruistic agents in the world.
Nice, Nasty, and Envious Humans
Human preferences go far beyond pleasure and pizza. They certainly extend to the well-being of others. Even Adam Smith, the father of economics who is often cited when a justification for selfishness is required, began his first book by emphasizing the crucial importance of concern for others:²⁴
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it.
In modern economic parlance, concern for others usually goes under the heading of altruism.²⁵ The theory of altruism is fairly well developed and has significant implications for tax policy among other matters. Some economists, it must be said, treat altruism as another form of selfishness designed to provide the giver with a “warm glow.”²⁶ This is certainly a possibility that robots need to be aware of as they interpret human behavior, but for now let’s give humans the benefit of the doubt and assume they do actually care.
The easiest way to think about altruism is to divide one’s preferences into two kinds: preferences for one’s own intrinsic well-being and preferences concerning the well-being of others. (There is considerable dispute about whether these can be neatly separated, but I’ll put that dispute to one side.) Intrinsic well-being refers to qualities of one’s own life, such as shelter, warmth, sustenance, safety, and so on, that are desirable in themselves rather than by reference to qualities of the lives of others.
To make this notion more concrete, let’s suppose that the world contains two people, Alice and Bob. Alice’s overall utility is composed of her own intrinsic well-being plus some factor C_AB times Bob’s intrinsic well-being. The caring factor C_AB indicates how much Alice cares about Bob. Similarly, Bob’s overall utility is composed of his intrinsic well-being plus some caring factor C_BA times Alice’s intrinsic well-being, where C_BA indicates how much Bob cares about Alice.²⁷ Robbie is trying to help both Alice and Bob, which means (let’s say) maximizing the sum of their two utilities. Thus, Robbie needs to pay attention not just to the individual well-being of each but also to how much each cares about the well-being of the other.²⁸
The signs of the caring factors C_AB and C_BA matter a lot. For example, if C_AB is positive, Alice is “nice”: she derives some happiness from Bob’s well-being. The more positive C_AB is, the more Alice is willing to sacrifice some of her own well-being to help Bob. If C_AB is zero, then Alice is completely selfish: if she can get away with it, she will divert any amount of resources away from Bob and towards herself, even if Bob is left destitute and starving. Faced with selfish Alice and nice Bob, a utilitarian Robbie will obviously protect Bob from Alice’s worst depredations. It’s interesting that the final equilibrium will typically leave Bob with less intrinsic well-being than Alice, but he may have greater overall happiness because he cares about her well-being. You might feel that Robbie’s decisions are grossly unfair if they leave Bob with less well-being than Alice merely because he is nicer than she is: Wouldn’t he resent the outcome and be unhappy?²⁹ Well, he might, but that would be a different model—one that includes a term for resentment over differences in well-being. In our simple model Bob would be at peace with the outcome. Indeed, in the equilibrium situation, he would resist any attempt to transfer resources from Alice to himself, since that would reduce his overall happiness. If you think this is completely unrealistic, consider the case where Alice is Bob’s newborn daughter.
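A minimal numerical sketch of this two-person model, with a square-root well-being curve and caring factors chosen purely for illustration: Robbie splits one unit of resources so as to maximize the sum of overall utilities, given a selfish Alice (C_AB = 0) and a nice Bob (C_BA = 0.5).

```python
import numpy as np

C_AB, C_BA = 0.0, 0.5        # selfish Alice, nice Bob (illustrative values)
wellbeing = np.sqrt          # diminishing returns: well-being grows with resources

r_A = np.linspace(0.001, 0.999, 999)   # Alice's share of one unit of resources
r_B = 1.0 - r_A

U_A = wellbeing(r_A) + C_AB * wellbeing(r_B)   # Alice's overall utility
U_B = wellbeing(r_B) + C_BA * wellbeing(r_A)   # Bob's overall utility

best = np.argmax(U_A + U_B)   # Robbie maximizes the sum of the two utilities
print(f"shares: Alice {r_A[best]:.2f}, Bob {r_B[best]:.2f}")
print(f"happiness: Alice {U_A[best]:.2f}, Bob {U_B[best]:.2f}")
```

The search settles near a 0.69/0.31 split: Alice receives more of the resources, yet Bob ends up with the higher overall happiness (about 0.97 versus her 0.83) because her well-being feeds into his utility.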
The really problematic case for Robbie to deal with is when C_AB is negative: in that case, Alice is truly nasty. I’ll use the phrase negative altruism to refer to such preferences. As with the sadistic Harriet mentioned earlier, this is not about garden-variety greed and selfishness, whereby Alice is content to reduce Bob’s share of the pie in order to enhance her own. Negative altruism means that Alice derives happiness purely from the reduced well-being of others, even if her own intrinsic well-being is unchanged.
In his paper that introduced preference utilitarianism, Harsanyi attributes negative altruism to “sadism, envy, resentment, and malice” and argues that they should be ignored in calculating the sum total of human utility in a population:
No amount of goodwill to individual X can impose the moral obligation on me to help him in hurting a third person, individual Y.
This seems to be one area in which it is reasonable for the designers of intelligent machines to put a (cautious) thumb on the scales of justice, so to speak.
Unfortunately, negative altruism is far more common than one might expect. It arises not so much from sadism and malice³⁰ but from envy and resentment and their converse emotion, which I will call pride (for want of a better word). If Bob envies Alice, he derives unhappiness from the difference between Alice’s well-being and his own; the greater the difference, the more unhappy he is. Conversely, if Alice is proud of her superiority over Bob, she derives happiness not just from her own intrinsic well-being but also from the fact that it is higher than Bob’s. It is easy to show that, in a mathematical sense, pride and envy work in roughly the same way as sadism; they lead Alice and Bob to derive happiness purely from reducing each other’s well-being, because a reduction in Bob’s well-being increases Alice’s pride, while a reduction in Alice’s well-being reduces Bob’s envy.³¹
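In the notation of the model above, the equivalence can be stated in one line; the pride coefficient $p$ is my own label. If Alice’s utility includes a term proportional to the gap between her well-being and Bob’s, then

$$U_A = W_A + C_{AB}\,W_B + p\,(W_A - W_B) \quad\Rightarrow\quad \frac{\partial U_A}{\partial W_B} = C_{AB} - p,$$

so whenever pride outweighs caring ($p > C_{AB}$), Alice’s utility increases as Bob’s well-being falls, exactly as if her caring factor were negative; an envy term on Bob’s side behaves the same way in reverse.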
Jeffrey Sachs, the renowned development economist, once told me a story that illustrated the power of these kinds of preferences in people’s thinking. He was in Bangladesh soon after a major flood had devastated one region of the country. He was speaking to a farmer who had lost his house, his fields, all his animals, and one of his children. “I’m so sorry—you must be terribly sad,” Sachs ventured. “Not at all,” replied the farmer. “I’m pretty happy because my damned neighbor has lost his wife and all his children too!”
The economic analysis of pride and envy—particularly in the context of social status and conspicuous consumption—came to the fore in the work of the American sociologist Thorstein Veblen, whose 1899 book, The Theory of the Leisure Class, explained the toxic consequences of these attitudes.³² In 1977, the British economist Fred Hirsch published The Social Limits to Growth,³³ in which he introduced the idea of positional goods. A positional good is anything—it could be a car, a house, an Olympic medal, an education, an income, or an accent—that derives its perceived value not just from its intrinsic benefits but also from its relative properties, including the properties of scarcity and being superior to someone else’s. The pursuit of positional goods, driven by pride and envy, has the character of a zero-sum game, in the sense that Alice cannot improve her relative position without worsening the relative position of Bob, and vice versa. (This doesn’t seem to prevent vast sums being squandered in this pursuit.) Positional goods seem to be ubiquitous in modern life, so machines will need to understand their overall importance in the preferences of individuals. Moreover, social identity theorists propose that membership and standing within a group and the overall status of the group relative to other groups are essential constituents of human self-esteem.³⁴ Thus, it is difficult to understand human behavior without understanding how individuals perceive themselves as members of groups—whether those groups are species, nations, ethnic groups, political parties, professions, families, or supporters of a particular football team.
As with sadism and malice, we might propose that Robbie should give little or no weight to pride and envy in his plans for helping Alice and Bob. There are some difficulties with this proposal, however. Because pride and envy counteract caring in Alice’s attitude to Bob’s well-being, it may not be easy to tease them apart. It may be that Alice cares a lot, but also suffers from envy; it is hard to distinguish this Alice from a different Alice who cares only a little bit but has no envy at all. Moreover, given the prevalence of pride and envy in human preferences, it’s essential to consider very carefully the ramifications of ignoring them. It might be that they are essential for self-esteem, especially in their positive forms—self-respect and admiration for others.
Let me reemphasize a point made earlier: suitably designed machines will not behave like those they observe, even if those machines are learning about the preferences of sadistic demons. It’s possible, in fact, that if we humans find ourselves in the unfamiliar situation of dealing with purely altruistic entities on a daily basis, we may learn to be better people ourselves—more altruistic and less driven by pride and envy.
Stupid, Emotional Humans
The title of this section is not meant to refer to some particular subset of humans. It refers to all of us. We are all incredibly stupid compared to the unreachable standard set by perfect rationality, and we are all subject to the ebb and flow of the varied emotions that, to a large extent, govern our behavior.
Let’s begin with stupidity. A perfectly rational entity maximizes the expected satisfaction of its preferences over all possible future lives it could choose to lead. I cannot begin to write down a number that describes the complexity of this decision problem, but I find the following thought experiment helpful. First, note that the number of motor control choices that a human makes in a lifetime is about twenty trillion. (See Appendix A for the detailed calculations.) Next, let’s see how far brute force will get us with the aid of Seth Lloyd’s ultimate-physics laptop, which is one billion trillion trillion times faster than the world’s fastest computer. We’ll give it the task of enumerating all possible sequences of English words (perhaps as a warmup for Jorge Luis Borges’s Library of Babel), and we’ll let it run for a year. How long are the sequences that it can enumerate in that time? A thousand pages of text? A million pages? No. Eleven words. This tells you something about the difficulty of designing the best possible life of twenty trillion actions. In short, we are much further from being rational than a slug is from overtaking the starship Enterprise traveling at warp nine. We have absolutely no idea what a rationally chosen life would be like.
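The eleven-words figure can be reproduced with a back-of-the-envelope calculation; the speed assumed for today’s fastest machine, the vocabulary size, and the one-operation-per-sequence assumption below are my own rough choices rather than figures from the text.

```python
import math

ops_per_sec = 1e18 * 1e9 * 1e12 * 1e12   # a billion trillion trillion times
                                          # faster than a ~1e18 ops/sec machine
seconds_per_year = 3.15e7
total_ops = ops_per_sec * seconds_per_year

vocabulary = 1e5                          # rough count of English words
# Enumerating every sequence of n words takes vocabulary**n operations,
# so the longest fully enumerable length satisfies vocabulary**n <= total_ops.
max_length = math.log(total_ops) / math.log(vocabulary)
print(f"longest fully enumerable sequence: about {max_length:.1f} words")
```

Rounding down to whole sequences gives the eleven words quoted above, set against the twenty trillion decisions in a human lifetime.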
The implication of this is that humans will often act in ways that are contrary to their own preferences. For example, when Lee Sedol lost his Go match to AlphaGo, he played one or more moves that guaranteed he would lose, and AlphaGo could (in some cases at least) detect that he had done this. It would be incorrect, however, for AlphaGo to infer that Lee Sedol has a preference for losing. Instead, it would be reasonable to infer that Lee Sedol has a preference for winning but has some computational limitations that prevent him from choosing the right move in all cases. Thus, in order to understand Lee Sedol’s behavior and learn about his preferences, a robot following the third principle (“the ultimate source of information about human preferences is human behavior”) has to understand something about the cognitive processes that generate his behavior. It cannot assume he is rational.
This gives the AI, cognitive science, psychology, and neuroscience communities a very serious research problem: to understand enough about human cognition³⁵ that we (or rather, our beneficial machines) can “reverse-engineer” human behavior to get at the deep underlying preferences, to the extent that they exist. Humans manage to do some of this, learning their values from others with a little bit of guidance from biology, so it seems possible. Humans have an advantage: they can use their own cognitive architecture to simulate that of other humans, without knowing what that architecture is—“If I wanted X, I’d do just the same thing as Mum does, so Mum must want X.”
Machines do not have this advantage. They can simulate other machines easily, but not people. It’s unlikely that they will soon have access to a complete model of human cognition, whether generic or tailored to specific individuals. Instead, it makes sense from a practical point of view to look at the major ways in which humans deviate from rationality and to study how to learn preferences from behavior that exhibits such deviations.
One obvious difference between humans and rational entities is that, at any given moment, we are not choosing among all possible first steps of all possible future lives. Not even close. Instead, we are typically embedded in a deeply nested hierarchy of “subroutines.” Generally speaking, we are pursuing near-term goals rather than maximizing preferences over future lives, and we can act only according to the constraints of the subroutine we’re in at present. Right now, for example, I’m typing this sentence: I can choose how to continue after the colon, but it never occurs to me to wonder if I should stop writing the sentence and take an online rap course or burn down the house and claim the insurance or any other of a gazillion things I could do next. Many of these other things might actually be better than what I’m doing, but, given my hierarchy of commitments, it’s as if those other things didn’t exist.
Understanding human action, then, seems to require understanding this subroutine hierarchy (which may be quite individual): which subroutine the person is executing at present, which near-term objectives are being pursued within this subroutine, and how they relate to deeper, long-term preferences. More generally, learning about human preferences seems to require learning about the actual structure of human lives. What are all the things that we humans can be engaged in, either singly or jointly? What activities are characteristic of different cultures and types of individuals? These are tremendously interesting and demanding research questions. Obviously, they do not have a fixed answer because we humans are adding new activities and behavioral structures to our repertoires all the time. But even partial and provisional answers would be very useful for all kinds of intelligent systems designed to help humans in their daily lives.
Another obvious property of human actions is that they are often driven by emotion. In some cases, this is a good thing—emotions such as love and gratitude are of course partially constitutive of our preferences, and actions guided by them can be rational even if not fully deliberated. In other cases, emotional responses lead to actions that even we stupid humans recognize as less than rational—after the fact, of course. For example, an angry and frustrated Harriet who slaps a recalcitrant ten-year-old Alice may regret the action immediately. Robbie, observing the action, should (typically, although not in all cases) attribute the action to anger and frustration and a lack of self-control rather than deliberate sadism for its own sake. For this to work, Robbie has to have some understanding of human emotional states, including their causes, how they evolve over time in response to external stimuli, and the effects they have on action. Neuroscientists are beginning to get a handle on the mechanics of some emotional states and their connections to other cognitive processes,³⁶ and there is some useful work on computational methods for detecting, predicting, and manipulating human emotional states,³⁷ but there is much more to be learned. Again, machines are at a disadvantage when it comes to emotions: they cannot generate an internal simulation of an experience to see what emotional state it would engender.
As well as affecting our actions, emotions reveal useful information about our underlying preferences. For example, little Alice may be refusing to do her homework, and Harriet is angry and frustrated because she really wants Alice to do well in school and have a better chance in life than Harriet herself did. If Robbie is equipped to understand this—even if he cannot experience it himself—he may learn a great deal from Harriet’s less-than-rational actions. It ought to be possible, then, to create rudimentary models of human emotional states that suffice to avoid the most egregious errors in inferring human preferences from behavior.
Do Humans Really Have Preferences?
The entire premise of this book is that there are futures that we would like and futures we would prefer to avoid, such as near-term extinction or being turned into human battery farms à la The Matrix. In this sense, yes, of course humans have preferences. Once we get into the details of how humans would prefer their lives to play out, however, things become much murkier.
Uncertainty and error
One obvious property of humans, if you think about it, is that they don’t always know what they want. For example, the durian fruit elicits different responses from different people: some find that “it surpasses in flavour all other fruits of the world”³⁸ while others liken it to “sewage, stale vomit, skunk spray and used surgical swabs.”³⁹ I have deliberately refrained from trying durian prior to publication, so that I can maintain neutrality on this point: I simply don’t know which camp I will be in. The same might be said for many people considering future careers, future life partners, future post-retirement activities, and so on.
There are at least two kinds of preference uncertainty. The first is real, epistemic uncertainty, such as I experience about my durian preference.⁴⁰ No amount of thought is going to resolve this uncertainty. There is an empirical fact of the matter, and I can find out more by trying some durian, by comparing my DNA with that of durian lovers and haters, and so on. The second arises from computational limitations: looking at two Go positions, I am not sure which I prefer because the ramifications of each are beyond my ability to resolve completely.
Uncertainty also arises from the fact that the choices we are presented with are usually incompletely specified—sometimes so incompletely that they barely qualify as choices at all. When Alice is about to graduate from high school, a career counselor might offer her a choice between “librarian” and “coal miner”; she may, quite reasonably, say, “I’m uncertain about which I prefer.” Here, the uncertainty comes from epistemic uncertainty about her own preferences for, say, coal dust versus book dust; from computational uncertainty as she struggles to work out how she might make the best of each career choice; and from ordinary uncertainty about the world, such as her doubts about the long-term viability of her local coal mine.
For these reasons, it’s a bad idea to identify human preferences with simple choices between incompletely described options that are intractable to evaluate and include elements of unknown desirability. Such choices provide indirect evidence of underlying preferences, but they are not constitutive of those preferences. That’s why I have couched the notion of preferences in terms of future lives—for example by imagining that you could experience, in a compressed form, two different movies of your future life and then express a preference between them (see this page). The thought experiment is of course impossible to carry out in practice, but one can imagine that in many cases a clear preference would emerge long before all the details of each movie had been filled in and fully experienced. You may not know in advance which you will prefer, even given a plot summary; but there is an answer to the actual question, based on who you are now, just as there is an answer to the question of whether you will like durian when you try it.
The fact that you might be uncertain about your own preferences does not cause any particular problems for the preference-based approach to provably beneficial AI. Indeed, there are already some algorithms that take into account both Robbie’s and Harriet’s uncertainty about Harriet’s preferences and allow for the possibility that Harriet may be learning about her preferences while Robbie is.⁴¹ Just as Robbie’s uncertainty about Harriet’s preferences can be reduced by observing Harriet’s behavior, Harriet’s uncertainty about her own preferences can be reduced by observing her own reactions to experiences. The two kinds of uncertainty need not be directly related; nor is Robbie necessarily more uncertain than Harriet about Harriet’s preferences. For example, Robbie might be able to detect that Harriet has a strong genetic predisposition to despise the flavor of durian. In that case, he would have very little uncertainty about her durian preference, even while she remains completely in the dark.
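A toy Bayesian model gives a feel for how the two agents' uncertainties can diverge. The numbers, and the "durian gene" evidence available to Robbie, are invented purely for illustration:

```python
# Both Harriet and Robbie track P(Harriet likes durian), but they update
# on different evidence. All numbers here are invented for illustration.

def bayes_update(prior, likelihood_if_likes, likelihood_if_dislikes):
    """Posterior probability of 'likes durian' after one observation."""
    numerator = prior * likelihood_if_likes
    return numerator / (numerator + (1 - prior) * likelihood_if_dislikes)

harriet = 0.5   # Harriet genuinely does not know which camp she is in
robbie = 0.5    # Robbie starts out just as uncertain

# Robbie spots a genetic marker carried (in this toy model) by 90 percent
# of durian-haters but only 10 percent of durian-lovers.
robbie = bayes_update(robbie, likelihood_if_likes=0.1, likelihood_if_dislikes=0.9)

# Harriet finally tastes durian and enjoys the experience, which is much
# more likely if she is in fact a durian-lover.
harriet = bayes_update(harriet, likelihood_if_likes=0.9, likelihood_if_dislikes=0.2)

print(f"Robbie:  P(Harriet likes durian) = {robbie:.2f}")   # 0.10
print(f"Harriet: P(Harriet likes durian) = {harriet:.2f}")  # 0.82
```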
If Harriet can be uncertain about her preferences over future events, then, quite probably, she can also be wrong. For example, she might be convinced that she will not like durian (or, say, green eggs and ham) and so she avoids it at all costs, but it may turn out—if someone slips some into her fruit salad one day—that she finds it sublime after all. Thus, Robbie cannot assume that Harriet’s actions reflect accurate knowledge of her own preferences: some may be thoroughly grounded in experience, while others may be based primarily on supposition, prejudice, fear of the unknown, or weakly supported generalizations.⁴² A suitably tactful Robbie could be very helpful to Harriet in alerting her to such situations.
Experience and memory
Some psychologists have called into question the very notion that there is one self whose preferences are sovereign in the way that Harsanyi’s principle of preference autonomy suggests. Most prominent among these psychologists is my former Berkeley colleague Daniel Kahneman. Kahneman, who won the 2002 Nobel Prize for his work in behavioral economics, is one of the most influential thinkers on the topic of human preferences. His recent book, Thinking, Fast and Slow,⁴³ recounts in some detail a series of experiments that convinced him that there are two selves—the experiencing self and the remembering self—whose preferences are in conflict.
The experiencing self is the one being measured by the hedonimeter, which the nineteenth-century British economist Francis Edgeworth imagined to be “an ideally perfect instrument, a psychophysical machine, continually registering the height of pleasure experienced by an individual, exactly according to the verdict of consciousness.”⁴⁴ According to hedonic utilitarianism, the overall value of any experience to an individual is simply the sum of the hedonic values of each instant during the experience. This notion applies equally well to eating an ice cream or living an entire life.
The remembering self, on the other hand, is the one who is “in charge” when there is any decision to be made. This self chooses new experiences based on memories of previous experiences and their desirability. Kahneman’s experiments suggest that the remembering self has very different ideas from the experiencing self.
The simplest experiment to understand involves plunging a subject’s hand into cold water. There are two different regimes: in the first, the immersion is for 60 seconds in water at 14 degrees Celsius; in the second, the immersion is for 60 seconds in water at 14 degrees followed by 30 seconds at 15 degrees. (These temperatures are similar to ocean temperatures in Northern California—cold enough that almost everyone wears a wetsuit in the water.) All subjects report the experience as unpleasant. After experiencing both regimes (in either order, with a 7-minute gap in between), the subject is asked to choose which one they would like to repeat. The great majority of subjects prefer to repeat the 60 + 30 rather than just the 60-second immersion.
Kahneman posits that, from the point of view of the experiencing self, 60 + 30 has to be strictly worse than 60, because it includes 60 and another unpleasant experience. Yet the remembering self chooses 60 + 30. Why?
Kahneman’s explanation is that the remembering self looks back with rather weirdly tinted spectacles, paying attention mainly to the “peak” value (the highest or lowest hedonic value) and the “end” value (the hedonic value at the end of the experience). The durations of different parts of the experience are mostly neglected. The peak discomfort levels for 60 and 60 + 30 are the same, but the end levels are different: in the 60 + 30 case, the water is one degree warmer. If the remembering self evaluates experiences by the peak and end values, rather than by summing up hedonic values over time, then 60 + 30 is better, and this is what is found. The peak-end model seems to explain many other equally weird findings in the literature on preferences.
Kahneman seems (perhaps appropriately) to be of two minds about his findings. He asserts that the remembering self “simply made a mistake” and chose the wrong experience because its memory is faulty and incomplete; he regards this as “bad news for believers in the rationality of choice.” On the other hand, he writes, “A theory of well-being that ignores what people want cannot be sustained.” Suppose, for example, that Harriet has tried Pepsi and Coke and now strongly prefers Pepsi; it would be absurd to force her to drink Coke based on adding up secret hedonimeter readings taken during each trial.
The fact is that no law requires our preferences between experiences to be defined by the sum of hedonic values over instants of time. It is true that standard mathematical models focus on maximizing a sum of rewards,⁴⁵ but the original motivation for this was mathematical convenience. Justifications came later in the form of technical assumptions under which it is rational to decide based on adding up rewards,⁴⁶ but those technical assumptions need not hold in reality. Suppose, for example, that Harriet is choosing between two sequences of hedonic values: [10,10,10,10,10] and [0,0,40,0,0]. It’s entirely possible that she just prefers the second sequence; no mathematical law can force her to make choices based on the sum rather than, say, the maximum.
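A tiny worked example makes the point concrete. The per-instant discomfort values assigned to the cold-water regimes, and the use of the worst moment as the "peak," are illustrative choices rather than data from the experiment:

```python
# Scoring a hedonic sequence two ways. Nothing in the mathematics forces
# a person to rank experiences by the sum; the numbers are illustrative.

def total(seq):          # hedonic utilitarianism: add up every instant
    return sum(seq)

def peak_end(seq):       # the remembering self: average of peak and end
    return (min(seq) + seq[-1]) / 2   # the worst moment serves as the "peak"
                                      # for an unpleasant experience

# Cold-water regimes, as discomfort per 30-second block (illustrative):
sixty      = [-6, -6]        # 60 seconds at 14 degrees
sixty_plus = [-6, -6, -5]    # the same, plus 30 seconds at 15 degrees

print(total(sixty), total(sixty_plus))        # -12 vs -17: the sum says 60 is better
print(peak_end(sixty), peak_end(sixty_plus))  # -6.0 vs -5.5: peak-end prefers 60 + 30

# Harriet's two candidate futures from the text:
steady = [10, 10, 10, 10, 10]   # sum 50, maximum 10
spike  = [0, 0, 40, 0, 0]       # sum 40, maximum 40
# She may simply prefer the spike; no law obliges her to decide by the sum.
```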
Kahneman acknowledges that the situation is complicated still further by the crucial role of anticipation and memory in well-being. The memory of a single, delightful experience—one’s wedding day, the birth of a child, an afternoon spent picking blackberries and making jam—can carry one through years of drudgery and disappointment. Perhaps the remembering self is evaluating not just the experience per se but its total effect on life’s future value through its effect on future memories. And presumably it’s the remembering self and not the experiencing self that is the best judge of what will be remembered.
Time and change
It goes almost without saying that sensible people in the twenty-first century would not want to emulate the preferences of, say, Roman society in the second century, replete with gladiatorial slaughter for public entertainment, an economy based on slavery, and brutal massacres of defeated peoples. (We need not dwell on the obvious parallels to these characteristics in modern society.) Standards of morality clearly evolve over time as our civilization progresses—or drifts, if you prefer. This suggests, in turn, that future generations might find utterly repulsive our current attitudes to, say, the well-being of animals. For this reason, it is important that machines charged with implementing human preferences be able to respond to changes in those preferences over time rather than fixing them in stone. The three principles from Chapter 7 accommodate such changes in a natural way, because they require machines to learn and implement the current preferences of current humans—lots of them, all different—rather than a single idealized set of preferences or the preferences of machine designers who may be long dead.⁴⁷
The possibility of changes in the typical preferences of human populations over historical time naturally focuses attention on the question of how each individual’s preferences are formed and the plasticity of adult preferences. Our preferences are certainly influenced by our biology: we usually avoid pain, hunger, and thirst, for example. Our biology has remained fairly constant, however, so the remaining preferences must arise from cultural and family influences. Quite possibly, children are constantly running some form of inverse reinforcement learning to identify the preferences of parents and peers in order to explain their behavior; children then adopt these preferences as their own. Even as adults, our preferences evolve through the influence of the media, government, friends, employers, and our own direct experiences. It may be the case, for example, that many supporters of the Third Reich did not start out as genocidal sadists thirsting for racial purity.
Preference change presents a challenge for theories of rationality at both the individual and societal level. For example, Harsanyi’s principle of preference autonomy seems to say that everyone is entitled to whatever preferences they have and no one else should touch them. Far from being untouchable, however, preferences are touched and modified all the time, by every experience a person has. Machines cannot help but modify human preferences, because machines modify human experiences.
It’s important, although sometimes difficult, to separate preference change from preference update, which occurs when an initially uncertain Harriet learns more about her own preferences through experience. Preference update can fill in gaps in self-knowledge and perhaps add definiteness to preferences that were previously weakly held and provisional. Preference change, on the other hand, is not a process that results from additional evidence about what one’s preferences actually are. In the extreme case, you can imagine it as resulting from drug administration or even brain surgery—it occurs from processes we may not understand or agree with.
Preference change is problematic for at least two reasons. The first reason is that it’s not clear which preferences should hold sway when making a decision: the preferences that Harriet has at the time of the decision or the preferences that she will have during and after the events that result from her decision. In bioethics, for example, this is a very real dilemma because people’s preferences about medical interventions and end-of-life care do change, often dramatically, after they become seriously ill.⁴⁸ Assuming these changes do not result from diminished intellectual capacity, whose preferences should be respected?⁴⁹
The second reason that preference change is problematic is that there seems to be no obvious rational basis for changing (as opposed to updating) one’s preferences. If Harriet prefers A to B, but could choose to undergo an experience that she knows will result in her preferring B to A, why would she ever do that? The outcome would be that she would then choose B, which she currently does not want.
The issue of preference change appears in dramatic form in the legend of Ulysses and the Sirens. The Sirens were mythical beings whose singing lured sailors to their doom on the rocks of certain islands in the Mediterranean. Ulysses, wishing to hear the song, ordered his sailors to plug their ears with wax and to bind him to the mast; under no circumstances were they to obey his subsequent entreaties to release him. Obviously, he wanted the sailors to respect the preferences he had initially, not the preferences he would have after the Sirens bewitched him. This legend became the title of a book by the Norwegian philosopher Jon Elster,⁵⁰ dealing with weakness of will and other challenges to the theoretical idea of rationality.
Why might an intelligent machine deliberately set out to modify the preferences of humans? The answer is quite simple: to make the preferences easier to satisfy. We saw this in Chapter 1 with the case of social-media click-through optimization. One response might be to say that machines must treat human preferences as sacrosanct: nothing can be allowed to change the human’s preferences. Unfortunately, this is completely impossible. The very existence of a useful robot aide is likely to have an effect on human preferences.
One possible solution is for machines to learn about human meta-preferences—that is, preferences about what kinds of preference change processes might be acceptable or unacceptable. Notice the use of “preference change processes” rather than “preference changes” here. That’s because wanting one’s preferences to change in a specific direction often amounts to having that preference already; what’s really wanted in such a case is the ability to be better at implementing the preference. For example, if Harriet says, “I want my preferences to change so that I don’t want cake as much as I do now,” then she already has a preference for a future with less cake consumption; what she really wants is to alter her cognitive architecture so that her behavior more closely reflects that preference.
By “preferences about what kinds of preference change processes might be acceptable or unacceptable,” I mean, for example, a view that one may end up with “better” preferences by traveling the world and experiencing a wide variety of cultures, or by participating in a vibrant intellectual community that thoroughly explores a wide range of moral traditions, or by setting aside some hermit time for introspection and hard thinking about life and its meaning. I’ll call these processes preference-neutral, in the sense that one does not anticipate that the process will change one’s preferences in any particular direction, while recognizing that some may strongly disagree with that characterization.
Of course, not all preference-neutral processes are desirable— for example, few people expect to develop “better” preferences by whacking themselves on the head. Subjecting oneself to an acceptable process of preference change is analogous to running an experiment to find out something about how the world works: you never know in advance how the experiment will turn out, but you expect, nonetheless, to be better off in your new mental state.
The idea that there are acceptable routes to preference modification seems related to the idea that there are acceptable methods of behavior modification whereby, for example, an employer engineers the choice situation so that people make “better” choices about saving for retirement. Often this can be done by manipulating the “non-rational” factors that influence choice, rather than by restricting choices or taxing “bad” choices. Nudge, a book by economist Richard Thaler and legal scholar Cass Sunstein, lays out a wide range of supposedly acceptable methods and opportunities to “influence people’s behavior in order to make their lives longer, healthier, and better.”
It’s unclear whether behavior modification methods are really just modifying behavior. If, when the nudge is removed, the modified behavior persists—which is presumably the desired outcome of such interventions—then something has changed in the individual’s cognitive architecture (the thing that turns underlying preferences into behavior) or in the individual’s underlying preferences. It’s quite likely to be a bit of both. What is clear, however, is that the nudge strategy is assuming that everyone shares a preference for “longer, healthier, and better” lives; each nudge is based on a particular definition of a “better” life, which seems to go against the grain of preference autonomy. It might be better, instead, to design preference-neutral assistive processes that help people bring their decisions and their cognitive architectures into better alignment with their underlying preferences. For example, it’s possible to design cognitive aides that highlight the longer-term consequences of decisions and teach people to recognize the seeds of those consequences in the present.⁵¹
That we need a better understanding of the processes whereby human preferences are formed and shaped seems obvious, not least because such an understanding would help us design machines that avoid accidental and undesirable changes in human preferences of the kind wrought by social-media content selection algorithms. Armed with such an understanding, of course, we will be tempted to engineer changes that would result in a “better” world.
Some might argue that we should provide much greater opportunities for preference-neutral “improving” experiences such as travel, debate, and training in analytical and critical thinking. We might, for example, provide opportunities for every high-school student to live for a few months in at least two other cultures distinct from his or her own.
Almost certainly, however, we will want to go further—for example, by instituting social and educational reforms that increase the coefficient of altruism—the weight that each individual places on the welfare of others—while decreasing the coefficients of sadism, pride, and envy. Would this be a good idea? Should we recruit our machines to help in the process? It’s certainly tempting. Indeed, Aristotle himself wrote, “The main concern of politics is to engender a certain character in the citizens and to make them good and disposed to perform noble actions.” Let’s just say that there are risks associated with intentional preference engineering on a global scale. We should proceed with extreme caution.
10
PROBLEM SOLVED?
If we succeed in creating provably beneficial AI systems, we would eliminate the risk that we might lose control over superintelligent machines. Humanity could proceed with their development and reap the almost unimaginable benefits that would flow from the ability to wield far greater intelligence in advancing our civilization. We would be released from millennia of servitude as agricultural, industrial, and clerical robots and we would be free to make the best of life’s potential. From the vantage point of this golden age, we would look back on our lives in the present time much as Thomas Hobbes imagined life without government: solitary, poor, nasty, brutish, and short.
Or perhaps not. Bondian villains may circumvent our safeguards and unleash uncontrollable superintelligences against which humanity has no defense. And if we survive that, we may find ourselves gradually enfeebled as we entrust more and more of our knowledge and skills to machines. The machines may advise us not to do this, understanding the long-term value of human autonomy, but we may overrule them.
Beneficial Machines
The standard model underlying a good deal of twentieth-century technology relies on machinery that optimizes a fixed, exogenously supplied objective. As we have seen, this model is fundamentally flawed. It works only if the objective is guaranteed to be complete and correct, or if the machinery can easily be reset. Neither condition will hold as AI becomes increasingly powerful.
If the exogenously supplied objective can be wrong, then it makes no sense for the machine to act as if it is always correct. Hence my proposal for beneficial machines: machines whose actions can be expected to achieve our objectives. Because these objectives are in us, and not in them, the machines will need to learn more about what we really want from observations of the choices we make and how we make them. Machines designed in this way will defer to humans: they will ask permission; they will act cautiously when guidance is unclear; and they will allow themselves to be switched off.
While these initial results are for a simplified and idealized setting, I believe they will survive the transition to more realistic settings. Already, my colleagues have successfully applied the same approach to practical problems such as self-driving cars interacting with human drivers.¹ For example, self-driving cars are notoriously bad at handling four-way stop signs when it’s not clear who has the right of way. By formulating this as an assistance game, however, the car comes up with a novel solution: it actually backs up a little bit to show that it’s definitely not planning to go first. The human understands this signal and goes ahead, confident that there will be no collision. Obviously, we human experts could have thought of this solution and programmed it into the vehicle, but that’s not what happened; this is a form of communication that the vehicle invented entirely by itself.
As we gain more experience in other settings, I expect that we will be surprised by the range and fluency of machine behaviors as they interact with humans. We are so used to the stupidity of machines that execute inflexible, preprogrammed behaviors or pursue definite but incorrect objectives that we may be shocked by how sensible they become. The technology of provably beneficial machines is the core of a new approach to AI and the basis for a new relationship between humans and machines.
It seems possible, also, to apply similar ideas to the redesign of other “machines” that ought to be serving humans, beginning with ordinary software systems. We are taught to build software by composing subroutines, each of which has a well-defined specification that says what the output should be for any given input—just like the square-root button on a calculator. This specification is the direct analog of the objective given to an AI system. The subroutine is not supposed to terminate and return control to the higher layers of the software system until it has produced an output that meets the specification. (This should remind you of the AI system that persists in its single-minded pursuit of its given objective.) A better approach would be to allow for uncertainty in the specification. For example, a subroutine that carries out some fearsomely complicated mathematical computation is typically given an error bound that defines the required precision for the answer and has to return a solution that is correct within that error bound. Sometimes, this may require weeks of computation. Instead, it might be better to be less precise about the allowed error, so that the subroutine could come back after twenty seconds and say, “I’ve found a solution that’s this good. Is that OK or do you want me to continue?” In some cases, the question may percolate all the way to the top level of the software system, so that the human user can provide further guidance to the system. The human’s answers would then help in refining the specifications at all levels.
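A minimal sketch of such a subroutine might look like the following. The function name and the callback protocol are invented for illustration: rather than grinding away until it meets a fixed error bound, the routine periodically reports its current best answer and lets the layer above decide whether that is already good enough.

```python
import math

# A subroutine that treats its precision requirement as negotiable. Instead
# of a hard error bound, it reports progress and asks whether to continue.
# The name and protocol are illustrative, not from any existing library.

def anytime_sqrt(x, good_enough, report_every=3):
    """Newton's method for sqrt(x), pausing periodically to consult the caller."""
    estimate = x if x > 1 else 1.0
    iteration = 0
    while True:
        estimate = 0.5 * (estimate + x / estimate)
        error = abs(estimate * estimate - x)
        iteration += 1
        # "I've found a solution that's this good. Is that OK, or shall I continue?"
        if iteration % report_every == 0 and good_enough(estimate, error):
            return estimate
        if error == 0.0:            # cannot do any better in floating point
            return estimate

# The higher layer (ultimately, perhaps, the human user) supplies the answer.
answer = anytime_sqrt(2.0, good_enough=lambda est, err: err < 1e-6)
print(answer, math.sqrt(2.0))
```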
The same kind of thinking can be applied to entities such as governments and corporations. The obvious failings of government include paying too much attention to the preferences (financial as well as political) of those in government and too little attention to the preferences of the governed. Elections are supposed to communicate preferences to the government, but they seem to have a remarkably small bandwidth (on the order of one byte of information every few years) for such a complex task. In far too many countries, government is simply a means for one group of people to impose its will on others. Corporations go to greater lengths to learn the preferences of customers, whether through market research or direct feedback in the form of purchase decisions. On the other hand, the molding of human preferences through advertising, cultural influences, and even chemical addiction is an accepted way of doing business.
Governance of AI
AI has the power to reshape the world, and the process of reshaping will have to be managed and guided in some way. If the sheer number of initiatives to develop effective governance of AI is any guide, then we are in excellent shape. Everyone and their uncle is setting up a Board or a Council or an International Panel. The World Economic Forum has identified nearly three hundred separate efforts to develop ethical principles for AI. My email inbox can be summarized as one long invitation to the Global World Summit Conference Forum on the Future of International Governance of the Social and Ethical Impacts of Emerging Artificial Intelligence Technologies.
This is all very different from what happened with nuclear technology. After World War II, the United States held all the nuclear cards. In 1953, US president Dwight Eisenhower proposed to the UN an international body to regulate nuclear technology. In 1957, the International Atomic Energy Agency started work; it is the sole global overseer for the safe and beneficial development of nuclear energy.
In contrast, many hands hold AI cards. To be sure, the United States, China, and the EU fund a lot of AI research, but almost all of it occurs outside secure national laboratories. AI researchers in universities are part of a broad, cooperative international community, glued together by shared interests, conferences, cooperative agreements, and professional societies such as AAAI (the Association for the Advancement of Artificial Intelligence) and IEEE (the Institute of Electrical and Electronics Engineers, which includes tens of thousands of AI researchers and practitioners). Probably the majority of investment in AI research and development is now occurring within corporations, large and small; the leading players as of 2019 are Google (including DeepMind), Facebook, Amazon, Microsoft, and IBM in the United States and Tencent, Baidu, and, to some extent, Alibaba in China—all among the largest corporations in the world.² All but Tencent and Alibaba are members of the Partnership on AI, an industry consortium that includes among its tenets a promise of cooperation on AI safety. Finally, although the vast majority of humans possess little in the way of AI expertise, there is at least a superficial willingness among other players to take the interests of humanity into account.
These, then, are the players who hold the majority of the cards. Their interests are not in perfect alignment but all share a desire to maintain control over AI systems as they become more powerful. (Other goals, such as avoiding mass unemployment, are shared by governments and university researchers, but not necessarily by corporations that expect to profit in the short term from the widest possible deployment of AI.) To cement this shared interest and achieve coordinated action, there are organizations with convening power, which means, roughly, that if the organization sets up a meeting, people accept the invitation to participate. In addition to the professional societies, which can bring AI researchers together, and the Partnership on AI, which combines corporations and nonprofit institutes, the canonical conveners are the UN (for governments and researchers) and the World Economic Forum (for governments and corporations). In addition, the G7 has proposed an International Panel on Artificial Intelligence, hoping that it will grow into something like the UN’s Intergovernmental Panel on Climate Change. Important-sounding reports are multiplying like rabbits.
With all this activity, is there any prospect of actual progress on governance occurring? Perhaps surprisingly, the answer is yes, at least around the edges. Many governments around the world are equipping themselves with advisory bodies to help with the process of developing regulations; perhaps the most prominent example is the EU’s High-Level Expert Group on Artificial Intelligence. Agreements, rules, and standards are beginning to emerge for issues such as user privacy, data exchange, and avoiding racial bias. Governments and corporations are working hard to sort out the rules for self-driving cars—rules that will inevitably have cross-border elements. There is a consensus that AI decisions must be explainable if AI systems are to be trusted, and that consensus is already partially implemented in the EU’s GDPR legislation. In California, a new law forbids AI systems to impersonate humans in certain circumstances. These last two items—explainability and impersonation—certainly have some bearing on issues of AI safety and control.
At present, there are no implementable recommendations that can be made to governments or other organizations considering the issue of maintaining control over AI systems. A regulation such as “AI systems must be safe and controllable” would carry no weight, because these terms do not yet have precise meanings and because there is no widely known engineering methodology for ensuring safety and controllability. But let’s be optimistic and imagine that, a few years down the line, the validity of the “provably beneficial” approach to AI has been established through both mathematical analysis and practical realization in the form of useful applications. We might, for example, have personal digital assistants that we can trust to use our credit cards, screen our calls and emails, and manage our finances because they have adapted to our individual preferences and know when it’s OK to go ahead and when it’s better to ask for guidance. Our self-driving cars may have learned good manners for interacting with one another and with human drivers, and our domestic robots should be interacting smoothly with even the most recalcitrant toddler. With luck, no cats will have been roasted for dinner and no whale meat will have been served to members of the Green Party.
At that point, it might be feasible to specify software design templates to which various kinds of applications must conform in order to be sold or connected to the Internet, just as applications have to pass a number of software tests before they can be sold on Apple’s App Store or Google Play. Software vendors could propose additional templates, as long as they come with proofs that the templates satisfy the (by then well-defined) requirements of safety and controllability. There would be mechanisms for reporting problems and for updating software systems that produce undesirable behavior. It would make sense also to create professional codes of conduct around the idea of provably safe AI programs and to integrate the corresponding theorems and methods into the curriculum for aspiring AI and machine learning practitioners.
To a seasoned observer of Silicon Valley, this may sound rather naïve. Regulation of any kind is strenuously opposed in the Valley. Whereas we are accustomed to the idea that pharmaceutical companies have to show safety and (beneficial) efficacy through clinical trials before they can release a product to the general public, the software industry operates by a different set of rules—namely, the empty set. A “bunch of dudes chugging Red Bull”³ at a software company can unleash a product or an upgrade that affects literally billions of people with no third-party oversight whatsoever.
Inevitably, however, the tech industry is going to have to acknowledge that its products matter; and, if they matter, then it matters that the products not have harmful effects. This means that there will be rules governing the nature of interactions with humans, prohibiting designs that, say, consistently manipulate preferences or produce addictive behavior. I have no doubt that the transition from an unregulated to a regulated world will be a painful one. Let’s hope it doesn’t require a Chernobyl-sized disaster (or worse) to overcome the industry’s resistance.
Misuse
Regulation might be painful for the software industry, but it would be intolerable for Dr. Evil, plotting world domination in his secret underground bunker. There is no doubt that criminal elements, terrorists, and rogue nations would have an incentive to circumvent any constraints on the design of intelligent machines so that they could be used to control weapons or to devise and carry out criminal activities. The danger is not so much that the evil schemes would succeed; it is that they would fail by losing control over poorly designed intelligent systems—particularly ones imbued with evil objectives and granted access to weapons.
This is not a reason to avoid regulation—after all, we have laws against murder even though they are often circumvented. It does, however, create a very serious policing problem. Already, we are losing the battle against malware and cybercrime. (A recent report estimates over two billion victims and an annual cost of around $600 billion.⁴) Malware in the form of highly intelligent programs would be much harder to defeat.
Some, including Nick Bostrom, have proposed that we use our own, beneficial superintelligent AI systems to detect and destroy any malicious or otherwise misbehaving AI systems. Certainly, we should use the tools at our disposal, while minimizing the impact on personal freedom, but the image of humans huddling in bunkers, defenseless against the titanic forces unleashed by battling superintelligences, is hardly reassuring even if some of them are on our side. It would be far better to find ways to nip the malicious AI in the bud.
A good first step would be a successful, coordinated, international campaign against cybercrime, including expansion of the Budapest Convention on Cybercrime. This would form an organizational template for possible future efforts to prevent the emergence of uncontrolled AI programs. At the same time, it would engender a broad cultural understanding that creating such programs, either deliberately or inadvertently, is in the long run a suicidal act comparable to creating pandemic organisms.
Enfeeblement and Human Autonomy
E. M. Forster’s most famous novels, including Howards End and A Passage to India, examined British society and its class system in the early part of the twentieth century. In 1909, he wrote one notable science-fiction story: “The Machine Stops.” The story is remarkable for its prescience, including depictions of (what we would now call) the Internet, videoconferencing, iPads, massive open online courses (MOOCs), widespread obesity, and avoidance of face-to-face contact. The Machine of the title is an all-encompassing intelligent infrastructure that meets all human needs. Humans become increasingly dependent on it, but they understand less and less about how it works. Engineering knowledge gives way to ritualized incantations that eventually fail to stem the gradual deterioration of the Machine’s workings. Kuno, the main character, sees what is unfolding but is powerless to stop it:
Cannot you see . . . that it is we that are dying, and that down here the only thing that really lives is the Machine? We created the Machine to do our will, but we cannot make it do our will now. It has robbed us of the sense of space and of the sense of touch, it has blurred every human relation, it has paralysed our bodies and our wills. . . . We only exist as the blood corpuscles that course through its arteries, and if it could work without us, it would let us die. Oh, I have no remedy—or, at least, only one—to tell men again and again that I have seen the hills of Wessex as Aelfrid saw them when he overthrew the Danes.
More than one hundred billion people have lived on Earth. They (we) have spent on the order of one trillion person-years learning and teaching, in order that our civilization may continue. Up to now, its only possibility for continuation has been through re-creation in the minds of new generations. (Paper is fine as a method of transmission, but paper does nothing until the knowledge recorded thereon reaches the next person’s mind.) That is now changing: increasingly, it is possible to place our knowledge into machines that, by themselves, can run our civilization for us.
Once the practical incentive to pass our civilization on to the next generation disappears, it will be very hard to reverse the process. One trillion years of cumulative learning would, in a real sense, be lost. We would become passengers in a cruise ship run by machines, on a cruise that goes on forever—exactly as envisaged in the film WALL-E.
A good consequentialist would say, “Obviously this is an undesirable consequence of the overuse of automation! Suitably designed machines would never do this!” True, but think what this means. Machines may well understand that human autonomy and competence are important aspects of how we prefer to conduct our lives. They may well insist that humans retain control and responsibility for their own well-being—in other words, machines will say no. But we myopic, lazy humans may disagree. There is a tragedy of the commons at work here: for any individual human, it may seem pointless to engage in years of arduous learning to acquire knowledge and skills that machines already have; but if everyone thinks that way, the human race will, collectively, lose its autonomy.
The solution to this problem seems to be cultural, not technical. We will need a cultural movement to reshape our ideals and preferences towards autonomy, agency, and ability and away from self-indulgence and dependency—if you like, a modern, cultural version of ancient Sparta’s military ethos. This would mean human preference engineering on a global scale along with radical changes in how our society works. To avoid making a bad situation worse, we might need the help of superintelligent machines, both in shaping the solution and in the actual process of achieving a balance for each individual.
Any parent of a small child is familiar with this process. Once the child is beyond the helpless stage, parenting requires an ever-evolving balance between doing everything for the child and leaving the child entirely to his or her own devices. At a certain stage, the child comes to understand that the parent is perfectly capable of tying the child’s shoelaces but is choosing not to. Is that the future for the human race—to be treated like a child, forever, by far superior machines? I suspect not. For one thing, children cannot switch their parents off. (Thank goodness!) Nor will we be pets or zoo animals. There is really no analog in our present world to the relationship we will have with beneficial intelligent machines in the future. It remains to be seen how the endgame turns out.
Appendix A
SEARCHING FOR SOLUTIONS
Choosing an action by looking ahead and considering the outcomes of different possible action sequences is a fundamental capability for intelligent systems. It’s something your cell phone does whenever you ask it for directions. Figure 14 shows a typical example: getting from the current location, Pier 19, to the goal, Coit Tower. The algorithm needs to know what actions are available to it; typically, for map navigation, each action traverses a road segment connecting two adjacent intersections. In the example, from Pier 19 there is just one action: turn right and drive along the Embarcadero to the next intersection. Then there is a choice: continue on or take a sharp left onto Battery Street. The algorithm systematically explores all these possibilities until it eventually finds a route. Typically we add a little bit of commonsense guidance, such as a preference for exploring streets that head towards the goal rather than away from it. With this guidance and a few other tricks, the algorithm can find optimal solutions very quickly—usually in a few milliseconds, even for a cross-country trip.
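Here is the idea in miniature, on a toy graph with invented intersection names, road lengths, and distance estimates rather than real map data. The "commonsense guidance" is supplied by a straight-line-distance estimate to the goal, which makes this essentially the A* algorithm:

```python
import heapq

# Best-first route search on a toy road graph, guided by straight-line
# distance to the goal (the A* algorithm). Intersection names, road lengths,
# and distance estimates are all invented for illustration.

roads = {   # intersection -> list of (neighbor, road length)
    "Pier19":       [("EmbarcaderoA", 2)],
    "EmbarcaderoA": [("EmbarcaderoB", 2), ("Battery", 3)],
    "EmbarcaderoB": [("CoitTower", 4)],
    "Battery":      [("CoitTower", 6)],
    "CoitTower":    [],
}
to_goal = {"Pier19": 5, "EmbarcaderoA": 4, "EmbarcaderoB": 3,
           "Battery": 5, "CoitTower": 0}    # straight-line distance estimates

def find_route(start, goal):
    frontier = [(to_goal[start], 0, start, [start])]   # (estimate, cost so far, node, path)
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)  # explore the most promising node first
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length in roads[node]:
            estimate = cost + length + to_goal[neighbor]
            heapq.heappush(frontier, (estimate, cost + length, neighbor, path + [neighbor]))
    return None, float("inf")

print(find_route("Pier19", "CoitTower"))
# (['Pier19', 'EmbarcaderoA', 'EmbarcaderoB', 'CoitTower'], 8)
```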
Searching for routes on maps is a natural and familiar example, but it may be a bit misleading because the number of distinct locations is so small. In the United States, for example, there are only about ten million intersections. That may seem like a large number, but it is tiny compared to the number of distinct states in the 15-puzzle. The 15-puzzle is a toy with a four-by-four grid containing fifteen numbered tiles and a blank space. The goal is to move the tiles around to achieve a goal configuration, such as having all the tiles in numerical order. The 15-puzzle has about ten trillion states (a million times bigger than the United States!); the 24-puzzle has about eight trillion trillion states. This is an example of what mathematicians call combinatorial complexity—the rapid explosion in the number of combinations as the number of “moving parts” of a problem increases. Returning to the map of the United States: if a trucking company wants to optimize the movements of its one hundred trucks across the United States, the number of possible states to consider would be ten million to the power of one hundred (i.e., 10⁷⁰⁰).
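The figures quoted here are easy to check; the division by two reflects the standard fact that only half of all tile arrangements of a sliding puzzle are reachable:

```python
from math import factorial

# Checking the combinatorial figures quoted in the text.
fifteen_puzzle = factorial(16) // 2        # reachable arrangements of the 15-puzzle
twenty_four_puzzle = factorial(25) // 2    # reachable arrangements of the 24-puzzle
trucking_states = (10 ** 7) ** 100         # 100 trucks, each at one of ~1e7 intersections

print(f"{fifteen_puzzle:.2e}")             # ~1.05e13: about ten trillion
print(f"{twenty_four_puzzle:.2e}")         # ~7.76e24: about eight trillion trillion
print(len(str(trucking_states)) - 1)       # 700, i.e., the trucking problem has 10**700 states
```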
FIGURE 14: A map of part of San Francisco, showing the initial location at Pier 19 and the destination at Coit Tower.
Giving up on rational decisions
Many games have this property of combinatorial complexity, including chess, checkers, backgammon, and Go. Because the rules of Go are simple and elegant (figure 15), I’ll use it as a running example. The objective is clear enough: win the game by surrounding more territory than your opponent. The possible actions are clear too: put a stone in an empty location. Just as with navigation on a map, the obvious way to decide what to do is to imagine different futures that result from different sequences of actions and choose the best one. You ask, “If I do this, what might my opponent do? And what do I do then?” This idea is illustrated in figure 16 for 3×3 Go. Even for 3×3 Go, I can show only a small part of the tree of possible futures, but I hope the idea is clear enough. Indeed, this way of making decisions seems to be just straightforward common sense.
FIGURE 15: A Go board, partway through Game 5 of the 2002 LG Cup final between Lee Sedol (black) and Choe Myeong-hun (white). Black and White take turns placing a single stone on any unoccupied location on the board. Here, it is Black’s turn to move and there are 343 possible moves. Each side attempts to surround as much territory as possible. For example, White has good chances to win territory at the left-hand edge and on the left side of the bottom edge, while Black may win territory in the top-right and bottom-right corners. A key concept in Go is that of a group—that is, a set of stones of the same color that are connected to one another by vertical or horizontal adjacency. A group remains alive as long as there is at least one empty space next to it; if it is completely surrounded, with no empty spaces, it dies and is removed from the board.
FIGURE 16: Part of the game tree for 3×3 Go. Starting from the empty initial state, sometimes called the root of the tree, Black can choose one of three possible distinct moves. (The others are symmetric with these.) It would then be White’s turn to move. If Black chooses to play in the center, White has two distinct moves—corner or side—then Black would get to play again. By imagining these possible futures, Black can choose which move to play in the initial state. If Black is unable to follow every possible line of play to the end of the game, then an evaluation function can be used to estimate how good the positions are at the leaves of the tree. Here, the evaluation function assigns +5 and +3 to two of the leaves.
The problem is that Go has more than 10¹⁷⁰ possible positions for the full 19×19 board. Whereas finding a guaranteed shortest route on a map is relatively easy, finding a guaranteed win in Go is utterly infeasible. Even if the algorithm ponders for the next billion years, it can explore only a tiny fraction of the whole tree of possibilities. This leads to two questions. First, which part of the tree should the program explore? And second, which move should the program make, given the partial tree that it has explored?
To answer the second question first: the basic idea used by almost all lookahead programs is to assign an estimated value to the “leaves” of the tree—those states furthest in the future—and then “work back” to find out how good the choices are at the root.¹ For example, looking at the two positions at the bottom of figure 16, one might guess a value of +5 (from Black’s viewpoint) for the position on the left and +3 for the position on the right, because White’s stone in the corner is much more vulnerable than the one on the side. If these values are right, then Black can expect that White will play on the side, leading to the right-hand position; hence, it seems reasonable to assign a value of +3 to Black’s initial move in the center. With slight variations, this is the scheme used by Arthur Samuel’s checker-playing program to beat its creator in 1955,² by Deep Blue to beat the then world chess champion, Garry Kasparov, in 1997, and by AlphaGo to beat former world Go champion Lee Sedol in 2016. For Deep Blue, humans wrote the piece of the program that evaluates positions at the leaves of the tree, based largely on their knowledge of chess. For Samuel’s program and for AlphaGo, the programs learned it from thousands or millions of practice games.
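In code, the "evaluate the leaves and work back" scheme is only a few lines. The leaf values for Black's center move are the +5 and +3 of figure 16; the values for the side and corner moves are invented to complete the example:

```python
# Backing up leaf evaluations through a tiny game tree (figure 16 in miniature).
# Black maximizes the estimated value; White minimizes it. The +5 and +3 leaves
# are from figure 16; the other leaf values are invented for illustration.

tree = {
    "center": {"white corner": 5, "white side": 3},
    "side":   {"white center": 2, "white corner": 4},
    "corner": {"white center": 1, "white side": 2},
}

def backed_up_value(node, black_to_move):
    if not isinstance(node, dict):        # a leaf: use the evaluation function
        return node
    values = [backed_up_value(child, not black_to_move) for child in node.values()]
    return max(values) if black_to_move else min(values)

# After Black's first move it is White's turn, so black_to_move is False.
best = max(tree, key=lambda move: backed_up_value(tree[move], black_to_move=False))
print(best, backed_up_value(tree[best], black_to_move=False))   # center 3
```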
The first question—which part of the tree should the program explore?—is an example of one of the most important questions in AI: What computations should an agent do? For game-playing programs, it is vitally important because they have only a small, fixed allocation of time, and using it on pointless computations is a sure way to lose. For humans and other agents operating in the real world, it is even more important because the real world is so much more complex: unless chosen well, no amount of computation is going to make the smallest dent in the problem of deciding what to do. If you are driving and a moose walks into the middle of the road, it’s no use thinking about whether to trade euros for pounds or whether Black should make its first move in the center of the Go board.
The ability of humans to manage their computational activity so that reasonable decisions get made reasonably quickly is at least as remarkable as their ability to perceive and to reason correctly. And it seems to be something we acquire naturally and effortlessly: when my father taught me to play chess, he taught me the rules, but he did not also teach me such-and-such clever algorithm for choosing which parts of the game tree to explore and which parts to ignore.
How does this happen? On what basis can we direct our thoughts? The answer is that a computation has value to the extent that it can improve your decision quality. The process of choosing computations is called metareasoning, which means reasoning about reasoning. Just as actions can be chosen rationally, on the basis of expected value, so can computations. This is called rational metareasoning.³ The basic idea is very simple:
Do the computations that will give the highest expected improvement in decision quality, and stop when the cost (in terms of time) exceeds the expected improvement.
That’s it. No fancy algorithm needed! This simple principle generates effective computational behavior in a wide range of problems, including chess and Go. It seems likely that our brains implement something similar, which explains why we don’t need to learn new, game-specific algorithms for thinking with each new game we learn to play.
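In sketch form, the loop looks something like the following. The candidate computations and the numbers attached to them are invented; estimating the expected improvement in decision quality is, of course, the hard part:

```python
# A bare-bones sketch of rational metareasoning: repeatedly perform whichever
# computation promises the largest net improvement in decision quality, and
# stop as soon as nothing on offer is worth its time cost. The candidates and
# their estimated benefits are invented for illustration.

candidates = [
    # (description, expected improvement in decision quality, time cost in seconds)
    ("explore the moose-avoidance options more deeply", 5.0, 0.5),
    ("re-examine the lane-change maneuver",             1.2, 0.4),
    ("ponder trading euros for pounds",                 0.0, 2.0),
]
VALUE_OF_TIME = 1.0   # cost of one second of deliberation, in the same units

while candidates:
    best = max(candidates, key=lambda c: c[1] - VALUE_OF_TIME * c[2])
    description, benefit, seconds = best
    if benefit - VALUE_OF_TIME * seconds <= 0:
        break                       # no remaining computation is worth the time
    print("thinking about:", description)
    candidates.remove(best)

print("stop deliberating; act on the current best choice")
```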
Exploring a tree of possibilities that stretches forward into the future from the current state is not the only way to reach decisions, of course. Often, it makes more sense to work backwards from the goal. For example, the presence of the moose in the road suggests the goal of avoiding the moose, which in turn suggests three possible actions: swerve left, swerve right, or slam on the brakes. It does not suggest the action of trading euros for pounds or putting a black stone in the center. Thus, goals have a wonderful focusing effect on one’s thinking. No current game-playing programs take advantage of this idea; in fact, they typically consider all possible legal actions. This is one of the (many) reasons why I am not worried about AlphaZero taking over the world.
Looking further ahead
Let’s suppose you have decided to make a specific move on the Go board. Great! Now you have to actually do it. In the real world, this involves reaching into the bowl of unplayed stones to pick up a stone, moving your hand above the intended location, and placing the stone neatly on the spot, either quietly or emphatically according to Go etiquette.
Each of these stages, in turn, consists of a complex dance of perception and motor control commands involving the muscles and nerves of the hand, arm, shoulder, and eyes. And while reaching for a stone, you’re making sure the rest of your body doesn’t topple over thanks to the shift in your center of gravity. The fact that you may not be consciously aware of selecting these actions does not mean that they aren’t being selected by your brain. For example, there may be many stones in the bowl, but your “hand”—really, your brain processing sensory information—still has to choose one of them to pick up.
Almost everything we do is like this. While driving, we might choose to change lanes to the left; but this action involves looking in the mirror and over your shoulder, perhaps adjusting speed, and moving the steering wheel while monitoring progress until the maneuver is complete. In conversation, a routine response such as “OK, let me check my calendar and get back to you” involves articulating fourteen syllables, each of which requires hundreds of precisely coordinated motor control commands to the muscles of the tongue, lips, jaw, throat, and breathing apparatus. For your native language, this process is automatic; it closely resembles the idea of running a subroutine in a computer program (see this page). The fact that complex action sequences can become routine and automatic, thereby functioning as single actions in still more complex processes, is absolutely fundamental to human cognition. Saying words in a less familiar language—perhaps asking directions to Szczebrzeszyn in Poland—is a useful reminder that there was a time in your life when reading and speaking words were difficult tasks requiring mental effort and lots of practice.
So, the real problem that your brain faces is not choosing a move on the Go board but sending motor control commands to your muscles. If we shift our attention from the level of Go moves to the level of motor control commands, the problem looks very different. Very roughly, your brain can send out commands about every one hundred milliseconds. We have about six hundred muscles, so that’s a theoretical maximum of about six thousand actuations per second, twenty million per hour, two hundred billion per year, twenty trillion per lifetime. Use them wisely!
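Spelled out, assuming a hundred-year lifetime for round numbers:

```python
# The motor-control arithmetic from the text, with a hundred-year lifetime
# assumed for round numbers.
per_second   = 600 * 10              # ~600 muscles, one command roughly every 100 ms
per_hour     = per_second * 3600     # ~21.6 million
per_year     = per_hour * 24 * 365   # ~189 billion
per_lifetime = per_year * 100        # ~19 trillion

print(f"{per_hour:.1e} per hour, {per_year:.1e} per year, {per_lifetime:.1e} per lifetime")
```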
Now, suppose we tried to apply an AlphaZero-like algorithm to solve the decision problem at this level. In Go, AlphaZero looks ahead perhaps fifty steps. But fifty steps of motor control commands get you only a few seconds into the future! Not enough for the twenty million motor control commands in an hour-long game of Go, and certainly not enough for the trillion (1,000,000,000,000) steps involved in doing a PhD. So, even though AlphaZero looks further ahead in Go than any human can, that ability doesn’t seem to help in the real world. It’s the wrong kind of lookahead.
I’m not saying, of course, that doing a PhD actually requires planning out a trillion muscle actuations in advance. Only quite abstract plans are made initially—perhaps choosing Berkeley or some other place, choosing a PhD supervisor or research topic, applying for funding, getting a student visa, traveling to the chosen city, doing some research, and so on. To make your choices, you do just enough thinking, about just the right things, so that the decision becomes clear. If the feasibility of some abstract step such as getting the visa is unclear, you do some more thinking and perhaps information gathering, which means making the plan more concrete in certain aspects: maybe choosing a visa type for which you are eligible, collecting the necessary documents, and submitting the application. Figure 17 shows the abstract plan and the refinement of the GetVisa step into a three-step subplan. When the time comes to begin carrying out the plan, its initial steps have to be refined all the way down to the primitive level so that your body can execute them.
FIGURE 17: An abstract plan for an overseas student who has chosen to get a PhD at Berkeley. The GetVisa step, whose feasibility is uncertain, has been expanded out into an abstract plan of its own.
AlphaGo simply cannot do this kind of thinking: the only actions it ever considers are primitive actions occurring in a sequence from the initial state. It has no notion of abstract plan. Trying to apply AlphaGo in the real world is like trying to write a novel by wondering whether the first letter should be A, B, C, and so on.
In 1962, Herbert Simon emphasized the importance of hierarchical organization in a famous paper, “The Architecture of Complexity.”⁴ AI researchers since the early 1970s have developed a variety of methods that construct and refine hierarchically organized plans.⁵ Some of the resulting systems are able to construct plans with tens of millions of steps—for example, to organize manufacturing activities in a large factory.
We now have a pretty good theoretical understanding of the meaning of abstract actions—that is, of how to define the effects they have on the world.⁶ Consider, for example, the abstract action GoToBerkeley in figure 17. It can be implemented in many different ways, each of which produces different effects on the world: you could sail there, stow away on a ship, fly to Canada and walk across the border, hire a private jet, and so on. But you need not consider any of these choices for now. As long as you are sure there is a way to do it that doesn’t consume so much time and money or incur so much risk as to imperil the rest of the plan, you can just put the abstract step GoToBerkeley into the plan and rest assured that the plan will work. In this way, we can build high-level plans that will eventually turn into billions or trillions of primitive steps without ever worrying about what those steps are until it’s time to actually do them.
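A toy sketch of this kind of refinement, in Python. The action names and the particular refinements are invented for illustration; a real hierarchical planner would also reason about preconditions, durations, costs, and risks before committing to an abstract step.

```python
# Toy hierarchical plan refinement in the spirit of the PhD example.
# Abstract steps are expanded lazily; primitive steps (no entry in
# REFINEMENTS) are left alone. All names here are illustrative.

REFINEMENTS = {
    "GetPhD": [["ChooseBerkeley", "GetAdmitted", "GetVisa",
                "GoToBerkeley", "DoResearch", "WriteThesis"]],
    "GetVisa": [["ChooseVisaType", "CollectDocuments", "SubmitApplication"]],
    "GoToBerkeley": [["FlyToSFO", "TakeTrainToBerkeley"],
                     ["SailToSanFrancisco", "TakeBusToBerkeley"]],
}

def refine_once(plan):
    """Expand the first abstract step found; return the plan unchanged
    if every step is already primitive."""
    for i, step in enumerate(plan):
        if step in REFINEMENTS:
            return plan[:i] + REFINEMENTS[step][0] + plan[i + 1:]
    return plan

plan = ["GetPhD", "StartCompany"]
while True:
    expanded = refine_once(plan)
    if expanded == plan:
        break
    plan = expanded
print(plan)   # the abstract plan, expanded down to (toy) primitive steps
```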
Of course, none of this is possible without the hierarchy. Without high-level actions such as getting a visa and writing a thesis, we cannot make an abstract plan to get a PhD; without still-higher-level actions such as getting a PhD and starting a company, we cannot plan to get a PhD and then start a company. In the real world, we would be lost without a vast library of actions at dozens of levels of abstraction. (In the game of Go, there is no obvious hierarchy of actions, so most of us are lost.) At present, however, all existing methods for hierarchical planning rely on a human-generated hierarchy of abstract and concrete actions; we do not yet understand how such hierarchies can be learned from experience.
Appendix B
KNOWLEDGE AND LOGIC
Logic is the study of reasoning with definite knowledge. It is fully general with regard to subject matter—that is, the knowledge can be about anything at all. Logic is therefore an indispensable part of our understanding of general purpose intelligence.
Logic’s main requirement is a formal language with precise meanings for the sentences in the language, so that there is an unambiguous process for determining whether a sentence is true or false in a given situation. That’s it. Once we have that, we can write sound reasoning algorithms that produce new sentences from sentences that are already known. Those new sentences are guaranteed to follow from the sentences that the system already knows, meaning that the new sentences are necessarily true in any situation where the original sentences are true. This allows a machine to answer questions, prove mathematical theorems, or construct plans that are guaranteed to succeed.
High-school algebra provides a good example (albeit one that may evoke painful memories). The formal language includes sentences such as 4x + 1 = 2y − 5. This sentence is true in the situation where x = 5 and y = 13, and false when x = 5 and y = 6. From this sentence one can derive another sentence such as y = 2x + 3, and whenever the first sentence is true, the second is guaranteed to be true too.
The core idea of logic, developed independently in ancient India, China, and Greece, is that the same notions of precise meaning and sound reasoning can be applied to sentences about anything at all, not just numbers. The canonical example starts with “Socrates is a man” and “All men are mortal” and derives “Socrates is mortal.”¹ This derivation is strictly formal in the sense that it does not rely on any further information about who Socrates is or what man and mortal mean. The fact that logical reasoning is strictly formal means that it is possible to write algorithms that do it.
Propositional logic
For our purposes in understanding the capabilities and prospects for AI, there are two important kinds of logic that really matter: propositional logic and first-order logic. The difference between the two is fundamental to understanding the current situation in AI and how it is likely to evolve.
Let’s start with propositional logic, which is the simpler of the two. Sentences are made of just two kinds of things: symbols that stand for propositions that can be true or false, and logical connectives such as and, or, not, and if . . . then. (We’ll see an example shortly.) These logical connectives are sometimes called Boolean, after George Boole, a nineteenth-century logician who reinvigorated his field with new mathematical ideas. They are just the same as the logic gates used in computer chips.
Practical algorithms for reasoning in propositional logic have been known since the early 1960s.²,³ Although the general reasoning task may require exponential time in the worst case,⁴ modern propositional reasoning algorithms handle problems with millions of proposition symbols and tens of millions of sentences. They are a core tool for constructing guaranteed logistical plans, verifying chip designs before they are manufactured, and checking the correctness of software applications and security protocols before they are deployed. The amazing thing is that a single algorithm—a reasoning algorithm for propositional logic—solves all these tasks once they have been formulated as reasoning tasks. Clearly, this is a step towards the goal of generality in intelligent systems.
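As a sketch of what sound reasoning means in practice, here is a brute-force propositional entailment checker in Python. Unlike the solvers just mentioned, it enumerates every truth assignment, so it is only usable for a handful of symbols, but the guarantee is the same: if it says the query follows from the knowledge base, the query is true in every situation where the knowledge base is true. (The Socrates example appears here in propositionalized form.)

```python
from itertools import product

def evaluate(sentence, model):
    """Sentences are proposition symbols (strings) or nested tuples:
    ('not', s), ('and', s1, s2), ('or', s1, s2), ('implies', s1, s2)."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    vals = [evaluate(a, model) for a in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return all(vals)
    if op == 'or':
        return any(vals)
    if op == 'implies':
        return (not vals[0]) or vals[1]

def symbols(sentence, acc=None):
    """Collect all proposition symbols appearing in a sentence."""
    acc = set() if acc is None else acc
    if isinstance(sentence, str):
        acc.add(sentence)
    else:
        for a in sentence[1:]:
            symbols(a, acc)
    return acc

def entails(kb, query):
    """True if the query holds in every truth assignment where kb holds."""
    syms = sorted(symbols(('and', kb, query)))
    for values in product([True, False], repeat=len(syms)):
        model = dict(zip(syms, values))
        if evaluate(kb, model) and not evaluate(query, model):
            return False
    return True

kb = ('and', 'SocratesIsMan',
      ('implies', 'SocratesIsMan', 'SocratesIsMortal'))
print(entails(kb, 'SocratesIsMortal'))   # True
```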
Unfortunately, it’s not a very big step because the language of propositional logic is not very expressive. Let’s see what this means in practice when we try to express the basic rule for legal moves in Go: “The player whose turn it is to move can play a stone on any unoccupied intersection.”⁵ The first step is to decide what the proposition symbols are going to be for talking about Go moves and Go board positions. The fundamental proposition that matters is whether a stone of a particular color is on a particular location at a particular time. So, we’ll need symbols such as White_Stone_On_5_5_At_Move_38 and Black_Stone_On_5_5_At_Move_38. (Remember that, as with man, mortal, and Socrates, the reasoning algorithm doesn’t need to know what the symbols mean.) Then the logical condition for White to be able to play at the 5,5 intersection at move 38 would be
(not White_Stone_On_5_5_At_Move_38) and
(not Black_Stone_On_5_5_At_Move_38)
In other words: there’s no white stone and there’s no black stone. That seems simple enough. Unfortunately, in propositional logic it would have to be written out separately for each location and for each move in the game. Because there are 361 locations and around 300 moves per game, this means over 100,000 copies of the rule! For the rules concerning captures and repetitions, which involve multiple stones and locations, the situation is even worse, and we quickly fill up millions of pages.
The real world is, obviously, much bigger than the Go board: there are far more than 361 locations and 300 time steps, and there are many kinds of things besides stones; so, the prospect of using a propositional language for knowledge of the real world is utterly hopeless.
It’s not just the ridiculous size of the rulebook that’s a problem: it’s also the ridiculous amount of experience a learning system would need to acquire the rules from examples. While a human needs just one or two examples to get the basic ideas of placing a stone, capturing stones, and so on, an intelligent system based on propositional logic has to be shown examples of moving and capturing separately for each location and time step. The system cannot generalize from a few examples, as a human does, because it has no way to express the general rule. This limitation applies not just to systems based on propositional logic but also to any system with comparable expressive power. That includes Bayesian networks, which are probabilistic cousins of propositional logic, and neural networks, which are the basis for the “deep learning” approach to AI.
First-order logic
So, the next question is, can we devise a more expressive logical language? We’d like one in which it is possible to tell the rules of Go to the knowledge-based system in the following way:
for all locations on the board, and for all time steps, here are the rules . . .
First-order logic, introduced by the German mathematician Gottlob Frege in 1879, allows one to write the rules this way.⁶ The key difference between propositional and first-order logic is this: whereas propositional logic assumes the world is made of propositions that are true or false, first-order logic assumes the world is made of objects that can be related to each other in various ways. For example, there could be locations that are adjacent to each other, times that follow each other consecutively, stones that are on locations at particular times, and moves that are legal at particular times. First-order logic allows one to assert that some property is true for all objects in the world; so, one can write
for all time steps t, and for all locations l, and for all colors c,
if it is c’s turn to move at time t and l is unoccupied at time t,
then it is legal for c to play a stone at location l at time t.
With some extra caveats and some additional sentences that define the board locations, the two colors, and what unoccupied means, we have the beginnings of the complete rules of Go. The rules take up about as much space in first-order logic as they do in English.
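The fragment below is not a theorem prover, but it shows why variables matter: one parametrized definition covers every color, location, and time step, where propositional logic needed more than a hundred thousand separate copies. The board and turn functions are invented conveniences for the sketch.

```python
# One rule with variables, instead of 100,000+ propositional copies.
# board(l, t) is assumed to return 'black', 'white', or None (unoccupied),
# and turn(t) to return whose move it is; both are illustrative stand-ins.

def legal_basic(color, location, t, board, turn):
    """for all t, l, c: if it is c's turn at t and l is unoccupied at t,
    then it is legal for c to play a stone at l at t (basic rule only)."""
    return turn(t) == color and board(location, t) is None
```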
The development of logic programming in the late 1970s provided elegant and efficient technology for logical reasoning embodied in a programming language called Prolog. Computer scientists worked out how to make logical reasoning in Prolog run at millions of reasoning steps per second, making many applications of logic practical. In 1982, the Japanese government announced a huge investment in Prolog-based AI called the Fifth Generation project,⁷ and the United States and UK responded with similar efforts.⁸,⁹
Unfortunately, the Fifth Generation project and others like it ran out of steam in the late 1980s and early 1990s, partly because of the inability of logic to handle uncertain information. They epitomized what soon became a pejorative term: Good Old-Fashioned AI, or GOFAI.¹⁰ It became fashionable to dismiss logic as irrelevant to AI; indeed, many AI researchers working now in the area of deep learning don’t know anything about logic. This fashion seems likely to fade: if you accept that the world has objects in it that are related to each other in various ways, then first-order logic is going to be relevant, because it provides the basic mathematics of objects and relations. This view is shared by Demis Hassabis, CEO of Google DeepMind:¹¹
You can think about deep learning as it currently is today as the equivalent in the brain to our sensory cortices: our visual cortex or auditory cortex. But, of course, true intelligence is a lot more than just that, you have to recombine it into higher-level thinking and symbolic reasoning, a lot of the things classical AI tried to deal with in the 80s.
. . . We would like [these systems] to build up to this symbolic level of reasoning—maths, language, and logic. So that’s a big part of our work.
Thus, one of the most important lessons from the first thirty years of AI research is that a program that knows things, in any useful sense, will need a capacity for representation and reasoning that is at least comparable to that offered by first-order logic. As yet, we do not know the exact form this will take: it may be incorporated into probabilistic reasoning systems, into deep learning systems, or into some still-to-be-invented hybrid design.
Appendix C
UNCERTAINTY AND PROBABILITY
Whereas logic provides a general basis for reasoning with definite knowledge, probability theory encompasses reasoning with uncertain information (of which definite knowledge is a special case). Uncertainty is the normal epistemic situation of an agent in the real world. Although the basic ideas of probability were developed in the seventeenth century, only recently has it become possible to represent and reason with large probability models in a formal way.
The basics of probability
Probability theory shares with logic the idea that there are possible worlds. One usually starts out by defining what they are—for example, if I am rolling one ordinary six-sided die, there are six worlds (sometimes called outcomes): 1, 2, 3, 4, 5, 6. Exactly one of them will be the case, but a priori I don’t know which. Probability theory assumes that it is possible to attach a probability to each world; for my die roll, I’ll attach 1/6 to each world. (These probabilities happen to be equal, but it need not be that way; the only requirement is that the probabilities have to add up to 1.) Now I can ask a question such as “What’s the probability I’ll roll an even number?” To find this, I simply add up the probabilities for the three worlds where the number is even: 1/6 + 1/6 + 1/6 = ½.
It’s also straightforward to take new evidence into account. Suppose an oracle tells me that the roll is a prime number (that is, 2, 3, or 5). This rules out the worlds 1, 4, and 6. I simply take the probabilities associated with the remaining possible worlds and scale them up so the total remains 1. Now the probabilities of 2, 3, and 5 are each ⅓, and the probability that my roll is an even number is now just ⅓, since 2 is the only remaining even roll. This process of updating probabilities as new evidence arrives is an example of Bayesian updating.
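The die example takes only a few lines of Python, with the updating step written out explicitly:

```python
# Possible worlds for one die roll, then updating on "the roll is prime".
worlds = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

def update(worlds, consistent_with_evidence):
    """Discard worlds ruled out by the evidence; rescale the rest to sum to 1."""
    kept = {w: p for w, p in worlds.items() if consistent_with_evidence(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

print(sum(p for w, p in worlds.items() if w % 2 == 0))     # 0.5
posterior = update(worlds, lambda w: w in (2, 3, 5))        # prime rolls
print(sum(p for w, p in posterior.items() if w % 2 == 0))   # about 0.333
```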
So, this probability stuff seems quite simple! Even a computer can add up numbers, so what’s the problem? The problem comes when there are more than a few worlds. For example, if I roll the die one hundred times, there are 6¹⁰⁰ outcomes. It’s infeasible to begin the process of probabilistic reasoning by attaching a number to each of these outcomes individually. A clue for dealing with this complexity comes from the fact that the die rolls are independent if the die is known to be fair—that is, the outcome of any single roll does not affect the probabilities for the outcomes of any other roll. Thus, independence is helpful in structuring the probabilities for complex sets of events.
Suppose I am playing Monopoly with my son George. My piece is on Just Visiting, and George owns the yellow set whose properties are sixteen, seventeen, and nineteen squares away from Just Visiting. Should he buy houses for the yellow set now, so that I have to pay him some exorbitant rent if I land on those squares, or should he wait until the next turn? That depends on the probability of landing on the yellow set in my current turn.
Here are the rules for rolling the dice in Monopoly: two dice are rolled and the piece is moved according to the total shown; if doubles are rolled, the player rolls again and moves again; if the second roll is doubles, the player rolls a third time and moves again (but if the third roll is doubles, the player goes to jail instead). So, for example, I might roll 4-4 followed by 5-4, totaling 17; or 2-2, then 2-2, then 6-2, totaling 16. As before, I simply add up the probabilities of all worlds where I land on the yellow set. Unfortunately, there are a lot of worlds. As many as six dice could be rolled altogether, so the number of worlds runs into the thousands. Furthermore, the rolls are no longer independent, because the second roll won’t exist unless the first roll is doubles. On the other hand, if we fix the values of the first pair of dice, then the values of the second pair of dice are independent. Is there a way to capture this kind of dependency?
Bayesian networks
In the early 1980s, Judea Pearl proposed a formal language called Bayesian networks (often abbreviated to Bayes nets) that makes it possible, in many real-world situations, to represent the probabilities of a very large number of outcomes in a very concise form.¹
Figure 18 shows a Bayesian network that describes the rolling of dice in Monopoly. The only probabilities that have to be supplied are the 1/6 probabilities of the values 1, 2, 3, 4, 5, 6 for the individual die rolls (D₁, D₂, etc.)—that is, thirty-six numbers instead of thousands. Explaining the exact meaning of the network requires a little bit of mathematics,² but the basic idea is that the arrows denote dependency relationships—for example, the value of Doubles₁₂ depends on the values of D₁ and D₂. Similarly, the values of D₃ and D₄ (the next roll of the two dice) depend on Doubles₁₂ because if Doubles₁₂ has value false, then D₃ and D₄ have value 0 (that is, there is no next roll).
Just as with propositional logic, there are algorithms that can answer any question for any Bayesian network with any evidence. For example, we can ask for the probability of LandsOnYellowSet, which turns out to be about 3.88 percent. (This means that George can wait before buying houses for the yellow set.) Slightly more ambitiously, we can ask for the probability of LandsOnYellowSet given that the second roll is a double-3. The algorithm works out for itself that, in that case, the first roll must have been a double and concludes that the answer is about 36.1 percent. This is an example of Bayesian updating: when the new evidence (that the second roll is a double-3) is added, the probability of LandsOnYellowSet changes from 3.88 percent to 36.1 percent. Similarly, the probability that I roll three times (Doubles₃₄ is true) is 2.78 percent, while the probability that I roll three times given that I land on the yellow set is 20.44 percent.
FIGURE 18: A Bayesian network that represents the rules for rolling dice in Monopoly and enables an algorithm to calculate the probability of landing on a particular set of squares (such as the yellow set) starting from some other square (such as Just Visiting). (For simplicity, the network omits the possibility of landing on a Chance or Community Chest square and being diverted to a different location.) D₁ and D₂ represent the initial roll of two dice and they are independent (no link between them). If doubles are rolled (Doubles₁₂), then the player rolls again, so D₃ and D₄ have non-zero values, and so on. In the situation described, the player lands on the yellow set if any of the three totals is 16, 17, or 19.
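Because this example is still small, the numbers can be checked by brute-force enumeration of the worlds rather than by building the network. A sketch in Python (the targets 16, 17, and 19 are the distances from Just Visiting to the yellow set, and the go-to-jail rule on a third double is included):

```python
from itertools import product

TARGETS = {16, 17, 19}   # distances from Just Visiting to the yellow set

def worlds():
    """Yield (probability, lands_on_yellow, second_roll) for each world."""
    for d1, d2 in product(range(1, 7), repeat=2):
        if d1 != d2:                                 # no doubles: one roll
            yield 1 / 36, (d1 + d2) in TARGETS, None
            continue
        for d3, d4 in product(range(1, 7), repeat=2):
            lands2 = (d1 + d2 + d3 + d4) in TARGETS
            if d3 != d4:                             # two rolls only
                yield 1 / 36 ** 2, lands2, (d3, d4)
                continue
            for d5, d6 in product(range(1, 7), repeat=2):
                if d5 == d6:                         # third double: jail
                    lands3 = False
                else:
                    lands3 = (d1 + d2 + d3 + d4 + d5 + d6) in TARGETS
                yield 1 / 36 ** 3, lands2 or lands3, (d3, d4)

p_yellow = sum(p for p, lands, _ in worlds() if lands)
print(round(100 * p_yellow, 2))                      # about 3.88 percent

# Bayesian updating: condition on the second roll being a double-3.
ev = [(p, lands) for p, lands, second in worlds() if second == (3, 3)]
p_given = sum(p for p, lands in ev if lands) / sum(p for p, _ in ev)
print(round(100 * p_given, 1))                       # about 36.1 percent
```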
Bayesian networks provide a way to build knowledge-based systems that avoids the failures that plagued the rule-based expert systems of the 1980s. (Indeed, had the AI community been less resistant to probability in the early 1980s, it might have avoided the AI winter that followed the rule-based expert system bubble.) Thousands of applications have been fielded, in areas ranging from medical diagnosis to terrorism prevention.³
Bayesian networks provide machinery for representing the necessary probabilities and performing the calculations to implement Bayesian updating for many complex tasks. Like propositional logic, however, they are quite limited in their ability to represent general knowledge. In many applications, the Bayesian network representation becomes very large and repetitive—for example, just as the rules of Go have to be repeated for every square in propositional logic, the probability-based rules of Monopoly have to be repeated for every player, for every location a player might be on, and for every move in the game. Such huge networks are virtually impossible to create by hand; instead, one would have to resort to code written in a traditional language such as C++ to generate and piece together multiple Bayes net fragments. While this is practical as an engineering solution for a specific problem, it is an obstacle to generality because the C++ code has to be written anew by a human expert for each application.
First-order probabilistic languages
It turns out, fortunately, that we can combine the expressiveness of first-order logic with the ability of Bayesian networks to capture probabilistic information concisely. This combination gives us the best of both worlds: probabilistic knowledge-based systems are able to handle a much wider range of real-world situations than either logical methods or Bayesian networks. For example, we can easily capture probabilistic knowledge about genetic inheritance:
for all persons c, f, and m,
if f is the father of c and m is the mother of c
and both f and m have blood type AB,
then c has blood type AB with probability 0.5.
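In a probabilistic programming language this rule is essentially a one-liner; the plain-Python sketch below conveys the idea by sampling. Only the AB-parents case stated above is modelled, and the split of the remaining probability between types A and B is an added assumption for illustration.

```python
import random

def child_blood_type(father_type, mother_type):
    """Toy generative version of the rule above (AB parents only)."""
    if father_type == "AB" and mother_type == "AB":
        # AB with probability 0.5, per the rule; the 0.25/0.25 split of
        # the remainder between A and B is an illustrative assumption.
        return random.choices(["AB", "A", "B"], weights=[0.5, 0.25, 0.25])[0]
    raise NotImplementedError("only the AB x AB case from the text")

# The same rule applies to every (child, father, mother) triple:
samples = [child_blood_type("AB", "AB") for _ in range(10_000)]
print(samples.count("AB") / len(samples))   # close to 0.5
```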
The combination of first-order logic and probability actually gives us much more than just a way to express uncertain information about lots of objects. The reason is that when we add uncertainty to worlds containing objects, we get two new kinds of uncertainty: not just uncertainty about which facts are true or false but also uncertainty about what objects exist and uncertainty about which objects are which. These kinds of uncertainty are completely pervasive. The world does not come with a list of characters, like a Victorian play; instead, you gradually learn about the existence of objects from observation.
Sometimes the knowledge of new objects can be fairly definite, as when you open your hotel window and see the basilica of Sacré-Cœur for the first time; or it can be quite indefinite, as when you feel a gentle rumble that might be an earthquake or a passing subway train. And while the identity of Sacré-Cœur is quite unambiguous, the identity of subway trains is not: you might ride the same physical train hundreds of times without ever realizing it’s the same one. Sometimes we don’t need to resolve the uncertainty: I don’t usually name all the tomatoes in a bag of cherry tomatoes and keep track of how well each one is doing, unless perhaps I am recording the progress of a tomato putrefaction experiment. For a class full of graduate students, on the other hand, I try my best to keep track of their identities. (Once, there were two research assistants in my group who had the same first and last names and were of very similar appearance and worked on closely related topics; at least, I am fairly sure there were two.) The problem is that we directly perceive not the identity of objects but (aspects of) their appearance; objects do not usually have little license plates that uniquely identify them. Identity is something our minds sometimes attach to objects for our own purposes.
The combination of probability theory with an expressive formal language is a fairly new subfield of AI, often called probabilistic programming.⁴ Several dozen probabilistic programming languages, or PPLs, have been developed, many of them deriving their expressive power from ordinary programming languages rather than first-order logic. All PPL systems have the capacity to represent and reason with complex, uncertain knowledge. Applications include Microsoft’s TrueSkill system, which rates millions of video game players every day; models for aspects of human cognition that were previously inexplicable by any mechanistic hypothesis, such as the ability to learn new visual categories of objects from single examples;⁵ and the global seismic monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which is responsible for detecting clandestine nuclear explosions.⁶
The CTBT monitoring system collects real-time ground movement data from a global network of over 150 seismometers and aims to identify all the seismic events occurring on Earth above a certain magnitude and to flag the suspicious ones. Clearly there is plenty of existence uncertainty in this problem, because we don’t know in advance the events that will occur; moreover, the vast majority of signals in the data are just noise. There is also lots of identity uncertainty: a blip of seismic energy detected at station A in Antarctica may or may not come from the same event as another blip detected at station B in Brazil. Listening to the Earth is like listening to thousands of simultaneous conversations that have been scrambled by transmission delays and echoes and drowned out by crashing waves.
How do we solve this problem using probabilistic programming? One might think we need some very clever algorithms to sort out all the possibilities. In fact, by following the methodology of knowledge-based systems, we don’t have to devise any new algorithms at all. We simply use a PPL to express what we know of geophysics: how often events tend to occur in areas of natural seismicity, how fast seismic waves travel through the Earth and how quickly they decay, how sensitive the detectors are, and how much noise there is. Then we add the data and run a probabilistic reasoning algorithm. The resulting monitoring system, called NET-VISA, has been operating as part of the treaty verification regime since 2018. Figure 19 shows NET-VISA’s detection of a 2013 nuclear test in North Korea.
FIGURE 19: Location estimates for the February 12, 2013, nuclear test carried out by the government of North Korea. The tunnel entrance (black cross at lower center) was identified in satellite photographs. The NET-VISA location estimate is approximately 700 meters from the tunnel entrance and is based primarily on detections at stations 4,000 to 10,000 kilometers away. The CTBTO LEB location is the consensus estimate from expert geophysicists.
Keeping track of the world
One of the most important roles for probabilistic reasoning is in keeping track of parts of the world that are not directly observable. In most video and board games, this is unnecessary because all the relevant information is observable, but in the real world this is seldom the case.
FIGURE 20: (left) Diagram of the situation leading up to the accident. The self-driving Volvo, marked V, is approaching an intersection, driving in the rightmost lane at thirty-eight miles per hour. Traffic in the other two lanes is stopped and the traffic light (L) is turning yellow. Invisible to the Volvo, a Honda (H) is making a left turn; (right) aftermath of the accident.
An example is given by one of the first serious accidents involving a self-driving car. It occurred on South McClintock Drive at East Don Carlos Avenue in Tempe, Arizona, on March 24, 2017.⁷ As shown in figure 20, a self-driving Volvo (V), going south on McClintock, is approaching an intersection where the traffic light is just turning yellow. The Volvo’s lane is clear, so it proceeds at the same speed through the intersection. Then a currently invisible vehicle—the Honda (H) in figure 20—appears from behind the queue of stopped traffic and a collision ensues.
To infer the possible presence of the invisible Honda, the Volvo could gather clues as it approaches the intersection. In particular, the traffic in the other two lanes is stopped even though the light is green; the cars at the front of the queue are not inching forward into the intersection and have their brake lights on. This is not conclusive evidence of an invisible left turner but it doesn’t need to be; even a small probability is enough to suggest slowing down and entering the intersection more cautiously.
The moral of this story is that intelligent agents operating in partially observable environments have to keep track of what they can’t see—to the extent possible—based on clues from what they can see.
Here’s another example closer to home: Where are your keys? Unless you happen to be driving while reading this book—not recommended—you probably cannot see them right now. On the other hand, you probably know where they are: in your pocket, in your bag, on the bedside table, in the pocket of your coat which is hanging up, or maybe on the hook in the kitchen. You know this because you put them there and they haven’t moved since. This is a simple example of using knowledge and reasoning to keep track of the state of the world.
Without this capability, we would be lost—often quite literally. For example, as I write this, I am looking at the white wall of a nondescript hotel room. Where am I? If I had to rely on my current perceptual input, I would indeed be lost. In fact, I know that I am in Zürich, because I arrived in Zürich yesterday and I haven’t left. Like humans, robots need to know where they are so that they can navigate successfully through rooms, buildings, streets, forests, and deserts.
In AI we use the term belief state to refer to an agent’s current knowledge of the state of the world—however incomplete and uncertain it may be. Generally, the belief state—rather than the current perceptual input—is the proper basis for making decisions about what to do. Keeping the belief state up to date is a core activity for any intelligent agent. For some parts of the belief state, this happens automatically—for example, I just seem to know that I’m in Zürich, without having to think about it. For other parts, it happens on demand, so to speak. For example, when I wake up in a new city with severe jet lag, halfway through a long trip, I may have to make a conscious effort to reconstruct where I am, what I am supposed to be doing, and why—a bit like a laptop rebooting itself, I suppose. Keeping track doesn’t mean always knowing exactly the state of everything in the world. Obviously this is impossible—for example, I have no idea who is occupying the other rooms in my nondescript hotel in Zürich, let alone the present locations and activities of most of the eight billion people on Earth. I haven’t the faintest idea what’s happening in the rest of the universe beyond the solar system. My uncertainty about the current state of affairs is both massive and inevitable.
The basic method for keeping track of an uncertain world is Bayesian updating. Algorithms for doing this usually have two steps: a prediction step, where the agent predicts the current state of the world given its most recent action, and then an update step, where it receives new perceptual input and updates its beliefs accordingly. To illustrate how this works, consider the problem a robot faces in figuring out where it is. Figure 21(a) illustrates a typical case: The robot is in the middle of a room, with some uncertainty about its exact location, and wants to go through the door. It commands its wheels to move 1.5 meters towards the door; unfortunately, its wheels are old and wobbly, so the robot’s prediction about where it ends up is quite uncertain, as shown in figure 21(b). If it tried to keep moving now, it might well crash. Fortunately, it has a sonar device to measure the distance to the doorposts. As figure 21(c) shows, the measurements suggest the robot is about 70 centimeters from the left doorpost and 85 centimeters from the right. Finally, the robot updates its belief state by combining the prediction in (b) with the measurements in (c) to obtain the new belief state in figure 21(d).
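Here is a one-dimensional sketch of the predict-update cycle with Gaussian beliefs (the robot in figure 21 lives in two dimensions, but the idea is the same; all the numbers below are made up):

```python
# Belief = a Gaussian over position, represented by (mean, variance).

def predict(mean, var, commanded_move, motion_noise_var):
    """Prediction step: the robot thinks it moved the commanded distance,
    but wobbly wheels add uncertainty."""
    return mean + commanded_move, var + motion_noise_var

def update(mean, var, measured_position, sensor_noise_var):
    """Update step: fold in a noisy sonar-style measurement (this is
    exact Bayesian updating when everything is Gaussian)."""
    k = var / (var + sensor_noise_var)       # how much to trust the sensor
    return mean + k * (measured_position - mean), (1 - k) * var

mean, var = 0.0, 0.04        # initial belief: near 0, fairly certain
mean, var = predict(mean, var, commanded_move=1.5, motion_noise_var=0.25)
mean, var = update(mean, var, measured_position=1.3, sensor_noise_var=0.05)
print(mean, var)   # closer to the door, and more certain than after predict
```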
The algorithm for keeping track of the belief state can be applied to handle not just uncertainty about location but also uncertainty about the map itself. This results in a technique called SLAM (simultaneous localization and mapping). SLAM is a core component of many AI applications, ranging from augmented reality systems to self-driving cars and planetary rovers.
FIGURE 21: A robot trying to move through a doorway. (a) The initial belief state: the robot is somewhat uncertain of its location; it tries to move 1.5 meters towards the door. (b) The prediction step: the robot estimates that it is closer to the door but is quite uncertain about the direction it actually moved because its motors are old and its wheels wobbly. (c) The robot measures the distance to each doorpost using a poor-quality sonar device; the estimates are 70 centimeters from the left doorpost and 85 centimeters from the right. (d) The update step: combining the prediction in (b) with the observation in (c) gives the new belief state. Now the robot has a pretty good idea of where it is and will need to correct its course a bit to get through the door.
Appendix D
LEARNING FROM EXPERIENCE
Learning means improving performance based on experience. For a visual perception system, that might mean learning to recognize more categories of objects based on seeing examples of those categories; for a knowledge-based system, simply acquiring more knowledge is a form of learning, because it means the system can answer more questions; for a lookahead decision-making system such as AlphaGo, learning could mean improving its ability to evaluate positions or improving its ability to explore useful parts of the tree of possibilities.
Learning from examples
The most common form of machine learning is called supervised learning. A supervised learning algorithm is given a collection of training examples, each labeled with the correct output, and must produce a hypothesis as to what the correct rule is. Typically, a supervised learning system seeks to optimize the agreement between the hypothesis and the training examples. Often there is also a penalty for hypotheses that are more complicated than necessary—as recommended by Ockham’s razor.
FIGURE 22: Legal and illegal moves in Go: moves A, B, and C are legal for Black, while moves D, E, and F are illegal. Move G might or might not be legal, depending on what has happened previously in the game.
Let’s illustrate this for the problem of learning the legal moves in Go. (If you already know the rules of Go, then at least this will be easy to follow; if not, then you’ll be better able to sympathize with the learning program.) Suppose the algorithm starts with the hypothesis
for all time steps t, and for all locations l,
it is legal to play a stone at location l at time t.
It is Black’s turn to move in the position shown in figure 22. The algorithm tries A: that’s fine. B and C too. Then it tries D, on top of an existing white piece: that’s illegal. (In chess or backgammon, it would be fine—that’s how pieces are captured.) The move at E, on top of a black piece, is also illegal. (Illegal in chess too, but legal in backgammon.) Now, from these five training examples, the algorithm might propose the following hypothesis:
for all time steps t, and for all locations l,
if l is unoccupied at time t,
then it is legal to play a stone at location l at time t.
Then it tries F and finds to its surprise that F is illegal. After a few false starts, it settles on the following:
for all time steps t, and for all locations l,
if l is unoccupied at time t and
l is not surrounded by opponent stones,
then it is legal to play a stone at location l at time t.
(This is sometimes called the no suicide rule.) Finally, it tries G, which in this case turns out to be legal. After scratching its head for a while and perhaps trying a few more experiments, it settles on the hypothesis that G is OK, even though it is surrounded, because it captures the white stone at D and therefore becomes un-surrounded immediately.
As you can see from the gradual progression of rules, learning takes place by a sequence of modifications to the hypothesis so as to fit the observed examples. This is something a learning algorithm can do easily. Machine learning researchers have designed all sorts of ingenious algorithms for finding good hypotheses quickly. Here the algorithm is searching in the space of logical expressions representing Go rules, but the hypotheses could also be algebraic expressions representing physical laws, probabilistic Bayesian networks representing diseases and symptoms, or even computer programs representing the complicated behavior of some other machine.
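Written as Python predicates over an invented board representation (board[l] is 'black', 'white', or None, and neighbours(l) lists the adjacent locations), the three successive hypotheses look like this; even the last one still needs the capture exception for move G and, as discussed next, the no-repetition rule.

```python
# The successive hypotheses from the example, as predicates over an
# invented board representation.

def hypothesis_1(color, l, board, neighbours):
    return True                                 # any location, any time

def hypothesis_2(color, l, board, neighbours):
    return board[l] is None                     # unoccupied locations only

def hypothesis_3(color, l, board, neighbours):
    opponent = 'white' if color == 'black' else 'black'
    surrounded = all(board[n] == opponent for n in neighbours(l))
    return board[l] is None and not surrounded  # the "no suicide" version
```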
A second important point is that even good hypotheses can be wrong: in fact, the hypothesis given above is wrong, even after fixing it to ensure that G is legal. It needs to include the ko or no-repetition rule—for example, if White had just captured a black stone at G by playing at D, Black may not recapture by playing at G, since that produces the same position again. Notice that this rule is a radical departure from what the program has learned so far, because it means that legality cannot be determined from the current position; instead, one also has to remember previous positions.
The Scottish philosopher David Hume pointed out in 1748 that inductive reasoning—that is, reasoning from particular observations to general principles—can never be guaranteed.¹ In the modern theory of statistical learning, we ask not for guarantees of perfect correctness but only for a guarantee that the hypothesis found is probably approximately correct.² A learning algorithm can be “unlucky” and see an unrepresentative sample—for example, it might never try a move like G, thinking it to be illegal. It can also fail to predict some weird edge cases, such as the ones covered by some of the more complicated and rarely invoked forms of the no-repetition rule.³ But, as long as the universe exhibits some degree of regularity, it’s very unlikely that the algorithm could produce a seriously bad hypothesis, because such a hypothesis would very probably have been “found out” by one of the experiments.
Deep learning—the technology causing all the hullabaloo about AI in the media—is primarily a form of supervised learning. It represents one of the most significant advances in AI in recent decades, so it’s worth understanding how it works. Moreover, some researchers believe it will lead to human-level AI systems within a few years, so it’s a good idea to assess whether that’s likely to be true.
It’s easiest to understand deep learning in the context of a particular task, such as learning to distinguish giraffes and llamas. Given some labeled photographs of each, the learning algorithm has to form a hypothesis that allows it to classify unlabeled images. An image is, from the computer’s point of view, nothing but a large table of numbers, with each number corresponding to one of three RGB values for one pixel of the image. So, instead of a Go hypothesis that takes a board position and a move as input and decides whether the move is legal, we need a giraffe–llama hypothesis that takes a table of numbers as input and predicts a category (giraffe or llama).
Now the question is, what sort of hypothesis? Over the last fifty-odd years of computer vision research, many approaches have been tried. The current favorite is a deep convolutional network. Let me unpack this: It’s called a network because it represents a complex mathematical expression composed in a regular way from many smaller subexpressions, and the compositional structure has the form of a network. (Such networks are often called neural networks because their designers draw inspiration from the networks of neurons in the brain.) It’s called convolutional because that’s a fancy mathematical way to say that the network structure repeats itself in a fixed pattern across the whole input image. And it’s called deep because such networks typically have many layers, and also because it sounds impressive and slightly spooky.
FIGURE 23: (left) A simplified depiction of a deep convolutional network for recognizing objects in images. The image pixel values are fed in at the left and the network outputs values at the two rightmost nodes, indicating how likely the image is to be a llama or a giraffe. Notice how the pattern of local connections, indicated by the dark lines in the first layer, repeats across the whole layer; (right) one of the nodes in the network. There is an adjustable weight on each incoming value so that the node pays more or less attention to it. Then the total incoming signal goes through a gating function that allows large signals through but suppresses small ones.
A simplified example (simplified because real networks may have hundreds of layers and millions of nodes) is shown in figure 23. The network is really a picture of a complex, adjustable mathematical expression. Each node in the network corresponds to a simple adjustable expression, as illustrated in the figure. Adjustments are made by changing the weights on each input, as indicated by the “volume controls.” The weighted sum of the inputs is then passed through a gating function before reaching the output side of the node; typically, the gating function suppresses small values and allows larger ones through.
Learning takes place in the network simply by adjusting all the volume control knobs to reduce the prediction error on the labeled examples. It’s as simple as that: no magic, no especially ingenious algorithms. Working out which way to turn the knobs to decrease the error is a straightforward application of calculus to compute how changing each weight would change the error at the output layer. This leads to a simple formula for propagating the error backwards from the output layer to the input layer, tweaking knobs along the way.
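Here is a toy version of one such knob-turning step, using NumPy: two inputs, two hidden nodes with a ReLU gating function, one output, and the error propagated backwards layer by layer. The data, the starting weights, and the network size are all invented for illustration; real networks differ mainly in scale.

```python
import numpy as np

# The "knobs": small positive starting weights, chosen arbitrarily.
W1 = np.array([[0.1, 0.2],
               [0.3, 0.1]])     # input -> hidden
W2 = np.array([[0.1],
               [0.2]])          # hidden -> output

def train_step(x, target, learning_rate=0.01):
    """One forward pass, then propagate the error backwards and tweak knobs."""
    global W1, W2
    hidden_in = x @ W1
    hidden_out = np.maximum(hidden_in, 0.0)     # gating: suppress negatives
    output = hidden_out @ W2
    error = output - target                     # how wrong we are
    grad_W2 = hidden_out.T @ error              # calculus, layer by layer
    grad_hidden = (error @ W2.T) * (hidden_in > 0)
    grad_W1 = x.T @ grad_hidden
    W2 -= learning_rate * grad_W2               # turn the knobs a little
    W1 -= learning_rate * grad_W1
    return float(np.mean(error ** 2))

x = np.array([[1.0, 2.0]])                      # one made-up training example
target = np.array([[1.0]])
for _ in range(200):
    loss = train_step(x, target)
print(loss)   # the prediction error shrinks as the knobs are adjusted
```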
Miraculously, the process works. For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.⁴ Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters. He and his group at the University of Toronto achieved an ImageNet error rate of 15 percent, a dramatic improvement on the previous best of 26 percent.⁵ By 2015, dozens of teams were using deep learning methods and the error rate was down to 5 percent, comparable to that of a human who had spent weeks learning to recognize the thousand categories in the test.⁶ By 2017, the machine error rate was 2 percent.
Over roughly the same period, there have been comparable improvements in speech recognition and machine translation based on similar methods. Taken together, these are three of the most important application areas for AI. Deep learning has also played an important role in applications of reinforcement learning—for example, in learning the evaluation function that AlphaGo uses to estimate the desirability of possible future positions, and in learning controllers for complex robotic behaviors.
As yet, we have very little understanding as to why deep learning works as well as it does. Possibly the best explanation is that deep networks are deep: because they have many layers, each layer can learn a fairly simple transformation from its inputs to its outputs, while many such simple transformations add up to the complex transformation required to go from a photograph to a category label. In addition, deep networks for vision have built-in structure that enforces translation invariance and scale invariance—meaning that a dog is a dog no matter where it appears in the image and no matter how big it appears in the image.
Another important property of deep networks is that they often seem to discover internal representations that capture elementary features of images, such as eyes, stripes, and simple shapes. None of these features are built in. We know they are there because we can experiment with the trained network and see what kinds of data cause the internal nodes (typically those close to the output layer) to light up. In fact, it is possible to run the learning algorithm a different way so that it adjusts the image itself to produce a stronger response at chosen internal nodes. Repeating this process many times produces what are now known as deep dreaming or inceptionism images, such as the one in figure 24.⁷ Inceptionism has become an art form in itself, producing images unlike any human art.
For all their remarkable achievements, deep learning systems as we currently understand them are far from providing a basis for generally intelligent systems. Their principal weakness is that they are circuits; they are cousins of propositional logic and Bayesian networks, which, for all their wonderful properties, also lack the ability to express complex forms of knowledge in a concise way. This means that deep networks operating in “native mode” require vast amounts of circuitry to represent fairly simple kinds of general knowledge. That, in turn, implies vast numbers of weights to learn and hence a need for unreasonable numbers of examples—more than the universe could ever supply.
FIGURE 24: An image generated by Google’s DeepDream software.
Some argue that the brain is also made of circuits, with neurons as the circuit elements; therefore, circuits can support human-level intelligence. This is true, but only in the same sense that brains are made of atoms: atoms can indeed support human-level intelligence, but that doesn’t mean that just collecting together lots of atoms will produce intelligence. The atoms have to be arranged in certain ways. By the same token, the circuits have to be arranged in certain ways. Computers are also made of circuits, both in their memories and in their processing units; but those circuits have to be arranged in certain ways, and layers of software have to be added, before the computer can support the operation of high-level programming languages and logical reasoning systems. At present, however, there is no sign that deep learning systems can develop such capabilities by themselves—nor does it make scientific sense to require them to do so.
There are further reasons to think that deep learning may reach a plateau well short of general intelligence, but it’s not my purpose here to diagnose all the problems: others, both inside⁸ and outside⁹ the deep learning community, have noted many of them. The point is that simply creating larger and deeper networks and larger data sets and bigger machines is not enough to create human-level AI. We have already seen (in Appendix B) DeepMind CEO Demis Hassabis’s view that “higher-level thinking and symbolic reasoning” are essential for AI. Another prominent deep learning expert, François Chollet, put it this way:¹⁰ “Many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. . . . We need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction.”
Learning from thinking
Whenever you find yourself having to think about something, it’s because you don’t already know the answer. When someone asks for the number of your brand-new cell phone, you probably don’t know it. You think to yourself, “OK, I don’t know it; so how do I find it?” Not being a slave to the cell phone, you don’t know how to find it. You think to yourself, “How do I figure out how to find it?” You have a generic answer to this: “Probably they put it somewhere that’s easy for users to find.” (Of course, you could be wrong about this.) Obvious places would be at the top of the home screen (not there), inside the Phone app, or in Settings for that app. You try Settings>Phone, and there it is.
The next time you are asked for your number, you either know it or you know exactly how to get it. You remember the procedure, not just for this phone on this occasion but for all similar phones on all occasions—that is, you store and reuse a generalized solution to the problem. The generalization is justified because you understand that the specifics of this particular phone and this particular occasion are irrelevant. You would be shocked if the method worked only on Tuesdays for phone numbers ending in 17.
FIGURE 25: The concept of a ladder in Go. (a) Black threatens to capture White’s piece. (b) White tries to escape. (c) Black blocks that direction of escape. (d) White tries the other direction. (e) Play continues in the sequence indicated by the numbers. The ladder eventually reaches the edge of the board, where White has nowhere to run. The coup de grâce is administered by move 7: White’s group is completely surrounded and dies.
Go offers a beautiful example of the same kind of learning. In figure 25(a), we see a common situation where Black threatens to capture White’s stone by surrounding it. White attempts to escape by adding stones connected to the original one, but Black continues to cut off the routes of escape. This pattern of moves forms a ladder of stones diagonally across the board, until it runs into the edge; then White has nowhere to go. If you are White, you probably won’t make the same mistake again: you realize that the ladder pattern always results in eventual capture, for any initial location and any direction, at any stage of the game, whether you are playing White or Black. The only exception occurs when the ladder runs into some additional stones belonging to the escapee. The generality of the ladder pattern follows straightforwardly from the rules of Go.
The case of the missing phone number and the case of the Go ladder illustrate the possibility of learning effective, general rules from a single example—a far cry from the millions of examples needed for deep learning. In AI, this kind of learning is called explanation-based learning: on seeing the example, the agent can explain to itself why it came out that way and can extract the general principle by seeing what factors were essential for the explanation.
Strictly speaking, the process does not, by itself, add new knowledge—for example, White could have simply derived the existence and outcome of the general ladder pattern from the rules of Go, without ever seeing an example.¹¹ Chances are, however, that White wouldn’t ever discover the ladder concept without seeing an example of it; so, we can understand explanation-based learning as a powerful method for saving the results of computation in a generalized way, so as to avoid having to recapitulate the same reasoning process (or making the same mistake with an imperfect reasoning process) in the future.
Research in cognitive science has stressed the importance of this type of learning in human cognition. Under the name of chunking, it forms a central pillar of Allen Newell’s highly influential theory of cognition.¹² (Newell was one of the attendees of the 1956 Dartmouth workshop and co-winner of the 1975 Turing Award with Herb Simon.) It explains how humans become more fluent at cognitive tasks with practice, as various subtasks that originally required thinking become automatic. Without it, human conversations would be limited to one- or two-word responses and mathematicians would still be counting on their fingers.
Acknowledgments
Many people have helped in the creation of this book. They include my excellent editors at Viking (Paul Slovak) and Penguin (Laura Stickney); my agent, John Brockman, who encouraged me to write something; Jill Leovy and Rob Reid, who provided reams of useful feedback; and other readers of early drafts, especially Ziyad Marar, Nick Hay, Toby Ord, David Duvenaud, Max Tegmark, and Grace Cassy. Caroline Jeanmaire was immensely helpful in collating the innumerable suggestions for improvements made by the early readers, and Martin Fukui handled the collecting of permissions for images.
The main technical ideas in the book have been developed in collaboration with the members of the Center for Human-Compatible AI at Berkeley, especially Tom Griffiths, Anca Dragan, Andrew Critch, Dylan Hadfield-Menell, Rohin Shah, and Smitha Milli. The Center has been admirably piloted by executive director Mark Nitzberg and assistant director Rosie Campbell, and generously funded by the Open Philanthropy Foundation.
Ramona Alvarez and Carine Verdeau helped to keep things running throughout the process, and my incredible wife, Loy, and our children—Gordon, Lucy, George, and Isaac—supplied copious and necessary amounts of love, forbearance, and encouragement to finish, not always in that order.
Notes
CHAPTER 1
1. The first edition of my textbook on AI, co-authored with Peter Norvig, currently director of research at Google: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 1st ed. (Prentice Hall, 1995).
2. Robinson developed the resolution algorithm, which can, given enough time, prove any logical consequence of a set of first-order logical assertions. Unlike previous algorithms, it did not require conversion to propositional logic. J. Alan Robinson, “A machine-oriented logic based on the resolution principle,” Journal of the ACM 12 (1965): 23–41.
3. Arthur Samuel, an American pioneer of the computer era, did his early work at IBM. The paper describing his work on checkers was the first to use the term machine learning, although Alan Turing had already talked about “a machine that can learn from experience” as early as 1947. Arthur Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development 3 (1959): 210–29.
4. The “Lighthill Report,” as it became known, led to the termination of research funding for AI except at the universities of Edinburgh and Sussex: Michael James Lighthill, “Artificial intelligence: A general survey,” in Artificial Intelligence: A Paper Symposium (Science Research Council of Great Britain, 1973).
5. The CDC 6600 filled an entire room and cost the equivalent of $20 million. For its era it was incredibly powerful, albeit a million times less powerful than an iPhone.
6. Following Deep Blue’s victory over Kasparov, at least one commentator predicted that it would take one hundred years before the same thing happened in Go: George Johnson, “To test a powerful computer, play an ancient game,” The New York Times, July 29, 1997.
7. For a highly readable history of the development of nuclear technology, see Richard Rhodes, The Making of the Atomic Bomb (Simon & Schuster, 1987).
8. A simple supervised learning algorithm may not have this effect, unless it is wrapped within an A/B testing framework (as is common in online marketing settings). Bandit algorithms and reinforcement learning algorithms will have this effect if they operate with an explicit representation of user state or an implicit representation in terms of the history of interactions with the user.
9. Some have argued that profit-maximizing corporations are already out-of-control artificial entities. See, for example, Charles Stross, “Dude, you broke the future!” (keynote, 34th Chaos Communications Congress, 2017). See also Ted Chiang, “Silicon Valley is turning into its own worst fear,” Buzzfeed, December 18, 2017. The idea is explored further by Daniel Hillis, “The first machine intelligences,” in Possible Minds: Twenty-Five Ways of Looking at AI, ed. John Brockman (Penguin Press, 2019).
10. For its time, Wiener’s paper was a rare exception to the prevailing view that all technological progress was a good thing: Norbert Wiener, “Some moral and technical consequences of automation,” Science 131 (1960): 1355–58.
CHAPTER 2
1. Santiago Ramón y Cajal proposed synaptic changes as the site of learning in 1894, but it was not until the late 1960s that this hypothesis was confirmed experimentally. See Timothy Bliss and Terje Lomo, “Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path,” Journal of Physiology 232 (1973): 331–56.
2. For a brief introduction, see James Gorman, “Learning how little we know about the brain,” The New York Times, November 10, 2014. See also Tom Siegfried, “There’s a long way to go in understanding the brain,” ScienceNews, July 25, 2017. A special 2017 issue of the journal Neuron (vol. 94, pp. 933–1040) provides a good overview of many different approaches to understanding the brain.
3. The presence or absence of consciousness—actual subjective experience—certainly makes a difference in our moral consideration for machines. If ever we gain enough understanding to design conscious machines or to detect that we have done so, we would face many important moral issues for which we are largely unprepared.
4. The following paper was among the first to make a clear connection between reinforcement learning algorithms and neurophysiological recordings: Wolfram Schultz, Peter Dayan, and P. Read Montague, “A neural substrate of prediction and reward,” Science 275 (1997): 1593–99.
5. Studies of intracranial stimulation were carried out with the hope of finding cures for various mental illnesses. See, for example, Robert Heath, “Electrical self-stimulation of the brain in man,” American Journal of Psychiatry 120 (1963): 571–77.
6. An example of a species that may be facing self-extinction via addiction: Bryson Voirin, “Biology and conservation of the pygmy sloth, Bradypus pygmaeus,” Journal of Mammalogy 96 (2015): 703–7.
7. The Baldwin effect in evolution is usually attributed to the following paper: James Baldwin, “A new factor in evolution,” American Naturalist 30 (1896): 441–51.
8. The core idea of the Baldwin effect also appears in the following work: Conwy Lloyd Morgan, Habit and Instinct (Edward Arnold, 1896).
9. A modern analysis and computer implementation demonstrating the Baldwin effect: Geoffrey Hinton and Steven Nowlan, “How learning can guide evolution,” Complex Systems 1 (1987): 495–502.
10. Further elucidation of the Baldwin effect by a computer model that includes the evolution of the internal reward-signaling circuitry: David Ackley and Michael Littman, “Interactions between learning and evolution,” in Artificial Life II, ed. Christopher Langton et al. (Addison-Wesley, 1991).
11. Here I am pointing to the roots of our present-day concept of intelligence, rather than describing the ancient Greek concept of nous, which had a variety of related meanings.
12. The quotation is taken from Aristotle, Nicomachean Ethics, Book III, 3, 1112b.
13. Cardano, one of the first European mathematicians to consider negative numbers, developed an early mathematical treatment of probability in games. He died in 1576, eighty-seven years before his work appeared in print: Gerolamo Cardano, Liber de ludo aleae (Lyons, 1663).
14. Arnauld’s work, initially published anonymously, is often called The Port-Royal Logic: Antoine Arnauld, La logique, ou l’art de penser (Chez Charles Savreux, 1662). See also Blaise Pascal, Pensées (Chez Guillaume Desprez, 1670).
15. The concept of utility: Daniel Bernoulli, “Specimen theoriae novae de mensura sortis,” Proceedings of the St. Petersburg Imperial Academy of Sciences 5 (1738): 175–92. Bernoulli’s idea of utility arises from considering a merchant, Sempronius, choosing whether to transport a valuable cargo in one ship or to split it between two, assuming that each ship has a 50 percent probability of sinking on the journey. The expected monetary value of the two solutions is the same, but Sempronius clearly prefers the two-ship solution.
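As an illustration of Bernoulli's point (not part of the original note), the following sketch assumes a concave utility function, here the square root of monetary value, and shows that the two plans have the same expected monetary value while the two-ship plan has the higher expected utility:

```python
from math import sqrt

# Illustrative sketch: the square-root utility is an assumption chosen for
# simplicity; any concave (risk-averse) utility gives the same qualitative result.
cargo = 100.0  # arbitrary cargo value

# One ship: 50% chance the cargo arrives, 50% chance it is lost.
ev_one = 0.5 * cargo
eu_one = 0.5 * sqrt(cargo)

# Two ships: 25% both sink, 50% exactly one arrives, 25% both arrive.
ev_two = 0.5 * cargo
eu_two = 0.5 * sqrt(cargo / 2) + 0.25 * sqrt(cargo)

print(ev_one, ev_two)  # 50.0 50.0  (same expected monetary value)
print(eu_one, eu_two)  # 5.0  ~6.04 (Sempronius prefers the two-ship plan)
```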
16. By most accounts, von Neumann did not himself invent this architecture but his name was on an early draft of an influential report describing the EDVAC stored-program computer.
17. The work of von Neumann and Morgenstern is in many ways the foundation of modern economic theory: John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior (Princeton University Press, 1944).
18. The proposal that utility is a sum of discounted rewards was put forward as a mathematically convenient hypothesis by Paul Samuelson, “A note on measurement of utility,” Review of Economic Studies 4 (1937): 155–61. If s₀, s₁, . . . is a sequence of states, then its utility in this model is U(s₀, s₁, . . .) = ∑ₜ γᵗ R(sₜ), where γ is a discount factor and R is a reward function describing the desirability of a state. Naïve application of this model seldom agrees with the judgment of real individuals about the desirability of present and future rewards. For a thorough analysis, see Shane Frederick, George Loewenstein, and Ted O’Donoghue, “Time discounting and time preference: A critical review,” Journal of Economic Literature 40 (2002): 351–401.
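A minimal sketch of the discounted-sum model in the formula above (the reward values and discount factor are arbitrary, chosen only for illustration):

```python
def discounted_utility(rewards, gamma=0.9):
    """Sum of gamma^t * R(s_t) over a sequence of per-state rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

print(discounted_utility([1, 1, 1, 1]))  # 1 + 0.9 + 0.81 + 0.729 = 3.439
```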
19. Maurice Allais, a French economist, proposed a decision scenario in which humans appear consistently to violate the von Neumann–Morgenstern axioms: Maurice Allais, “Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine,” Econometrica 21 (1953): 503–46.
20. For an introduction to non-quantitative decision analysis, see Michael Wellman, “Fundamental concepts of qualitative probabilistic networks,” Artificial Intelligence 44 (1990): 257–303.
21. I will discuss the evidence for human irrationality further in Chapter 9. The standard references include the following: Allais, “Le comportement”; Daniel Ellsberg, Risk, Ambiguity, and Decision (PhD thesis, Harvard University, 1962); Amos Tversky and Daniel Kahneman, “Judgment under uncertainty: Heuristics and biases,” Science 185 (1974): 1124–31.
22. It should be clear that this is a thought experiment that cannot be realized in practice. Choices about different futures are never presented in full detail, and humans never have the luxury of minutely examining and savoring those futures before choosing. Instead, one is given only brief summaries, such as “librarian” or “coal miner.” In making such a choice, one is really being asked to compare two probability distributions over complete futures, one beginning with the choice “librarian” and the other “coal miner,” with each distribution assuming optimal actions on one’s own part within each future. Needless to say, this is not easy.
23. The first mention of a randomized strategy for games appears in Pierre Rémond de Montmort, Essay d’analyse sur les jeux de hazard, 2nd ed. (Chez Jacques Quillau, 1713). The book identifies a certain Monsieur de Waldegrave as the source of an optimal randomized solution for the card game Le Her. Details of Waldegrave’s identity are revealed by David Bellhouse, “The problem of Waldegrave,” Electronic Journal for History of Probability and Statistics 3 (2007).
24. The problem is fully defined by specifying the probability that Alice scores in each of four cases: when she shoots to Bob’s right and he dives right or left, and when she shoots to his left and he dives right or left. In this case, these probabilities are 25 percent, 70 percent, 65 percent, and 10 percent respectively. Now suppose that Alice’s strategy is to shoot to Bob’s right with probability p and his left with probability 1 − p, while Bob dives to his right with probability q and left with probability 1 − q. The payoff to Alice is U_A = 0.25pq + 0.70p(1 − q) + 0.65(1 − p)q + 0.10(1 − p)(1 − q), while Bob’s payoff is U_B = −U_A. At equilibrium, ∂U_A/∂p = 0 and ∂U_B/∂q = 0, giving p = 0.55 and q = 0.60.
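A quick numerical check of the equilibrium values quoted above (a sketch, using only the payoff numbers given in this note): at p = 0.55 Alice's scoring probability is the same whichever way Bob dives, and at q = 0.60 it is the same whichever way Alice shoots, so neither player can gain by deviating.

```python
def u_alice(p, q):
    # Alice's scoring probability, given her shot mix p and Bob's dive mix q.
    return (0.25 * p * q + 0.70 * p * (1 - q)
            + 0.65 * (1 - p) * q + 0.10 * (1 - p) * (1 - q))

print(u_alice(0.55, 0.0), u_alice(0.55, 1.0))  # both 0.43: Bob is indifferent
print(u_alice(0.0, 0.60), u_alice(1.0, 0.60))  # both 0.43: Alice is indifferent
```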
25. The original game-theoretic problem was introduced by Merrill Flood and Melvin Dresher at the RAND Corporation; Tucker saw the payoff matrix on a visit to their offices and proposed a “story” to go along with it.
26. Game theorists typically say that Alice and Bob could cooperate with each other (refuse to talk) or defect and rat on their accomplice. I find this language confusing, because “cooperate with each other” is not a choice that each agent can make separately, and because in common parlance one often talks about cooperating with the police, receiving a lighter sentence in return for cooperating, and so on.
27. For an interesting trust-based solution to the prisoner’s dilemma and other games, see Joshua Letchford, Vincent Conitzer, and Kamal Jain, “An ‘ethical’ game-theoretic solution concept for two-player perfect-information games,” in Proceedings of the 4th International Workshop on Web and Internet Economics, ed. Christos Papadimitriou and Shuzhong Zhang (Springer, 2008).
28. Origin of the tragedy of the commons: William Forster Lloyd, Two Lectures on the Checks to Population (Oxford University, 1833).
29. Modern revival of the topic in the context of global ecology: Garrett Hardin, “The tragedy of the commons,” Science 162 (1968): 1243–48.
30. It’s quite possible that even if we had tried to build intelligent machines from chemical reactions or biological cells, those assemblages would have turned out to be implementations of Turing machines in nontraditional materials. Whether an object is a general-purpose computer has nothing to do with what it’s made of.
31. Turing’s breakthrough paper defined what is now known as the Turing machine, the basis for modern computer science. The Entscheidungsproblem, or decision problem, in the title is the problem of deciding entailment in first-order logic: Alan Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd ser., 42 (1936): 230–65.
32. A good survey of research on negative capacitance by one of its inventors: Sayeef Salahuddin, “Review of negative capacitance transistors,” in International Symposium on VLSI Technology, Systems and Application (IEEE Press, 2016).
33. For a much better explanation of quantum computation, see Scott Aaronson, Quantum Computing since Democritus (Cambridge University Press, 2013).
34. The paper that established a clear complexity-theoretic distinction between classical and quantum computation: Ethan Bernstein and Umesh Vazirani, “Quantum complexity theory,” SIAM Journal on Computing 26 (1997): 1411–73.
35. The following article by a renowned physicist provides a good introduction to the current state of understanding and technology: John Preskill, “Quantum computing in the NISQ era and beyond,” arXiv:1801.00862 (2018).
36. On the maximum computational ability of a one-kilogram object: Seth Lloyd, “Ultimate physical limits to computation,” Nature 406 (2000): 1047–54.
37. For an example of the suggestion that humans may be the pinnacle of physically achievable intelligence, see Kevin Kelly, “The myth of a superhuman AI,” Wired, April 25, 2017: “We tend to believe that the limit is way beyond us, way ‘above’ us, as we are ‘above’ an ant. . . . What evidence do we have that the limit is not us?”
38. In case you are wondering about a simple trick to solve the halting problem: the obvious method of just running the program to see if it finishes doesn’t work, because that method doesn’t necessarily finish. You might wait a million years and still not know if the program is really stuck in an infinite loop or just taking its time.
39. The proof that the halting problem is undecidable is an elegant piece of trickery. The question: Is there a LoopChecker(P,X) program that, for any program P and any input X, decides correctly, in finite time, whether P applied to input X will halt and produce a result or keep chugging away forever? Suppose that LoopChecker exists. Now write a program Q that calls LoopChecker as a subroutine, with Q itself and X as inputs, and then does the opposite of what LoopChecker(Q,X) predicts. So, if LoopChecker says that Q halts, Q doesn’t halt, and vice versa. Thus, the assumption that LoopChecker exists leads to a contradiction, so LoopChecker cannot exist.
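The trick can be written out as a few lines of code (a sketch only; loop_checker is the hypothetical program assumed to exist, and q is the program constructed to contradict it):

```python
def q(x, loop_checker):
    # loop_checker(P, X) is assumed to decide, in finite time, whether P halts on X.
    if loop_checker(q, x):
        while True:   # loop_checker says q halts on x, so q loops forever instead
            pass
    else:
        return 0      # loop_checker says q runs forever, so q halts immediately

# Whatever loop_checker(q, x) answers, q does the opposite on input x, so no
# correct, always-terminating loop_checker can exist.
```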
40. I say “appear” because, as yet, the claim that the class of NP-complete problems requires superpolynomial time (usually referred to as P ≠ NP) is still an unproven conjecture. After almost fifty years of research, however, nearly all mathematicians and computer scientists are convinced the claim is true.
41. Lovelace’s writings on computation appear mainly in her notes attached to her translation of an Italian engineer’s commentary on Babbage’s engine: L. F. Menabrea, “Sketch of the Analytical Engine invented by Charles Babbage,” trans. Ada, Countess of Lovelace, in Scientific Memoirs, vol. III, ed. R. Taylor (R. and J. E. Taylor, 1843). Menabrea’s original article, written in French and based on lectures given by Babbage in 1840, appears in Bibliothèque Universelle de Genève 82 (1842).
42. One of the seminal early papers on the possibility of artificial intelligence: Alan Turing, “Computing machinery and intelligence,” Mind 59 (1950): 433–60.
43. The Shakey project at SRI is summarized in a retrospective by one of its leaders: Nils Nilsson, “Shakey the robot,” technical note 323 (SRI International, 1984). A twenty-four-minute film, SHAKEY: Experimentation in Robot Learning and Planning, was made in 1969 and garnered national attention.
44. The book that marked the beginning of modern, probability-based AI: Judea Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann, 1988).
45. Technically, chess is not fully observable. A program does need to remember a small amount of information to determine the legality of castling and en passant moves and to define draws by repetition or by the fifty-move rule.
46. For a complete exposition, see Chapter 2 of Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Pearson, 2010).
47. The size of the state space for StarCraft is discussed by Santiago Ontañon et al., “A survey of real-time strategy game AI research and competition in StarCraft,” IEEE Transactions on Computational Intelligence and AI in Games 5 (2013): 293–311. Vast numbers of moves are possible because a player can move all units simultaneously. The numbers go down as restrictions are imposed on how many units or groups of units can be moved at once.
48. On human–machine competition in StarCraft: Tom Simonite, “DeepMind beats pros at StarCraft in another triumph for bots,” Wired, January 25, 2019.
49. AlphaZero is described by David Silver et al., “Mastering chess and shogi by self-play with a general reinforcement learning algorithm,” arXiv:1712.01815 (2017).
50. Optimal paths in graphs are found using the A\* algorithm and its many descendants: Peter Hart, Nils Nilsson, and Bertram Raphael, “A formal basis for the heuristic determination of minimum cost paths,” IEEE Transactions on Systems Science and Cybernetics SSC-4 (1968): 100–107.
51. The paper that introduced the Advice Taker program and logic-based knowledge systems: John McCarthy, “Programs with common sense,” in Proceedings of the Symposium on Mechanisation of Thought Processes (Her Majesty’s Stationery Office, 1958).
52. To get some sense of the significance of knowledge-based systems, consider database systems. A database contains concrete, individual facts, such as the location of my keys and the identities of your Facebook friends. Database systems cannot store general rules, such as the rules of chess or the legal definition of British citizenship. They can count how many people called Alice have friends called Bob, but they cannot determine whether a particular Alice meets the conditions for British citizenship or whether a particular sequence of moves on a chessboard will lead to checkmate. Database systems cannot combine two pieces of knowledge to produce a third: they support memory but not reasoning. (It is true that many modern database systems provide a way to add rules and a way to use those rules to derive new facts; to the extent that they do, they are really knowledge-based systems.) Despite being highly constricted versions of knowledge-based systems, database systems underlie most of present-day commercial activity and generate hundreds of billions of dollars in value every year.
53. The original paper describing the completeness theorem for first-order logic: Kurt Gödel, “Die Vollständigkeit der Axiome des logischen Funktionenkalküls,” Monatshefte für Mathematik 37 (1930): 349–60.
54. The reasoning algorithm for first-order logic does have a gap: if there is no answer—that is, if the available knowledge is insufficient to give an answer either way—then the algorithm may never finish. This is unavoidable: it is mathematically impossible for a correct algorithm always to terminate with “don’t know,” for essentially the same reason that no algorithm can solve the halting problem (this page).
55. The first algorithm for theorem-proving in first-order logic worked by reducing first-order sentences to (very large numbers of) propositional sentences: Martin Davis and Hilary Putnam, “A computing procedure for quantification theory,” Journal of the ACM 7 (1960): 201–15. Robinson’s resolution algorithm operated directly on first-order logical sentences, using “unification” to match complex expressions containing logical variables: J. Alan Robinson, “A machine-oriented logic based on the resolution principle,” Journal of the ACM 12 (1965): 23–41.
56. One might wonder how Shakey the logical robot ever reached any definite conclusions about what to do. The answer is simple: Shakey’s knowledge base contained false assertions. For example, Shakey believed that by executing “push object A through door D into room B,” object A would end up in room B. This belief was false because Shakey could get stuck in the doorway or miss the doorway altogether or someone might sneakily remove object A from Shakey’s grasp. Shakey’s plan execution module could detect plan failure and replan accordingly, so Shakey was not, strictly speaking, a purely logical system.
57. An early commentary on the role of probability in human thinking: Pierre-Simon Laplace, Essai philosophique sur les probabilités (Mme. Ve. Courcier, 1814).
58. Bayesian logic described in a fairly nontechnical way: Stuart Russell, “Unifying logic and probability,” Communications of the ACM 58 (2015): 88–97. The paper draws heavily on the PhD thesis research of my former student Brian Milch.
59. The original source for Bayes’ theorem: Thomas Bayes and Richard Price, “An essay towards solving a problem in the doctrine of chances,” Philosophical Transactions of the Royal Society of London 53 (1763): 370–418.
60. Technically, Samuel’s program did not treat winning and losing as absolute rewards; however, by fixing the value of material to be positive, the program generally tended to work towards winning.
61. The application of reinforcement learning to produce a world-class backgammon program: Gerald Tesauro, “Temporal difference learning and TD-Gammon,” Communications of the ACM 38 (1995): 58–68.
62. The DQN system that learns to play a wide variety of video games using deep RL: Volodymyr Mnih et al., “Human-level control through deep reinforcement learning,” Nature 518 (2015): 529–33.
63. Bill Gates’s remarks on Dota 2 AI: Catherine Clifford, “Bill Gates says gamer bots from Elon Musk-backed nonprofit are ‘huge milestone’ in A.I.,” CNBC, June 28, 2018.
64. An account of OpenAI Five’s victory over the human world champions at Dota 2: Kelsey Piper, “AI triumphs against the world’s top pro team in strategy game Dota 2,” Vox, April 13, 2019.
65. A compendium of cases in the literature where misspecification of reward functions led to unexpected behavior: Victoria Krakovna, “Specification gaming examples in AI,” Deep Safety (blog), April 2, 2018.
66. A case where an evolutionary fitness function defined in terms of maximum velocity led to very unexpected results: Karl Sims, “Evolving virtual creatures,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1994).
67. For a fascinating exposition of the possibilities of reflex agents, see Valentino Braitenberg, Vehicles: Experiments in Synthetic Psychology (MIT Press, 1984).
68. News article on a fatal accident involving a vehicle in autonomous mode that hit a pedestrian: Devin Coldewey, “Uber in fatal crash detected pedestrian but had emergency braking disabled,” TechCrunch, May 24, 2018.
69. On steering control algorithms, see, for example, Jarrod Snider, “Automatic steering methods for autonomous automobile path tracking,” technical report CMU-RI-TR-09-08, Robotics Institute, Carnegie Mellon University, 2009.
70. Norfolk and Norwich terriers are two categories in the ImageNet database. They are notoriously hard to tell apart and were viewed as a single breed until 1964.
71. A very unfortunate incident with image labeling: Daniel Howley, “Google Photos mislabels 2 black Americans as gorillas,” Yahoo Tech, June 29, 2015.
72. Follow-up article on Google and gorillas: Tom Simonite, “When it comes to gorillas, Google Photos remains blind,” Wired, January 11, 2018.
CHAPTER 3
1. The basic plan for game-playing algorithms was laid out by Claude Shannon, “Programming a computer for playing chess,” Philosophical Magazine, 7th ser., 41 (1950): 256–75.
2. See figure 5.12 of Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 1st ed. (Prentice Hall, 1995). Note that the rating of chess players and chess programs is not an exact science. Kasparov’s highest-ever Elo rating was 2851, achieved in 1999, but current chess engines such as Stockfish are rated at 3300 or more.
3. The earliest reported autonomous vehicle on a public road: Ernst Dickmanns and Alfred Zapp, “Autonomous high speed road vehicle guidance by computer vision,” IFAC Proceedings Volumes 20 (1987): 221–26.
4. The safety record for Google (subsequently Waymo) vehicles: “Waymo safety report: On the road to fully self-driving,” 2018.
5. So far there have been at least two driver fatalities and one pedestrian fatality. Some references follow, along with brief quotes describing what happened. Danny Yadron and Dan Tynan, “Tesla driver dies in first fatal crash while using autopilot mode,” Guardian, June 30, 2016: “The autopilot sensors on the Model S failed to distinguish a white tractor-trailer crossing the highway against a bright sky.” Megan Rose Dickey, “Tesla Model X sped up in Autopilot mode seconds before fatal crash, according to NTSB,” TechCrunch, June 7, 2018: “At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.” Devin Coldewey, “Uber in fatal crash detected pedestrian but had emergency braking disabled,” TechCrunch, May 24, 2018: “Emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior.”
6. The Society of Automotive Engineers (SAE) defines six levels of automation, where Level 0 is none at all and Level 5 is full automation: “The full-time performance by an automatic driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.”
7. Forecast of economic effects of automation on transportation costs: Adele Peters, “It could be 10 times cheaper to take electric robo-taxis than to own a car by 2030,” Fast Company, May 30, 2017.
8. The impact of accidents on the prospects for regulatory action on autonomous vehicles: Richard Waters, “Self-driving car death poses dilemma for regulators,” Financial Times, March 20, 2018.
9. The impact of accidents on public perception of autonomous vehicles: Cox Automotive, “Autonomous vehicle awareness rising, acceptance declining, according to Cox Automotive mobility study,” August 16, 2018.
10. The original chatbot: Joseph Weizenbaum, “ELIZA—a computer program for the study of natural language communication between man and machine,” Communications of the ACM 9 (1966): 36–45.
11. See physiome.org for current activities in physiological modeling. Work in the 1960s assembled models with thousands of differential equations: Arthur Guyton, Thomas Coleman, and Harris Granger, “Circulation: Overall regulation,” Annual Review of Physiology 34 (1972): 13–44.
12. Some of the earliest work on tutoring systems was done by Pat Suppes and colleagues at Stanford: Patrick Suppes and Mona Morningstar, “Computer-assisted instruction,” Science 166 (1969): 343–50.
13. Michael Yudelson, Kenneth Koedinger, and Geoffrey Gordon, “Individualized Bayesian knowledge tracing models,” in Artificial Intelligence in Education: 16th International Conference, ed. H. Chad Lane et al. (Springer, 2013).
14. For an example of machine learning on encrypted data, see Reza Shokri and Vitaly Shmatikov, “Privacy-preserving deep learning,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (ACM, 2015).
15. A retrospective on the first smart home, based on a lecture by its inventor, James Sutherland: James E. Tomayko, “Electronic Computer for Home Operation (ECHO): The first home computer,” IEEE Annals of the History of Computing 16 (1994): 59–61.
16. Summary of a smart-home project based on machine learning and automated decisions: Diane Cook et al., “MavHome: An agent-based smart home,” in Proceedings of the 1st IEEE International Conference on Pervasive Computing and Communications (IEEE, 2003).
17. For the beginnings of an analysis of user experiences in smart homes, see Scott Davidoff et al., “Principles of smart home control,” in Ubicomp 2006: Ubiquitous Computing, ed. Paul Dourish and Adrian Friday (Springer, 2006).
18. Commercial announcement of AI-based smart homes: “The Wolff Company unveils revolutionary smart home technology at new Annadel Apartments in Santa Rosa, California,” Business Insider, March 12, 2018.
19. Article on robot chefs as commercial products: Eustacia Huen, “The world’s first home robotic chef can cook over 100 meals,” Forbes, October 31, 2016.
20. Report from my Berkeley colleagues on deep RL for robotic motor control: Sergey Levine et al., “End-to-end training of deep visuomotor policies,” Journal of Machine Learning Research 17 (2016): 1–40.
21. On the possibilities for automating the work of hundreds of thousands of warehouse workers: Tom Simonite, “Grasping robots compete to rule Amazon’s warehouses,” Wired, July 26, 2017.
22. I’m assuming a generous one laptop-CPU minute per page, or about 10¹¹ operations. A third-generation tensor processing unit from Google runs at about 10¹⁷ operations per second, meaning that it can read a million pages per second, or about five hours for eighty million two-hundred-page books.
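The arithmetic behind the five-hour figure, spelled out (an illustrative back-of-the-envelope check, not part of the original note):

```python
ops_per_page  = 1e11         # one generous laptop-CPU minute per page
pages         = 80e6 * 200   # eighty million two-hundred-page books
tpu_ops_per_s = 1e17         # third-generation tensor processing unit

seconds = pages * ops_per_page / tpu_ops_per_s
print(seconds / 3600)        # about 4.4 hours, i.e., roughly five hours
```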
23. A 2003 study on the global volume of information production by all channels: Peter Lyman and Hal Varian, “How much information?” sims.berkeley.edu/research/projects/how-much-info-2003.
24. For details on the use of speech recognition by intelligence agencies, see Dan Froomkin, “How the NSA converts spoken words into searchable text,” The Intercept, May 5, 2015.
25. Analysis of visual imagery from satellites is an enormous task: Mike Kim, “Mapping poverty from space with the World Bank,” Medium.com, January 4, 2017. Kim estimates eight million people working 24/7, which converts to more than thirty million people working forty hours per week. I suspect this is an overestimate in practice, because the vast majority of the images would exhibit negligible change over the course of one day. On the other hand, the US intelligence community employs tens of thousands of people sitting in vast rooms staring at satellite images just to keep track of what’s happening in small regions of interest; so one million people is probably about right for the whole world.
26. There is substantial progress towards a global observatory based on real-time satellite image data: David Jensen and Jillian Campbell, “Digital earth: Building, financing and governing a digital ecosystem for planetary data,” white paper for the UN Science-Policy-Business Forum on the Environment, 2018.
27. Luke Muehlhauser has written extensively on AI predictions, and I am indebted to him for tracking down original sources for the quotations that follow. See Luke Muehlhauser, “What should we learn from past AI forecasts?” Open Philanthropy Project report, 2016.
28. A forecast of the arrival of human-level AI within twenty years: Herbert Simon, The New Science of Management Decision (Harper & Row, 1960).
29. A forecast of the arrival of human-level AI within a generation: Marvin Minsky, Computation: Finite and Infinite Machines (Prentice Hall, 1967).
30. John McCarthy’s forecast of the arrival of human-level AI within “five to 500 years”: Ian Shenker, “Brainy robots in our future, experts think,” Detroit Free Press, September 30, 1977.
31. For a summary of surveys of AI researchers on their estimates for the arrival of human-level AI, see aiimpacts.org. An extended discussion of survey results on human-level AI is given by Katja Grace et al., “When will AI exceed human performance? Evidence from AI experts,” arXiv:1705.08807v3 (2018).
32. For a chart mapping raw computer power against brain power, see Ray Kurzweil, “The law of accelerating returns,” Kurzweilai.net, March 7, 2001.
33. The Allen Institute’s Project Aristo: allenai.org/aristo.
34. For an analysis of the knowledge required to perform well on fourth-grade tests of comprehension and common sense, see Peter Clark et al., “Automatic construction of inference-supporting knowledge bases,” in Proceedings of the Workshop on Automated Knowledge Base Construction (2014), akbc.ws/2014.
35. The NELL project on machine reading is described by Tom Mitchell et al., “Never-ending learning,” Communications of the ACM 61 (2018): 103–15.
36. The idea of bootstrapping inferences from text is due to Sergey Brin, “Extracting patterns and relations from the World Wide Web,” in The World Wide Web and Databases, ed. Paolo Atzeni, Alberto Mendelzon, and Giansalvatore Mecca (Springer, 1998).
37. For a visualization of the black-hole collision detected by LIGO, see LIGO Lab Caltech, “Warped space and time around colliding black holes,” February 11, 2016, youtube.com/watch?v=1agm33iEAuo.
38. The first publication describing observation of gravitational waves: Ben Abbott et al., “Observation of gravitational waves from a binary black hole merger,” Physical Review Letters 116 (2016): 061102.
39. On babies as scientists: Alison Gopnik, Andrew Meltzoff, and Patricia Kuhl, The Scientist in the Crib: Minds, Brains, and How Children Learn (William Morrow, 1999).
40. A summary of several projects on automated scientific analysis of experimental data to discover laws: Patrick Langley et al., Scientific Discovery: Computational Explorations of the Creative Processes (MIT Press, 1987).
41. Some early work on machine learning guided by prior knowledge: Stuart Russell, The Use of Knowledge in Analogy and Induction (Pitman, 1989).
42. Goodman’s philosophical analysis of induction remains a source of inspiration: Nelson Goodman, Fact, Fiction, and Forecast (University of London Press, 1954).
43. A veteran AI researcher complains about mysticism in the philosophy of science: Herbert Simon, “Explaining the ineffable: AI on the topics of intuition, insight and inspiration,” in Proceedings of the 14th International Conference on Artificial Intelligence, ed. Chris Mellish (Morgan Kaufmann, 1995).
44. A survey of inductive logic programming by two originators of the field: Stephen Muggleton and Luc de Raedt, “Inductive logic programming: Theory and methods,” Journal of Logic Programming 19–20 (1994): 629–79.
45. For an early mention of the importance of encapsulating complex operations as new primitive actions, see Alfred North Whitehead, An Introduction to Mathematics (Henry Holt, 1911).
46. Work demonstrating that a simulated robot can learn entirely by itself to stand up: John Schulman et al., “High-dimensional continuous control using generalized advantage estimation,” arXiv:1506.02438 (2015). A video demonstration is available at youtube.com/watch?v=SHLuf2ZBQSw.
47. A description of a reinforcement learning system that learns to play a capture-the-flag video game: Max Jaderberg et al., “Human-level performance in first-person multiplayer games with population-based deep reinforcement learning,” arXiv:1807.01281 (2018).
48. A view of AI progress over the next few years: Peter Stone et al., “Artificial intelligence and life in 2030,” One Hundred Year Study on Artificial Intelligence, report of the 2015 Study Panel, 2016.
49. The media-fueled argument between Elon Musk and Mark Zuckerberg: Peter Holley, “Billionaire burn: Musk says Zuckerberg’s understanding of AI threat ‘is limited,’” The Washington Post, July 25, 2017.
50. On the value of search engines to individual users: Erik Brynjolfsson, Felix Eggers, and Avinash Gannamaneni, “Using massive online choice experiments to measure changes in well-being,” working paper no. 24514, National Bureau of Economic Research, 2018.
51. Penicillin was discovered several times and its curative powers were described in medical publications, but no one seems to have noticed. See en.wikipedia.org/wiki/History_of_penicillin.
52. For a discussion of some of the more esoteric risks from omniscient, clairvoyant AI systems, see David Auerbach, “The most terrifying thought experiment of all time,” Slate, July 17, 2014.
53. An analysis of some potential pitfalls in thinking about advanced AI: Kevin Kelly, “The myth of a superhuman AI,” Wired, April 25, 2017.
54. Machines may share some aspects of cognitive structure with humans, particularly those aspects dealing with perception and manipulation of the physical world and the conceptual structures involved in natural language understanding. Their deliberative processes are likely to be quite different because of the enormous disparities in hardware.
55. According to 2016 survey data, the eighty-eighth percentile corresponds to $100,000 per year: American Community Survey, US Census Bureau, www.census.gov/programs-surveys/acs. For the same year, global per capita GDP was $10,133: National Accounts Main Aggregates Database, UN Statistics Division, unstats.un.org/unsd/snaama.
56. If the GDP growth phases in over ten years or twenty years, it’s worth $9,400 trillion or $6,800 trillion, respectively—still nothing to sneeze at. On an interesting historical note, I. J. Good, who popularized the notion of an intelligence explosion (this page), estimated the value of human-level AI to be at least “one megaKeynes,” referring to the fabled economist John Maynard Keynes. The value of Keynes’s contributions was estimated in 1963 as £100 billion, so a megaKeynes comes out to around $2,200,000 trillion in 2016 dollars. Good pinned the value of AI primarily on its potential to ensure that the human race survives indefinitely. Later, he came to wonder whether he should have added a minus sign.
57. The EU announced plans for $24 billion in research and development spending for the period 2019–20. See European Commission, “Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines,” press release, April 25, 2018. China’s long-term investment plan for AI, announced in 2017, envisages a core AI industry generating $150 billion annually by 2030. See, for example, Paul Mozur, “Beijing wants A.I. to be made in China by 2030,” The New York Times, July 20, 2017.
58. See, for example, Rio Tinto’s Mine of the Future program at riotinto.com/australia/pilbara/mine-of-the-future-9603.aspx.
59. A retrospective analysis of economic growth: Jan Luiten van Zanden et al., eds., How Was Life? Global Well-Being since 1820 (OECD Publishing, 2014).
60. The desire for relative advantage over others, rather than an absolute quality of life, is a positional good; see Chapter 9.
CHAPTER 4
1. Wikipedia’s article on the Stasi has several useful references on its workforce and its overall impact on East German life.
2. For details on Stasi files, see Cullen Murphy, God’s Jury: The Inquisition and the Making of the Modern World (Houghton Mifflin Harcourt, 2012).
3. For a thorough analysis of AI surveillance systems, see Jay Stanley, The Dawn of Robot Surveillance (American Civil Liberties Union, 2019).
4. Recent books on surveillance and control include Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) and Roger McNamee, Zucked: Waking Up to the Facebook Catastrophe (Penguin Press, 2019).
5. News article on a blackmail bot: Avivah Litan, “Meet Delilah—the first insider threat Trojan,” Gartner Blog Network, July 14, 2016.
6. For a low-tech version of human susceptibility to misinformation, in which an unsuspecting individual becomes convinced that the world is being destroyed by meteor strikes, see Derren Brown: Apocalypse, “Part One,” directed by Simon Dinsell, 2012, youtube.com/watch?v=o\_CUrMJOxqs.
7. An economic analysis of reputation systems and their corruption is given by Steven Tadelis, “Reputation and feedback systems in online platform markets,” Annual Review of Economics 8 (2016): 321–40.
8. Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” For example, there may once have been a correlation between faculty quality and faculty salary, so the US News & World Report college rankings measure faculty quality by faculty salaries. This has contributed to a salary arms race that benefits faculty members but not the students who pay for those salaries. The arms race changes faculty salaries in a way that does not depend on faculty quality, so the correlation tends to disappear.
9. An article describing German efforts to police public discourse: Bernhard Rohleder, “Germany set out to delete hate speech online. Instead, it made things worse,” WorldPost, February 20, 2018.
10. On the “infopocalypse”: Aviv Ovadya, “What’s worse than fake news? The distortion of reality itself,” WorldPost, February 22, 2018.
11. On the corruption of online hotel reviews: Dina Mayzlin, Yaniv Dover, and Judith Chevalier, “Promotional reviews: An empirical investigation of online review manipulation,” American Economic Review 104 (2014): 2421–55.
12. Statement of Germany at the Meeting of the Group of Governmental Experts, Convention on Certain Conventional Weapons, Geneva, April 10, 2018.
13. The Slaughterbots movie, funded by the Future of Life Institute, appeared in November 2017 and is available at youtube.com/watch?v=9CO6M2HsoIA.
14. For a report on one of the bigger faux pas in military public relations, see Dan Lamothe, “Pentagon agency wants drones to hunt in packs, like wolves,” The Washington Post, January 23, 2015.
15. Announcement of a large-scale drone swarm experiment: US Department of Defense, “Department of Defense announces successful micro-drone demonstration,” news release no. NR-008-17, January 9, 2017.
16. Examples of research centers studying the impact of technology on employment are the Work and Intelligent Tools and Systems group at Berkeley, the Future of Work and Workers project at the Center for Advanced Study in the Behavioral Sciences at Stanford, and the Future of Work Initiative at Carnegie Mellon University.
17. A pessimistic take on future technological unemployment: Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (Basic Books, 2015).
18. Calum Chace, The Economic Singularity: Artificial Intelligence and the Death of Capitalism (Three Cs, 2016).
19. For an excellent collection of essays, see Ajay Agrawal, Joshua Gans, and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda (National Bureau of Economic Research, 2019).
20. The mathematical analysis behind this “inverted-U” employment curve is given by James Bessen, “Artificial intelligence and jobs: The role of demand,” in The Economics of Artificial Intelligence, ed. Agrawal, Gans, and Goldfarb.
21. For a discussion of economic dislocation arising from automation, see Eduardo Porter, “Tech is splitting the US work force in two,” The New York Times, February 4, 2019. The article cites the following report for this conclusion: David Autor and Anna Salomons, “Is automation labor-displacing? Productivity growth, employment, and the labor share,” Brookings Papers on Economic Activity (2018).
22. For data on the growth of banking in the twentieth century, see Thomas Philippon, “The evolution of the US financial industry from 1860 to 2007: Theory and evidence,” working paper, 2008.
23. The bible for jobs data and the growth and decline of occupations: US Bureau of Labor Statistics, Occupational Outlook Handbook: 2018–2019 Edition (Bernan Press, 2018).
24. A report on trucking automation: Lora Kolodny, “Amazon is hauling cargo in self-driving trucks developed by Embark,” CNBC, January 30, 2019.
25. The progress of automation in legal analytics, describing the results of a contest: Jason Tashea, “AI software is more accurate, faster than attorneys when assessing NDAs,” ABA Journal, February 26, 2018.
26. A commentary by a distinguished economist, with a title explicitly evoking Keynes’s 1930 article: Lawrence Summers, “Economic possibilities for our children,” NBER Reporter (2013).
27. The analogy between data science employment and a small lifeboat for a giant cruise ship comes from a discussion with Yong Ying-I, head of Singapore’s Public Service Division. She conceded that it was correct on the global scale, but noted that “Singapore is small enough to fit in the lifeboat.”
28. Support for UBI from a conservative viewpoint: Sam Bowman, “The ideal welfare system is a basic income,” Adam Smith Institute, November 25, 2013.
29. Support for UBI from a progressive viewpoint: Jonathan Bartley, “The Greens endorse a universal basic income. Others need to follow,” The Guardian, June 2, 2017.
30. Chace, in The Economic Singularity, calls the “paradise” version of UBI the Star Trek economy, noting that in the more recent series of Star Trek episodes, money has been abolished because technology has created essentially unlimited material goods and energy. He also points to the massive changes in economic and social organization that will be needed to make such a system successful.
31. The economist Richard Baldwin also predicts a future of personal services in his book The Globotics Upheaval: Globalization, Robotics, and the Future of Work (Oxford University Press, 2019).
32. The book that is viewed as having exposed the failure of “whole-word” literacy education and launched decades of struggle between the two main schools of thought on reading: Rudolf Flesch, Why Johnny Can’t Read: And What You Can Do about It (Harper & Bros., 1955).
33. On educational methods that enable the recipient to adapt to the rapid rate of technological and economic change in the next few decades: Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence (MIT Press, 2017).
34. A radio lecture in which Turing predicted that humans would be overtaken by machines: Alan Turing, “Can digital machines think?,” May 15, 1951, radio broadcast, BBC Third Programme. Typescript available at turingarchive.org.
35. News article describing the “naturalization” of Sophia as a citizen of Saudi Arabia: Dave Gershgorn, “Inside the mechanical brain of the world’s first robot citizen,” Quartz, November 12, 2017.
36. On Yann LeCun’s view of Sophia: Shona Ghosh, “Facebook’s AI boss described Sophia the robot as ‘complete b——t’ and ‘Wizard-of-Oz AI,’” Business Insider, January 6, 2018.
37. An EU proposal on legal rights for robots: Committee on Legal Affairs of the European Parliament, “Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)),” 2017.
38. The GDPR provision on a “right to an explanation” is not, in fact, new: it is very similar to Article 15(1) of the 1995 Data Protection Directive, which it supersedes.
39. Here are three recent papers providing insightful mathematical analyses of fairness: Moritz Hardt, Eric Price, and Nati Srebro, “Equality of opportunity in supervised learning,” in Advances in Neural Information Processing Systems 29, ed. Daniel Lee et al. (2016); Matt Kusner et al., “Counterfactual fairness,” in Advances in Neural Information Processing Systems 30, ed. Isabelle Guyon et al. (2017); Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent trade-offs in the fair determination of risk scores,” in 8th Innovations in Theoretical Computer Science Conference, ed. Christos Papadimitriou (Dagstuhl Publishing, 2017).
40. News article describing the consequences of software failure for air traffic control: Simon Calder, “Thousands stranded by flight cancellations after systems failure at Europe’s air-traffic coordinator,” The Independent, April 3, 2018.
CHAPTER 5
1. Lovelace wrote, “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” This was one of the arguments against AI that was refuted by Alan Turing, “Computing machinery and intelligence,” Mind 59 (1950): 433–60.
2. The earliest known article on existential risk from AI was by Richard Thornton, “The age of machinery,” Primitive Expounder IV (1847): 281.
3. “The Book of the Machines” was based on an earlier article by Samuel Butler, “Darwin among the machines,” The Press (Christchurch, New Zealand), June 13, 1863.
4. Another lecture in which Turing predicted the subjugation of humankind: Alan Turing, “Intelligent machinery, a heretical theory” (lecture given to the 51 Society, Manchester, 1951). Typescript available at turingarchive.org.
5. Wiener’s prescient discussion of technological control over humanity and a plea to retain human autonomy: Norbert Wiener, The Human Use of Human Beings (Riverside Press, 1950).
6. The front-cover blurb from Wiener’s 1950 book is remarkably similar to the motto of the Future of Life Institute, an organization dedicated to studying the existential risks that humanity faces: “Technology is giving life the potential to flourish like never before . . . or to self-destruct.”
7. An updating of Wiener’s views arising from his increased appreciation of the possibility of intelligent machines: Norbert Wiener, God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion (MIT Press, 1964).
8. Asimov’s Three Laws of Robotics first appeared in Isaac Asimov, “Runaround,” Astounding Science Fiction, March 1942. The laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
It is important to understand that Asimov proposed these laws as a way to generate interesting story plots, not as a serious guide for future roboticists. Several of his stories, including “Runaround,” illustrate the problematic consequences of taking the laws literally. From the standpoint of modern AI, the laws fail to acknowledge any element of probability and risk: the legality of robot actions that expose a human to some probability of harm—however infinitesimal—is therefore unclear.
9. The notion of instrumental goals is due to Stephen Omohundro, “The nature of self-improving artificial intelligence” (unpublished manuscript, 2008). See also Stephen Omohundro, “The basic AI drives,” in Artificial General Intelligence 2008: Proceedings of the First AGI Conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin (IOS Press, 2008).
10. The objective of Johnny Depp’s character, Will Caster, seems to be to solve the problem of physical reincarnation so that he can be reunited with his wife, Evelyn. This just goes to show that the nature of the overarching objective doesn’t matter—the instrumental goals are all the same.
11. The original source for the idea of an intelligence explosion: I. J. Good, “Speculations concerning the first ultraintelligent machine,” in Advances in Computers, vol. 6, ed. Franz Alt and Morris Rubinoff (Academic Press, 1965).
12. An example of the impact of the intelligence explosion idea: Luke Muehlhauser, in Facing the Intelligence Explosion (intelligenceexplosion.com), writes, “Good’s paragraph ran over me like a train.”
13. Diminishing returns can be illustrated as follows: suppose that a 16 percent improvement in intelligence creates a machine capable of making an 8 percent improvement, which in turn creates a 4 percent improvement, and so on. This process reaches a limit at about 36 percent above the original level. For more discussion on these issues, see Eliezer Yudkowsky, “Intelligence explosion microeconomics,” technical report 2013-1, Machine Intelligence Research Institute, 2013.
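The limit can be checked with a few lines of code (a sketch of the halving-improvements calculation described in this note):

```python
level, gain = 1.0, 0.16
for _ in range(30):   # each round of self-improvement is half as large as the last
    level *= 1 + gain
    gain /= 2
print(level)          # about 1.36, i.e., roughly 36% above the original level
```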
14. For a view of AI in which humans become irrelevant, see Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Harvard University Press, 1988). See also Hans Moravec, Robot: Mere Machine to Transcendent Mind (Oxford University Press, 2000).
CHAPTER 6
1. A serious publication provides a serious review of Bostrom’s Superintelligence: Paths, Dangers, Strategies: “Clever cogs,” Economist, August 9, 2014.
2. A discussion of myths and misunderstandings concerning the risks of AI: Scott Alexander, “AI researchers on AI risk,” Slate Star Codex (blog), May 22, 2015.
3. The classic work on multiple dimensions of intelligence: Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences (Basic Books, 1983).
4. On the implications of multiple dimensions of intelligence for the possibility of superhuman AI: Kevin Kelly, “The myth of a superhuman AI,” Wired, April 25, 2017.
5. Evidence that chimpanzees have better short-term memory than humans: Sana Inoue and Tetsuro Matsuzawa, “Working memory of numerals in chimpanzees,” Current Biology 17 (2007), R1004–5.
6. An important early work questioning the prospects for rule-based AI systems: Hubert Dreyfus, What Computers Can’t Do (MIT Press, 1972).
7. The first in a series of books seeking physical explanations for consciousness and raising doubts about the ability of AI systems to achieve real intelligence: Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford University Press, 1989).
8. A revival of the critique of AI based on the incompleteness theorem: Luciano Floridi, “Should we be afraid of AI?” Aeon, May 9, 2016.
9. A revival of the critique of AI based on the Chinese room argument: John Searle, “What your computer can’t know,” The New York Review of Books, October 9, 2014.
10. A report from distinguished AI researchers claiming that superhuman AI is probably impossible: Peter Stone et al., “Artificial intelligence and life in 2030,” One Hundred Year Study on Artificial Intelligence, report of the 2015 Study Panel, 2016.
11. News article based on Andrew Ng’s dismissal of risks from AI: Chris Williams, “AI guru Ng: Fearing a rise of killer robots is like worrying about overpopulation on Mars,” Register, March 19, 2015.
12. An example of the “experts know best” argument: Oren Etzioni, “It’s time to intelligently discuss artificial intelligence,” Backchannel, December 9, 2014.
13. News article claiming that real AI researchers dismiss talk of risks: Erik Sofge, “Bill Gates fears AI, but AI researchers know better,” Popular Science, January 30, 2015.
14. Another claim that real AI researchers dismiss AI risks: David Kenny, “IBM’s open letter to Congress on artificial intelligence,” June 27, 2017, ibm.com/blogs/policy/kenny-artificial-intelligence-letter.
15. Report from the workshop that proposed voluntary restrictions on genetic engineering: Paul Berg et al., “Summary statement of the Asilomar Conference on Recombinant DNA Molecules,” Proceedings of the National Academy of Sciences 72 (1975): 1981–84.
16. Policy statement arising from the invention of CRISPR-Cas9 for gene editing: Organizing Committee for the International Summit on Human Gene Editing, “On human gene editing: International Summit statement,” December 3, 2015.
17. The latest policy statement from leading biologists: Eric Lander et al., “Adopt a moratorium on heritable genome editing,” Nature 567 (2019): 165–68.
18. Etzioni’s comment that one cannot mention risks if one does not also mention benefits appears alongside his analysis of survey data from AI researchers: Oren Etzioni, “No, the experts don’t think superintelligent AI is a threat to humanity,” MIT Technology Review, September 20, 2016. In his analysis he argues that anyone who expects superhuman AI to take more than twenty-five years—which includes this author as well as Nick Bostrom—is not concerned about the risks of AI.
19. A news article with quotations from the Musk–Zuckerberg “debate”: Alanna Petroff, “Elon Musk says Mark Zuckerberg’s understanding of AI is ‘limited,’” CNN Money, July 25, 2017.
20. In 2015 the Information Technology and Innovation Foundation organized a debate titled “Are super intelligent computers really a threat to humanity?” Robert Atkinson, director of the foundation, suggests that mentioning risks is likely to result in reduced funding for AI. Video available at itif.org/events/2015/06/30/are-super-intelligent-computers-really-threat-humanity; the relevant discussion begins at 41:30.
21. A claim that our culture of safety will solve the AI control problem without ever mentioning it: Steven Pinker, “Tech prophecy and the underappreciated causal power of ideas,” in Possible Minds: Twenty-Five Ways of Looking at AI, ed. John Brockman (Penguin Press, 2019).
22. For an interesting analysis of Oracle AI, see Stuart Armstrong, Anders Sandberg, and Nick Bostrom, “Thinking inside the box: Controlling and using an Oracle AI,” Minds and Machines 22 (2012): 299–324.
23. Views on why AI is not going to take away jobs: Kenny, “IBM’s open letter.”
24. An example of Kurzweil’s positive views of merging human brains with AI: Ray Kurzweil, interview by Bob Pisani, June 5, 2015, Exponential Finance Summit, New York, NY.
25. Article quoting Elon Musk on neural lace: Tim Urban, “Neuralink and the brain’s magical future,” Wait But Why, April 20, 2017.
26. For the most recent developments in Berkeley’s neural dust project, see David Piech et al., “StimDust: A 1.7 mm³, implantable wireless precision neural stimulator with ultrasonic power and communication,” arXiv:1807.07590 (2018).
27. Susan Schneider, in Artificial You: AI and the Future of Your Mind (Princeton University Press, 2019), points out the risks of ignorance in proposed technologies such as uploading and neural prostheses: that, absent any real understanding of whether electronic devices can be conscious and given the continuing philosophical confusion over persistent personal identity, we may inadvertently end our own conscious existences or inflict suffering on conscious machines without realizing that they are conscious.
28. An interview with Yann LeCun on AI risks: Guia Marie Del Prado, “Here’s what Facebook’s artificial intelligence expert thinks about the future,” Business Insider, September 23, 2015.
29. A diagnosis of AI control problems arising from an excess of testosterone: Steven Pinker, “Thinking does not imply subjugating,” in What to Think About Machines That Think, ed. John Brockman (Harper Perennial, 2015).
30. A seminal work on many philosophical topics, including the question of whether moral obligations may be perceived in the natural world: David Hume, A Treatise of Human Nature (John Noon, 1738).
31. An argument that a sufficiently intelligent machine cannot help but pursue human objectives: Rodney Brooks, “The seven deadly sins of AI predictions,” MIT Technology Review, October 6, 2017.
32. Pinker, “Thinking does not imply subjugating.”
33. For an optimistic view arguing that AI safety problems will necessarily be resolved in our favor: Steven Pinker, “Tech prophecy.”
34. On the unsuspected alignment between “skeptics” and “believers” in AI risk: Alexander, “AI researchers on AI risk.”
CHAPTER 7
1. For a guide to detailed brain modeling, now slightly outdated, see Anders Sandberg and Nick Bostrom, “Whole brain emulation: A roadmap,” technical report 2008-3, Future of Humanity Institute, Oxford University, 2008.
2. For an introduction to genetic programming from a leading exponent, see John Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection (MIT Press, 1992).
3. The parallel to Asimov’s Three Laws of Robotics is entirely coincidental.
4. The same point is made by Eliezer Yudkowsky, “Coherent extrapolated volition,” technical report, Singularity Institute, 2004. Yudkowsky argues that directly building in “Four Great Moral Principles That Are All We Need to Program into AIs” is a sure road to ruin for humanity. His notion of the “coherent extrapolated volition of humankind” has the same general flavor as the first principle; the idea is that a superintelligent AI system could work out what humans, collectively, really want.
5. You can certainly have preferences over whether a machine is helping you achieve your preferences or you are achieving them through your own efforts. For example, suppose you prefer outcome A to outcome B, all other things being equal. You are unable to achieve outcome A unaided, and yet you still prefer B to getting A with the machine’s help. In that case the machine should decide not to help you—unless perhaps it can do so in a way that is completely undetectable by you. You may, of course, have preferences about undetectable help as well as detectable help.
6. The phrase “the greatest good of the greatest number” originates in the work of Francis Hutcheson, An Inquiry into the Original of Our Ideas of Beauty and Virtue, In Two Treatises (D. Midwinter et al., 1725). Some have ascribed the formulation to an earlier comment by Wilhelm Leibniz; see Joachim Hruschka, “The greatest happiness principle and other early German anticipations of utilitarian theory,” Utilitas 3 (1991): 165–77.
7. One might propose that the machine should include terms for animals as well as humans in its own objective function. If these terms have weights that correspond to how much people care about animals, then the end result will be the same as if the machine cares about animals only through caring about humans who care about animals. Giving each living animal equal weight in the machine’s objective function would certainly be catastrophic—for example, we are outnumbered fifty thousand to one by Antarctic krill and a billion trillion to one by bacteria.
8. The moral philosopher Toby Ord made the same point to me in his comments on an early draft of this book: “Interestingly, the same is true in the study of moral philosophy. Uncertainty about moral value of outcomes was almost completely neglected in moral philosophy until very recently. Despite the fact that it is our uncertainty of moral matters that leads people to ask others for moral advice and, indeed, to do research on moral philosophy at all!”
9. One excuse for not paying attention to uncertainty about preferences is that it is formally equivalent to ordinary uncertainty, in the following sense: being uncertain about what I like is the same as being certain that I like likable things while being uncertain about what things are likable. This is just a trick that appears to move the uncertainty into the world, by making “likability by me” a property of objects rather than a property of me. In game theory, this trick has been thoroughly institutionalized since the 1960s, following a series of papers by my late colleague and Nobel laureate John Harsanyi: “Games with incomplete information played by ‘Bayesian’ players, Parts I–III,” Management Science 14 (1967, 1968): 159–82, 320–34, 486–502. In decision theory, the standard reference is the following: Richard Cyert and Morris de Groot, “Adaptive utility,” in Expected Utility Hypotheses and the Allais Paradox, ed. Maurice Allais and Ole Hagen (D. Reidel, 1979).
10. AI researchers working in the area of preference elicitation are an obvious exception. See, for example, Craig Boutilier, “On the foundations of expected expected utility,” in Proceedings of the 18th International Joint Conference on Artificial Intelligence (Morgan Kaufmann, 2003). Also Alan Fern et al., “A decision-theoretic model of assistance,” Journal of Artificial Intelligence Research 50 (2014): 71–104.
11. A critique of beneficial AI based on a misinterpretation of a journalist’s brief interview with the author in a magazine article: Adam Elkus, “How to be good: Why you can’t teach human values to artificial intelligence,” Slate, April 20, 2016.
12. The origin of trolley problems: Frank Sharp, “A study of the influence of custom on the moral judgment,” Bulletin of the University of Wisconsin 236 (1908).
13. The “anti-natalist” movement believes it is morally wrong for humans to reproduce because to live is to suffer and because humans’ impact on the Earth is profoundly negative. If you consider the existence of humanity to be a moral dilemma, then I suppose I do want machines to resolve this moral dilemma the right way.
14. Statement on China’s AI policy by Fu Ying, vice chair of the Foreign Affairs Committee of the National People’s Congress. In a letter to the 2018 World AI Conference in Shanghai, Chinese president Xi Jinping wrote, “Deepened international cooperation is required to cope with new issues in fields including law, security, employment, ethics and governance.” I am indebted to Brian Tse for bringing these statements to my attention.
15. A very interesting paper on the non-naturalistic non-fallacy, showing how preferences can be inferred from the state of the world as arranged by humans: Rohin Shah et al., “The implicit preference information in an initial state,” in Proceedings of the 7th International Conference on Learning Representations (2019), iclr.cc/Conferences/2019/Schedule.
16. Retrospective on Asilomar: Paul Berg, “Asilomar 1975: DNA modification secured,” Nature 455 (2008): 290–91.
17. News article reporting Putin’s speech on AI: “Putin: Leader in artificial intelligence will rule world,” Associated Press, September 4, 2017.
CHAPTER 8
1. Fermat's Last Theorem asserts that the equation a^n = b^n + c^n has no solutions with a, b, and c being positive whole numbers and n being a whole number larger than 2. In the margin of his copy of Diophantus's Arithmetica, Fermat wrote, "I have a truly marvellous proof of this proposition which this margin is too narrow to contain." True or not, this guaranteed that mathematicians pursued a proof with vigor in the subsequent centuries. We can easily check particular cases—for example, is 7³ equal to 6³ + 5³? (Almost, because 7³ is 343 and 6³ + 5³ is 341, but "almost" doesn't count.) There are, of course, infinitely many cases to check, and that's why we need mathematicians and not just computer programmers.
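By way of illustration, a short brute-force search (not part of the original note) can check small cases and surface near misses such as the 7³ example; the bounds and tolerance below are arbitrary choices.

```python
# Brute-force check of small cases of a^n = b^n + c^n, illustrating the note's
# point: each case is easy to check, but there are infinitely many of them.

def near_misses(max_base=100, max_exp=5, tolerance=2):
    """Return cases where a^n and b^n + c^n differ by at most `tolerance`."""
    results = []
    for n in range(3, max_exp + 1):
        for a in range(2, max_base + 1):
            for b in range(1, a):
                for c in range(1, b + 1):
                    diff = a**n - (b**n + c**n)
                    if abs(diff) <= tolerance:
                        results.append((a, n, b, c, diff))
    return results

if __name__ == "__main__":
    for a, n, b, c, diff in near_misses():
        print(f"{a}^{n} - ({b}^{n} + {c}^{n}) = {diff}")
    # The note's example appears in the output: 7^3 - (6^3 + 5^3) = 2.
```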
2. A paper from the Machine Intelligence Research Institute poses many related issues: Scott Garrabrant and Abram Demski, “Embedded agency,” AI Alignment Forum, November 15, 2018.
3. The classic work on multiattribute utility theory: Ralph Keeney and Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs (Wiley, 1976).
4. Paper introducing the idea of inverse RL: Stuart Russell, “Learning agents for uncertain environments,” in Proceedings of the 11th Annual Conference on Computational Learning Theory (ACM, 1998).
5. The original paper on structural estimation of Markov decision processes: Thomas Sargent, “Estimation of dynamic labor demand schedules under rational expectations,” Journal of Political Economy 86 (1978): 1009–44.
6. The first algorithms for IRL: Andrew Ng and Stuart Russell, “Algorithms for inverse reinforcement learning,” in Proceedings of the 17th International Conference on Machine Learning, ed. Pat Langley (Morgan Kaufmann, 2000).
7. Better algorithms for inverse RL: Pieter Abbeel and Andrew Ng, “Apprenticeship learning via inverse reinforcement learning,” in Proceedings of the 21st International Conference on Machine Learning, ed. Russ Greiner and Dale Schuurmans (ACM Press, 2004).
8. Understanding inverse RL as Bayesian updating: Deepak Ramachandran and Eyal Amir, “Bayesian inverse reinforcement learning,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence, ed. Manuela Veloso (AAAI Press, 2007).
9. How to teach helicopters to fly and do aerobatic maneuvers: Adam Coates, Pieter Abbeel, and Andrew Ng, “Apprenticeship learning for helicopter control,” Communications of the ACM 52 (2009): 97–105.
10. The original name proposed for an assistance game was a cooperative inverse reinforcement learning game, or CIRL game. See Dylan Hadfield-Menell et al., “Cooperative inverse reinforcement learning,” in Advances in Neural Information Processing Systems 29, ed. Daniel Lee et al. (2016).
11. These numbers are chosen just to make the game interesting.
12. The equilibrium solution to the game can be found by a process called iterated best response: pick any strategy for Harriet; pick the best strategy for Robbie, given Harriet’s strategy; pick the best strategy for Harriet, given Robbie’s strategy; and so on. If this process reaches a fixed point, where neither strategy changes, then we have found a solution. The process unfolds as follows:
1. Start with the greedy strategy for Harriet: make 2 paperclips if she prefers paperclips; make 1 of each if she is indifferent; make 2 staples if she prefers staples.
2. There are three possibilities Robbie has to consider, given this strategy for Harriet:
   (a) If Robbie sees Harriet make 2 paperclips, he infers that she prefers paperclips, so he now believes the value of a paperclip is uniformly distributed between 50¢ and $1.00, with an average of 75¢. In that case, his best plan is to make 90 paperclips with an expected value of $67.50 for Harriet.
   (b) If Robbie sees Harriet make 1 of each, he infers that she values paperclips and staples at 50¢, so the best choice is to make 50 of each.
   (c) If Robbie sees Harriet make 2 staples, then by the same argument as in 2(a), he should make 90 staples.
3. Given this strategy for Robbie, Harriet's best strategy is now somewhat different from the greedy strategy in step 1: if Robbie is going to respond to her making 1 of each by making 50 of each, then she is better off making 1 of each not just if she is exactly indifferent but if she is anywhere close to indifferent. In fact, the optimal policy is now to make 1 of each if she values paperclips anywhere between about 44.6¢ and 55.4¢.
4. Given this new strategy for Harriet, Robbie's strategy remains unchanged. For example, if she chooses 1 of each, he infers that the value of a paperclip is uniformly distributed between 44.6¢ and 55.4¢, with an average of 50¢, so the best choice is to make 50 of each. Because Robbie's strategy is the same as in step 2, Harriet's best response will be the same as in step 3, and we have found the equilibrium. (A brief numerical check of this process appears in the sketch below.)
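The arithmetic above can be verified with a short script. The sketch below assumes the payoff structure implied by the note's numbers: a paperclip is worth p dollars to Harriet, a staple $1 − p, Robbie makes either 90 of one item or 50 of each, and Harriet first makes 2 items. These assumptions are reconstructed from the note rather than stated in it.

```python
# Numerical check of the iterated-best-response equilibrium sketched in the note.

def robbie_value(counts, p):
    """Value to Harriet of Robbie making (paperclips, staples), at paperclip value p."""
    clips, staples = counts
    return clips * p + staples * (1 - p)

def harriet_value(signal, p):
    """Value of Harriet's own two items, by signal: 'clips', 'mixed', or 'staples'."""
    return {"clips": 2 * p, "mixed": 1.0, "staples": 2 * (1 - p)}[signal]

# Robbie's best responses from step 2 of the note: 90 clips, 50 of each, or 90 staples.
ROBBIE_RESPONSE = {"clips": (90, 0), "mixed": (50, 50), "staples": (0, 90)}

def harriet_best_signal(p):
    """Harriet's best response to Robbie's strategy (step 3 of the note)."""
    return max(ROBBIE_RESPONSE,
               key=lambda s: harriet_value(s, p) + robbie_value(ROBBIE_RESPONSE[s], p))

if __name__ == "__main__":
    # Expected value of 90 paperclips when p is uniform on [0.5, 1.0]: 90 * 0.75.
    print("E[value of 90 clips | clips signal] =", 90 * 0.75)   # $67.50, as in 2(a)
    # Recover the indifference band from step 3 (about 44.6 cents to 55.4 cents).
    grid = [i / 1000 for i in range(1001)]
    band = [p for p in grid if harriet_best_signal(p) == "mixed"]
    print("Harriet signals 'mixed' for p in about [%.3f, %.3f]" % (band[0], band[-1]))
```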
13. For a more complete analysis of the off-switch game, see Dylan Hadfield-Menell et al., “The off-switch game,” in Proceedings of the 26th International Joint Conference on Artificial Intelligence, ed. Carles Sierra (IJCAI, 2017).
14. The proof of the general result is quite simple if you don't mind integral signs. Let P(u) be Robbie's prior probability density over Harriet's utility for the proposed action a. Then the value of going ahead with a is

EU(a) = ∫_{−∞}^{0} P(u)·u du + ∫_{0}^{+∞} P(u)·u du.

(We will see shortly why the integral is split up in this way.) On the other hand, the value of action d, deferring to Harriet, is composed of two parts: if u > 0, then Harriet lets Robbie go ahead, so the value is u, but if u < 0, then Harriet switches Robbie off, so the value is 0:

EU(d) = ∫_{−∞}^{0} P(u)·0 du + ∫_{0}^{+∞} P(u)·u du.
Comparing the expressions for EU(a) and EU(d), we see immediately that EU(d) ≥ EU(a) because the expression for EU(d) has the negative-utility region zeroed out. The two choices have equal value only when the negative region has zero probability—that is, when Robbie is already certain that Harriet likes the proposed action. The theorem is a direct analog of the well-known theorem concerning the non-negative expected value of information.
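As a numerical illustration of the inequality, the following sketch integrates EU(a) and EU(d) on a grid, taking P(u) to be a normal density purely for the sake of example; the means and standard deviations are arbitrary.

```python
# Numerical illustration of EU(d) >= EU(a), with equality approached only as the
# probability of negative u goes to zero. P(u) is an example Gaussian density.

import math

def normal_pdf(u, mu, sigma):
    return math.exp(-0.5 * ((u - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expected_utilities(mu, sigma, lo=-50.0, hi=50.0, steps=200_000):
    """Crude numerical integration of EU(a) and EU(d) over [lo, hi]."""
    du = (hi - lo) / steps
    eu_a = eu_d = 0.0
    for i in range(steps):
        u = lo + (i + 0.5) * du
        p = normal_pdf(u, mu, sigma)
        eu_a += p * u * du            # go ahead: Robbie gets u, positive or negative
        eu_d += p * max(u, 0.0) * du  # defer: Harriet zeroes out the negative-u region
    return eu_a, eu_d

if __name__ == "__main__":
    for mu, sigma in [(1.0, 2.0), (-1.0, 2.0), (3.0, 0.5)]:
        eu_a, eu_d = expected_utilities(mu, sigma)
        print(f"mu={mu:+.1f}, sigma={sigma}: EU(a)={eu_a:.3f}  EU(d)={eu_d:.3f}  "
              f"difference={eu_d - eu_a:.3f}")
    # The difference shrinks toward zero as Robbie becomes more certain that u > 0.
```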
15. Perhaps the next elaboration in line, for the one human–one robot case, is to consider a Harriet who does not yet know her own preferences regarding some aspect of the world, or whose preferences have not yet been formed.
16. To see how exactly Robbie converges to an incorrect belief, consider a model in which Harriet is slightly irrational, making errors with a probability that diminishes exponentially as the size of error increases. Robbie offers Harriet 4 paperclips in return for 1 staple; she refuses. According to Robbie’s beliefs, this is irrational: even at 25¢ per paperclip and 75¢ per staple, she should accept 4 for 1. Therefore, she must have made a mistake—but this mistake is much more likely if her true value is 25¢ than if it is, say, 30¢, because the error costs her a lot more if her value for paperclips is 30¢. Now Robbie’s probability distribution has 25¢ as the most likely value because it represents the smallest error on Harriet’s part, with exponentially lower probabilities for values higher than 25¢. If he keeps trying the same experiment, the probability distribution becomes more and more concentrated close to 25¢. In the limit, Robbie becomes certain that Harriet’s value for paperclips is 25¢.
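The convergence described here can be simulated with a grid-based Bayesian update. In the sketch below, the bounded prior (25¢ to 75¢), the assumption that a staple is worth $1 minus the value of a paperclip, and the "temperature" governing Harriet's errors are all illustrative choices rather than values taken from the text.

```python
# Rough simulation of the note's scenario: Robbie's prior over the paperclip value
# is bounded, Harriet's true value lies outside it, and Harriet is modeled as
# noisily rational, with error probability falling off exponentially in error cost.

import math

TEMPERATURE = 0.05  # assumed rate at which error probability falls off with cost

def refusal_probability(v):
    """Probability that Harriet refuses 4 paperclips for 1 staple if a paperclip is
    worth v dollars (and a staple 1 - v). Refusing forgoes a gain of 5v - 1."""
    cost_of_refusing = max(0.0, 5 * v - 1)
    return math.exp(-cost_of_refusing / TEMPERATURE)

def posterior_after_refusals(n_refusals, grid_size=501):
    """Grid-based Bayesian update of Robbie's uniform prior on [0.25, 0.75]."""
    grid = [0.25 + 0.5 * i / (grid_size - 1) for i in range(grid_size)]
    weights = [refusal_probability(v) ** n_refusals for v in grid]  # uniform prior
    total = sum(weights)
    return grid, [w / total for w in weights]

if __name__ == "__main__":
    for n in [1, 5, 20]:
        grid, post = posterior_after_refusals(n)
        mean = sum(v * p for v, p in zip(grid, post))
        mode = grid[max(range(len(grid)), key=lambda i: post[i])]
        print(f"after {n:2d} refusals: posterior mode = {mode:.3f}, mean = {mean:.3f}")
    # The posterior piles up at 25 cents, the smallest possible error in Robbie's
    # model, even if Harriet's true value (say, 20 cents) lies outside the prior.
```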
17. Robbie could, for example, have a normal (Gaussian) distribution for his prior belief about the exchange rate, which stretches from −∞ to +∞.
18. For an example of the kind of mathematical analysis that may be needed, see Avrim Blum, Lisa Hellerstein, and Nick Littlestone, “Learning in the presence of finitely or infinitely many irrelevant attributes,” Journal of Computer and System Sciences 50 (1995): 32–40. Also Lori Dalton, “Optimal Bayesian feature selection,” in Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, ed. Charles Bouman, Robert Nowak, and Anna Scaglione (IEEE, 2013).
19. Here I am rephrasing slightly a question by Moshe Vardi at the Asilomar Conference on Beneficial AI, 2017.
20. Michael Wellman and Jon Doyle, “Preferential semantics for goals,” in Proceedings of the 9th National Conference on Artificial Intelligence (AAAI Press, 1991). This paper draws on a much earlier proposal by Georg von Wright, “The logic of preference reconsidered,” Theory and Decision 3 (1972): 140–67.
21. My late Berkeley colleague has the distinction of becoming an adjective. See Paul Grice, Studies in the Way of Words (Harvard University Press, 1989).
22. The original paper on direct stimulation of pleasure centers in the brain: James Olds and Peter Milner, “Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain,” Journal of Comparative and Physiological Psychology 47 (1954): 419–27.
23. Letting rats push the button: James Olds, “Self-stimulation of the brain; its use to study local effects of hunger, sex, and drugs,” Science 127 (1958): 315–24.
24. Letting humans push the button: Robert Heath, “Electrical self-stimulation of the brain in man,” American Journal of Psychiatry 120 (1963): 571–77.
25. A first mathematical treatment of wireheading, showing how it occurs in reinforcement learning agents: Mark Ring and Laurent Orseau, “Delusion, survival, and intelligent agents,” in Artificial General Intelligence: 4th International Conference, ed. Jürgen Schmidhuber, Kristinn Thórisson, and Moshe Looks (Springer, 2011). One possible solution to the wireheading problem: Tom Everitt and Marcus Hutter, “Avoiding wireheading with value reinforcement learning,” arXiv:1605.03143 (2016).
26. How it might be possible for an intelligence explosion to occur safely: Benja Fallenstein and Nate Soares, “Vingean reflection: Reliable reasoning for self-improving agents,” technical report 2015-2, Machine Intelligence Research Institute, 2015.
27. The difficulty agents face in reasoning about themselves and their successors: Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence,” in Artificial General Intelligence: 7th International Conference, ed. Ben Goertzel, Laurent Orseau, and Javier Snaider (Springer, 2014).
28. Showing why an agent might pursue an objective different from its true objective if its computational abilities are limited: Jonathan Sorg, Satinder Singh, and Richard Lewis, “Internal rewards mitigate agent boundedness,” in Proceedings of the 27th International Conference on Machine Learning, ed. Johannes Fürnkranz and Thorsten Joachims (2010), icml.cc/Conferences/2010/papers/icml2010proceedings.zip.
CHAPTER 9
1. Some have argued that biology and neuroscience are also directly relevant. See, for example, Gopal Sarma, Adam Safron, and Nick Hay, “Integrative biological simulation, neuropsychology, and AI safety,” arxiv.org/abs/1811.03493 (2018).
2. On the possibility of making computers liable for damages: Paulius Čerka, Jurgita Grigienė, and Gintarė Sirbikytė, “Liability for damages caused by artificial intelligence,” Computer Law and Security Review 31 (2015): 376–89.
3. For an excellent machine-oriented introduction to standard ethical theories and their implications for designing AI systems, see Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2008).
4. The sourcebook for utilitarian thought: Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (T. Payne & Son, 1789).
5. Mill’s elaboration of his tutor Bentham’s ideas was extraordinarily influential on liberal thought: John Stuart Mill, Utilitarianism (Parker, Son & Bourn, 1863).
6. The paper introducing preference utilitarianism and preference autonomy: John Harsanyi, “Morality and the theory of rational behavior,” Social Research 44 (1977): 623–56.
7. An argument for social aggregation via weighted sums of utilities when deciding on behalf of multiple individuals: John Harsanyi, “Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility,” Journal of Political Economy 63 (1955): 309–21.
8. A generalization of Harsanyi’s social aggregation theorem to the case of unequal prior beliefs: Andrew Critch, Nishant Desai, and Stuart Russell, “Negotiable reinforcement learning for Pareto optimal sequential decision-making,” in Advances in Neural Information Processing Systems 31, ed. Samy Bengio et al. (2018).
9. The sourcebook for ideal utilitarianism: G. E. Moore, Ethics (Williams & Norgate, 1912).
10. News article citing Stuart Armstrong’s colorful example of misguided utility maximization: Chris Matyszczyk, “Professor warns robots could keep us in coffins on heroin drips,” CNET, June 29, 2015.
11. Popper’s theory of negative utilitarianism (so named later by Smart): Karl Popper, The Open Society and Its Enemies (Routledge, 1945).
12. A refutation of negative utilitarianism: R. Ninian Smart, “Negative utilitarianism,” Mind 67 (1958): 542–43.
13. For a typical argument for risks arising from "end human suffering" commands, see "Why do we think AI will destroy us?," Reddit, reddit.com/r/Futurology/comments/38fp6o/why_do_we_think_ai_will_destroy_us.
14. A good source for self-deluding incentives in AI: Ring and Orseau, “Delusion, survival, and intelligent agents.”
15. On the impossibility of interpersonal comparisons of utility: W. Stanley Jevons, The Theory of Political Economy (Macmillan, 1871).
16. The utility monster makes its appearance in Robert Nozick, Anarchy, State, and Utopia (Basic Books, 1974).
17. For example, we can fix immediate death to have a utility of 0 and a maximally happy life to have a utility of 1. See John Isbell, “Absolute games,” in Contributions to the Theory of Games, vol. 4, ed. Albert Tucker and R. Duncan Luce (Princeton University Press, 1959).
18. The oversimplified nature of Thanos’s population-halving policy is discussed by Tim Harford, “Thanos shows us how not to be an economist,” Financial Times, April 20, 2019. Even before the film debuted, defenders of Thanos began to congregate on the subreddit r/thanosdidnothingwrong/. In keeping with the subreddit’s motto, 350,000 of the 700,000 members were later purged.
19. On utilities for populations of different sizes: Henry Sidgwick, The Methods of Ethics (Macmillan, 1874).
20. The Repugnant Conclusion and other knotty problems of utilitarian thinking: Derek Parfit, Reasons and Persons (Oxford University Press, 1984).
21. For a concise summary of axiomatic approaches to population ethics, see Peter Eckersley, “Impossibility and uncertainty theorems in AI value alignment,” in Proceedings of the AAAI Workshop on Artificial Intelligence Safety, ed. Huáscar Espinoza et al. (2019).
22. Calculating the long-term carrying capacity of the Earth: Daniel O’Neill et al., “A good life for all within planetary boundaries,” Nature Sustainability 1 (2018): 88–95.
23. For an application of moral uncertainty to population ethics, see Hilary Greaves and Toby Ord, “Moral uncertainty about population axiology,” Journal of Ethics and Social Philosophy 12 (2017): 135–67. A more comprehensive analysis is provided by Will MacAskill, Krister Bykvist, and Toby Ord, Moral Uncertainty (Oxford University Press, forthcoming).
24. Quotation showing that Smith was not so obsessed with selfishness as is commonly imagined: Adam Smith, The Theory of Moral Sentiments (Andrew Millar; Alexander Kincaid and J. Bell, 1759).
25. For an introduction to the economics of altruism, see Serge-Christophe Kolm and Jean Ythier, eds., Handbook of the Economics of Giving, Altruism and Reciprocity, 2 vols. (North-Holland, 2006).
26. On charity as selfish: James Andreoni, “Impure altruism and donations to public goods: A theory of warm-glow giving,” Economic Journal 100 (1990): 464–77.
27. For those who like equations: let Alice's intrinsic well-being be measured by w_A and Bob's by w_B. Then the utilities for Alice and Bob are defined as follows:

U_A = w_A + C_AB w_B
U_B = w_B + C_BA w_A.

Some authors suggest that Alice cares about Bob's overall utility U_B rather than just his intrinsic well-being w_B, but this leads to a kind of circularity, in that Alice's utility depends on Bob's utility, which in turn depends on Alice's utility; sometimes stable solutions can be found, but the underlying model can be questioned. See, for example, Hajime Hori, "Nonpaternalistic altruism and functional interdependence of social preferences," Social Choice and Welfare 32 (2009): 59–77.
28. Models in which each individual’s utility is a linear combination of everyone’s well-being are just one possibility. Much more general models are possible—for example, models in which some individuals prefer to avoid severe inequalities in the distribution of well-being, even at the expense of reducing the total, while other individuals would really prefer that no one have preferences about inequality at all. Thus, the overall approach I am proposing accommodates multiple moral theories held by individuals; at the same time, it doesn’t insist that any one of those moral theories is correct or should have much sway over outcomes for those who hold a different theory. I am indebted to Toby Ord for pointing out this feature of the approach.
29. Arguments of this type have been made against policies designed to ensure equality of outcome, notably by the American legal philosopher Ronald Dworkin. See, for example, Ronald Dworkin, “What is equality? Part 1: Equality of welfare,” Philosophy and Public Affairs 10 (1981): 185–246. I am indebted to Iason Gabriel for this reference.
30. Malice in the form of revenge-based punishment for transgressions is certainly a common tendency. Although it plays a social role in keeping members of a community in line, it can be replaced by an equally effective policy driven by deterrence and prevention—that is, weighing the intrinsic harm done when punishing the transgressor against the benefits to the larger society.
31. Let E_AB and P_AB be Alice's coefficients of envy and pride, respectively, and assume that they apply to the difference in well-being. Then a (somewhat oversimplified) formula for Alice's utility could be the following:

U_A = w_A + C_AB w_B − E_AB (w_B − w_A) + P_AB (w_A − w_B)
    = (1 + E_AB + P_AB) w_A + (C_AB − E_AB − P_AB) w_B.

Thus, if Alice has positive pride and envy coefficients, they act on Bob's welfare exactly like sadism and malice coefficients: Alice is happier if Bob's welfare is lowered, all other things being equal. In reality, pride and envy typically apply not to differences in well-being but to differences in visible aspects thereof, such as status and possessions. Bob's hard toil in acquiring his possessions (which lowers his overall well-being) may not be visible to Alice. This can lead to the self-defeating behaviors that go under the heading of "keeping up with the Joneses."
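A minimal sketch of the model in notes 27 and 31, with purely illustrative coefficient values, shows how a positive envy coefficient can outweigh a positive caring coefficient.

```python
# Alice's utility: her own well-being, plus a caring term for Bob, minus envy and
# pride terms on the difference in well-being. Coefficient values are illustrative.

def alice_utility(w_a, w_b, care=0.3, envy=0.0, pride=0.0):
    """U_A = w_A + C_AB*w_B - E_AB*(w_B - w_A) + P_AB*(w_A - w_B)."""
    return w_a + care * w_b - envy * (w_b - w_a) + pride * (w_a - w_b)

if __name__ == "__main__":
    # With envy = pride = 0, this reduces to the purely caring model of note 27.
    print(alice_utility(w_a=5.0, w_b=8.0))              # 5 + 0.3*8 = 7.4
    # With a larger envy coefficient, raising Bob's well-being lowers Alice's
    # utility, just as a small malice coefficient would.
    print(alice_utility(w_a=5.0, w_b=8.0, envy=0.5))    # 7.4 - 0.5*3 = 5.9
    print(alice_utility(w_a=5.0, w_b=9.0, envy=0.5))    # 7.7 - 0.5*4 = 5.7
```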
32. On the sociology of conspicuous consumption: Thorstein Veblen, The Theory of the Leisure Class: An Economic Study of Institutions (Macmillan, 1899).
33. Fred Hirsch, The Social Limits to Growth (Routledge & Kegan Paul, 1977).
34. I am indebted to Ziyad Marar for pointing me to social identity theory and its importance in understanding human motivation and behavior. See, for example, Dominic Abrams and Michael Hogg, eds., Social Identity Theory: Constructive and Critical Advances (Springer, 1990). For a much briefer summary of the main ideas, see Ziyad Marar, “Social identity,” in This Idea Is Brilliant: Lost, Overlooked, and Underappreciated Scientific Concepts Everyone Should Know, ed. John Brockman (Harper Perennial, 2018).
35. Here, I am not suggesting that we necessarily need a detailed understanding of the neural implementation of cognition; what is needed is a model at the “software” level of how preferences, both explicit and implicit, generate behavior. Such a model would need to incorporate what is known about the reward system.
36. Ralph Adolphs and David Anderson, The Neuroscience of Emotion: A New Synthesis (Princeton University Press, 2018).
37. See, for example, Rosalind Picard, Affective Computing, 2nd ed. (MIT Press, 1998).
38. Waxing lyrical on the delights of the durian: Alfred Russel Wallace, The Malay Archipelago: The Land of the Orang-Utan, and the Bird of Paradise (Macmillan, 1869).
39. A less rosy view of the durian: Alan Davidson, The Oxford Companion to Food (Oxford University Press, 1999). Buildings have been evacuated and planes turned around in mid-flight because of the durian’s overpowering odor.
40. I discovered after writing this chapter that the durian was used for exactly the same philosophical purpose by Laurie Paul, Transformative Experience (Oxford University Press, 2014). Paul suggests that uncertainty about one’s own preferences presents fatal problems for decision theory, a view contradicted by Richard Pettigrew, “Transformative experience and decision theory,” Philosophy and Phenomenological Research 91 (2015): 766–74. Neither author refers to the early work of Harsanyi, “Games with incomplete information, Parts I–III,” or Cyert and de Groot, “Adaptive utility.”
41. An initial paper on helping humans who don’t know their own preferences and are learning about them: Lawrence Chan et al., “The assistive multi-armed bandit,” in Proceedings of the 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI), ed. David Sirkin et al. (IEEE, 2019).
42. Eliezer Yudkowsky, in Coherent Extrapolated Volition (Singularity Institute, 2004), lumps all these aspects, as well as plain inconsistency, under the heading of muddle—a term that has not, unfortunately, caught on.
43. On the two selves who evaluate experiences: Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux, 2011).
44. Edgeworth’s hedonimeter, an imaginary device for measuring happiness moment to moment: Francis Edgeworth, Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences (Kegan Paul, 1881).
45. A standard text on sequential decisions under uncertainty: Martin Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming (Wiley, 1994).
46. On axiomatic assumptions that justify additive representations of utility over time: Tjalling Koopmans, “Representation of preference orderings over time,” in Decision and Organization, ed. C. Bartlett McGuire, Roy Radner, and Kenneth Arrow (North-Holland, 1972).
47. The 2019 humans (who might, in 2099, be long dead or might just be the earlier selves of 2099 humans) might wish to build the machines in a way that respects the 2019 preferences of the 2019 humans rather than pandering to the undoubtedly shallow and ill-considered preferences of humans in 2099. This would be like drawing up a constitution that disallows any amendments. If the 2099 humans, after suitable deliberation, decide they wish to override the preferences built in by the 2019 humans, it seems reasonable that they should be able to do so. After all, it is they and their descendants who have to live with the consequences.
48. I am indebted to Wendell Wallach for this observation.
49. An early paper dealing with changes in preferences over time: John Harsanyi, “Welfare economics of variable tastes,” Review of Economic Studies 21 (1953): 204–13. A more recent (and somewhat technical) survey is provided by Franz Dietrich and Christian List, “Where do preferences come from?,” International Journal of Game Theory 42 (2013): 613–37. See also Laurie Paul, Transformative Experience (Oxford University Press, 2014), and Richard Pettigrew, “Choosing for Changing Selves,” philpapers.org/archive/PETCFC.pdf.
50. For a rational analysis of irrationality, see Jon Elster, Ulysses and the Sirens: Studies in Rationality and Irrationality (Cambridge University Press, 1979).
51. For promising ideas on cognitive prostheses for humans, see Falk Lieder, “Beyond bounded rationality: Reverse-engineering and enhancing human intelligence” (PhD thesis, University of California, Berkeley, 2018).
CHAPTER 10
1. On the application of assistance games to driving: Dorsa Sadigh et al., “Planning for cars that coordinate with people,” Autonomous Robots 42 (2018): 1405–26.
2. Apple is, curiously, absent from this list. It does have an AI research group and is ramping up rapidly. Its traditional culture of secrecy means that its impact in the marketplace of ideas is quite limited so far.
3. Max Tegmark, interview, Do You Trust This Computer?, directed by Chris Paine, written by Mark Monroe (2018).
4. On estimating the impact of cybercrime: “Cybercrime cost $600 billion and targets banks first,” Security Magazine, February 21, 2018.
APPENDIX A
1. The basic plan for chess programs of the next sixty years: Claude Shannon, “Programming a computer for playing chess,” Philosophical Magazine, 7th ser., 41 (1950): 256–75. Shannon’s proposal drew on a centuries-long tradition of evaluating chess positions by adding up piece values; see, for example, Pietro Carrera, Il gioco degli scacchi (Giovanni de Rossi, 1617).
2. A report describing Samuel’s heroic research on an early reinforcement learning algorithm for checkers: Arthur Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development 3 (1959): 210–29.
3. The concept of rational metareasoning and its application to search and game playing emerged from the thesis research of my student Eric Wefald, who died tragically in a car accident before he could write up his work; the following appeared posthumously: Stuart Russell and Eric Wefald, Do the Right Thing: Studies in Limited Rationality (MIT Press, 1991). See also Eric Horvitz, “Rational metareasoning and compilation for optimizing decisions under bounded resources,” in Computational Intelligence, II: Proceedings of the International Symposium, ed. Francesco Gardin and Giancarlo Mauri (North-Holland, 1990); and Stuart Russell and Eric Wefald, “On optimal game-tree search using rational meta-reasoning,” in Proceedings of the 11th International Joint Conference on Artificial Intelligence, ed. Natesa Sridharan (Morgan Kaufmann, 1989).
4. Perhaps the first paper showing how hierarchical organization reduces the combinatorial complexity of planning: Herbert Simon, “The architecture of complexity,” Proceedings of the American Philosophical Society 106 (1962): 467–82.
5. The canonical reference for hierarchical planning is Earl Sacerdoti, “Planning in a hierarchy of abstraction spaces,” Artificial Intelligence 5 (1974): 115–35. See also Austin Tate, “Generating project networks,” in Proceedings of the 5th International Joint Conference on Artificial Intelligence, ed. Raj Reddy (Morgan Kaufmann, 1977).
6. A formal definition of what high-level actions do: Bhaskara Marthi, Stuart Russell, and Jason Wolfe, “Angelic semantics for high-level actions,” in Proceedings of the 17th International Conference on Automated Planning and Scheduling, ed. Mark Boddy, Maria Fox, and Sylvie Thiébaux (AAAI Press, 2007).
APPENDIX B
1. This example is unlikely to be from Aristotle, but may have originated with Sextus Empiricus, who lived probably in the second or third century CE.
2. The first algorithm for theorem-proving in first-order logic worked by reducing first-order sentences to (very large numbers of) propositional sentences: Martin Davis and Hilary Putnam, “A computing procedure for quantification theory,” Journal of the ACM 7 (1960): 201–15.
3. An improved algorithm for propositional inference: Martin Davis, George Logemann, and Donald Loveland, “A machine program for theorem-proving,” Communications of the ACM 5 (1962): 394–97.
4. The satisfiability problem—deciding whether a collection of sentences is true in some world—is NP-complete. The reasoning problem—deciding whether a sentence follows from the known sentences—is co-NP-complete, a class that is thought to be harder than NP-complete problems.
5. There are two exceptions to this rule: no repetition (a stone may not be played that returns the board to a situation that existed previously) and no suicide (a stone may not be placed such that it would immediately be captured—for example, if it is already surrounded).
6. The work that introduced first-order logic as we understand it today (Begriffsschrift means “concept writing”): Gottlob Frege, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Halle, 1879). Frege’s notation for first-order logic was so bizarre and unwieldy that it was soon replaced by the notation introduced by Giuseppe Peano, which remains in common use today.
7. A summary of Japan’s bid for supremacy through knowledge-based systems: Edward Feigenbaum and Pamela McCorduck, The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World (Addison-Wesley, 1983).
8. The US efforts included the Strategic Computing Initiative and the formation of the Microelectronics and Computer Technology Corporation (MCC). See Alex Roland and Philip Shiman, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993 (MIT Press, 2002).
9. A history of Britain’s response to the re-emergence of AI in the 1980s: Brian Oakley and Kenneth Owen, Alvey: Britain’s Strategic Computing Initiative (MIT Press, 1990).
10. The origin of the term GOFAI: John Haugeland, Artificial Intelligence: The Very Idea (MIT Press, 1985).
11. Interview with Demis Hassabis on the future of AI and deep learning: Nick Heath, “Google DeepMind founder Demis Hassabis: Three truths about AI,” TechRepublic, September 24, 2018.
APPENDIX C
1. Pearl’s work was recognized by the Turing Award in 2011.
2. Bayes nets in more detail: Every node in the network is annotated with the probability of each possible value, given each possible combination of values for the node’s parents (that is, those nodes that point to it). For example, the probability that Doubles₁₂ has value true is 1.0 when D₁ and D₂ have the same value, and 0.0 otherwise. A possible world is an assignment of values to all the nodes. The probability of such a world is the product of the appropriate probabilities from each of the nodes.
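A minimal sketch of this product rule for the dice example, assuming fair six-sided dice (the note does not specify the dice probabilities):

```python
# The probability of a possible world is the product of each node's probability
# given its parents' values. Nodes: D1 and D2 (assumed fair dice) and a
# deterministic Doubles node that is true exactly when D1 == D2.

def p_die(value):
    """P(D = value) for a fair six-sided die."""
    return 1 / 6 if value in range(1, 7) else 0.0

def p_doubles(doubles, d1, d2):
    """P(Doubles = doubles | D1 = d1, D2 = d2): 1 if consistent, else 0."""
    return 1.0 if doubles == (d1 == d2) else 0.0

def world_probability(d1, d2, doubles):
    """Joint probability of one complete assignment to all three nodes."""
    return p_die(d1) * p_die(d2) * p_doubles(doubles, d1, d2)

if __name__ == "__main__":
    print(world_probability(3, 3, True))    # 1/36: both dice show 3, Doubles is true
    print(world_probability(3, 3, False))   # 0: inconsistent with the Doubles node
    # Marginal of Doubles = True, summing over all possible worlds: 1/6.
    print(sum(world_probability(a, b, True) for a in range(1, 7) for b in range(1, 7)))
```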
3. A compendium of applications of Bayes nets: Olivier Pourret, Patrick Naïm, and Bruce Marcot, eds., Bayesian Networks: A Practical Guide to Applications (Wiley, 2008).
4. The basic paper on probabilistic programming: Daphne Koller, David McAllester, and Avi Pfeffer, “Effective Bayesian inference for stochastic programs,” in Proceedings of the 14th National Conference on Artificial Intelligence (AAAI Press, 1997). For many additional references, see probabilistic-programming.org.
5. Using probabilistic programs to model human concept learning: Brenden Lake, Ruslan Salakhutdinov, and Joshua Tenenbaum, “Human-level concept learning through probabilistic program induction,” Science 350 (2015): 1332–38.
6. For a detailed description of the seismic monitoring application and associated probability model, see Nimar Arora, Stuart Russell, and Erik Sudderth, “NET-VISA: Network processing vertically integrated seismic analysis,” Bulletin of the Seismological Society of America 103 (2013): 709–29.
7. News article describing one of the first serious self-driving car crashes: Ryan Randazzo, “Who was at fault in self-driving Uber crash? Accounts in Tempe police report disagree,” Republic (azcentral.com), March 29, 2017.
APPENDIX D
1. The foundational discussion of inductive learning: David Hume, Philosophical Essays Concerning Human Understanding (A. Millar, 1748).
2. Leslie Valiant, “A theory of the learnable,” Communications of the ACM 27 (1984): 1134–42. See also Vladimir Vapnik, Statistical Learning Theory (Wiley, 1998). Valiant’s approach concentrated on computational complexity, Vapnik’s on statistical analysis of the learning capacity of various classes of hypotheses, but both shared a common theoretical core connecting data and predictive accuracy.
3. For example, to learn the difference between the “situational superko” and “natural situational superko” rules, the learning algorithm would have to try repeating a board position that it had created previously by a pass rather than by playing a stone. The results would be different in different countries.
4. For a description of the ImageNet competition, see Olga Russakovsky et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision 115 (2015): 211–52.
5. The first demonstration of deep networks for vision: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, ed. Fernando Pereira et al. (2012).
6. The difficulty of distinguishing over one hundred breeds of dogs: Andrej Karpathy, “What I learned from competing against a ConvNet on ImageNet,” Andrej Karpathy Blog, September 2, 2014.
7. Blog post on inceptionism research at Google: Alexander Mordvintsev, Christopher Olah, and Mike Tyka, “Inceptionism: Going deeper into neural networks,” Google AI Blog, June 17, 2015. The idea seems to have originated with J. P. Lewis, “Creation by refinement: A creativity paradigm for gradient descent learning networks,” in Proceedings of the IEEE International Conference on Neural Networks (IEEE, 1988).
8. News article on Geoff Hinton having second thoughts about deep networks: Steve LeVine, “Artificial intelligence pioneer says we need to start over,” Axios, September 15, 2017.
9. A catalog of shortcomings of deep learning: Gary Marcus, “Deep learning: A critical appraisal,” arXiv:1801.00631 (2018).
10. A popular textbook on deep learning, with a frank assessment of its weaknesses: François Chollet, Deep Learning with Python (Manning Publications, 2017).
11. An explanation of explanation-based learning: Thomas Dietterich, “Learning at the knowledge level,” Machine Learning 1 (1986): 287–315.
12. A superficially quite different explanation of explanation-based learning: John Laird, Paul Rosenbloom, and Allen Newell, “Chunking in Soar: The anatomy of a general learning mechanism,” Machine Learning 1 (1986): 11–46.
Image Credits
Figure 2: (b) © The Sun / News Licensing; (c) Courtesy of Smithsonian Institution Archives.
Figure 4: © SRI International. creativecommons.org/licenses/by/3.0/legalcode.
Figure 5: (left) © Berkeley AI Research Lab; (right) © Boston Dynamics.
Figure 6: © The Saul Steinberg Foundation / Artists Rights Society (ARS), New York.
Figure 7: (left) © Noam Eshel, Defense Update; (right) © Future of Life Institute / Stuart Russell.
Figure 10: (left) © AFP; (right) Courtesy of Henrik Sorensen.
Figure 11: Elysium © 2013 MRC II Distribution Company L.P. All Rights Reserved. Courtesy of Columbia Pictures.
Figure 14: © OpenStreetMap contributors. OpenStreetMap.org. creativecommons.org/licenses/by/2.0/legalcode.
Figure 19: Terrain photo: DigitalGlobe via Getty Images.
Figure 20: (right) Courtesy of the Tempe Police Department.
Figure 24: © Jessica Mullen / Deep Dreamscope. creativecommons.org/licenses/by/2.0/legalcode.
Index
AAAI (Association for the Advancement of Artificial Intelligence), 250
Abbeel, Pieter, 73, 192
abstract actions, hierarchy of, 87–90
abstract planning, 264–66
access shortcomings, of intelligent personal assistants, 67–68
action potentials, 15
actions, discovering, 87–90
actuators, 72
Ada, Countess of Lovelace. See Lovelace, Ada
adaptive organisms, 18–19
agent. See intelligent agent
agent program, 48
“AI Researchers on AI Risk” (Alexander), 153
Alciné, Jacky, 60
Alexander, Scott, 146, 153, 169–70
algorithms, 33–34
Bayesian networks and, 275–77
Bayesian updating, 283, 284
bias and, 128–30
chess-playing, 62–63
coding of, 34
completeness theorem and, 51–52
computer hardware and, 34–35
content selection, 8–9, 105
deep learning, 58–59, 288–93
dynamic programming, 54–55
examples of common, 33–34
exponential complexity of problems and, 38–39
halting problem and, 37–38
lookahead search, 47, 49–50, 260–61
propositional logic and, 268–70
reinforcement learning, 55–57, 105
subroutines within, 34
supervised learning, 58–59, 285–93
Alibaba, 250
AlphaGo, 6, 46–48, 49–50, 55, 91, 92, 206–7, 209–10, 261, 265, 285
AlphaZero, 47, 48
altruism, 24, 227–29
altruistic AI, 173–75
Amazon, 106, 119, 250
Echo, 64–65
“Picking Challenge” to accelerate robot development, 73–74
Analytical Engine, 40
ants, 25
Aoun, Joseph, 123
Apple HomePod, 64–65
“Architecture of Complexity, The” (Simon), 265
Aristotle, 20–21, 39–40, 50, 52, 53, 114, 245
Armstrong, Stuart, 221
Arnauld, Antoine, 21–22
Arrow, Kenneth, 223
artificial intelligence (AI), 1–12
agent (See intelligent agent)
agent programs, 48–59
beneficial, principles for (See beneficial AI)
benefits to humans of, 98–102
as biggest event in human history, 1–4
conceptual breakthroughs required for (See conceptual breakthroughs required for superintelligent AI)
decision making on global scale, capability for, 75–76
deep learning and, 6
domestic robots and, 73–74
general-purpose, 46–48, 100, 136
global scale, capability to sense and make decisions on, 74–76
goals and, 41–42, 48–53, 136–42, 165–69
governance of, 249–53
health advances and, 101
history of, 4–6, 40–42
human preferences and (See human preferences)
imagining what superintelligent machines could do, 93–96
intelligence, defining, 39–61
intelligent personal assistants and, 67–71
limits of superintelligence, 96–98
living standard increases and, 98–100
logic and, 39–40
media and public perception of advances in, 62–64
misuses of (See misuses of AI)
mobile phones and, 64–65
multiplier effect of, 99
objectives and, 11–12, 43, 48–61, 136–42, 165–69
overly intelligent AI, 132–44
pace of scientific progress in creating, 6–9
predicting arrival of superintelligent AI, 76–78
reading capabilities and, 74–75
risk posed by (See risk posed by AI)
scale and, 94–96
scaling up sensory inputs and capacity for action, 94–95
self-driving cars and, 65–67, 181–82, 247
sensing on global scale, capability to, 75
smart homes and, 71–72
softbots and, 64
speech recognition capabilities and, 74–75
standard model of, 9–11, 13, 48–61, 247
Turing test and, 40–41
tutoring by, 100–101
virtual reality authoring by, 101
World Wide Web and, 64
“Artificial Intelligence and Life in 2030” (One Hundred Year Study on Artificial Intelligence), 149, 150
Asimov, Isaac, 141
assistance games, 192–203
learning preferences exactly in long run, 200–202
off-switch game, 196–200
paperclip game, 194–96
prohibitions and, 202–3
uncertainty about human objectives, 200–202
Association for the Advancement of Artificial Intelligence (AAAI), 250
assumption failure, 186–87
Atkinson, Robert, 158
Atlas humanoid robot, 73
autonomous weapons systems (LAWS), 110–13
autonomy loss problem, 255–56
Autor, David, 116
Avengers: Infinity War (film), 224
“avoid putting in human goals” argument, 165–69
axiomatic basis for utility theory, 23–24
axioms, 185
Babbage, Charles, 40, 132–33
backgammon, 55
Baidu, 250
Baldwin, James, 18
Baldwin effect, 18–20
Banks, Iain, 164
bank tellers, 117–18
Bayes, Thomas, 54
Bayesian logic, 54
Bayesian networks, 54, 275–77
Bayesian rationality, 54
Bayesian updating, 283, 284
Bayes theorem, 54
behavior, learning preferences from, 190–92
behavior modification, 104–7
belief state, 282–83
beneficial AI, 171–210, 247–49
caution regarding development of, reasons for, 179
data available for learning about human preferences, 180–81
economic incentives for, 179–80
evil behavior and, 179
learning to predict human preferences, 176–77
moral dilemmas and, 178
objective of AI is to maximize realization of human preferences, 173–75
principles for, 172–79
proofs for (See proofs for beneficial AI)
uncertainty as to what human preferences are, 175–76
values, defining, 177–78
Bentham, Jeremy, 24, 219
Berg, Paul, 182
Berkeley Robot for the Elimination of Tedious Tasks (BRETT), 73
Bernoulli, Daniel, 22–23
“Bill Gates Fears AI, but AI Researchers Know Better” (Popular Science), 152
blackmail, 104–5
blinking reflex, 57
blockchain, 161
board games, 45
Boole, George, 268
Boolean (propositional) logic, 51, 268–70
bootstrapping process, 81–82
Boston Dynamics, 73
Bostrom, Nick, 102, 144, 145, 150, 166, 167, 183, 253
brains, 16, 17–18
reward system and, 17–18
Summit machine, compared, 34
BRETT (Berkeley Robot for the Elimination of Tedious Tasks), 73
Brin, Sergey, 81
Brooks, Rodney, 168
Brynjolfsson, Erik, 117
Budapest Convention on Cybercrime, 253–54
Butler, Samuel, 133–34, 159
“can’t we just . . .” responses to risks posed by AI, 160–69
“. . . avoid putting in human goals,” 165–69
“. . . merge with machines,” 163–65
“. . . put it in a box,” 161–63
“. . . switch it off,” 160–61
“. . . work in human-machine teams,” 163
Cardano, Gerolamo, 21
caring professions, 122
Chace, Calum, 113
changes in human preferences over time, 240–45
Changing Places (Lodge), 121
checkers program, 55, 261
chess programs, 62–63
Chollet, François, 293
chunking, 295
circuits, 291–92
CNN, 108
CODE (Collaborative Operations in Denied Environments), 112
combinatorial complexity, 258
common operational picture, 69
compensation effects, 114–17
completeness theorem (Gödel’s), 51–52
complexity of problems, 38–39
Comprehensive Nuclear-Test-Ban Treaty (CTBT) seismic monitoring, 279–80
computer programming, 119
computers, 32–61
algorithms and (See algorithms)
complexity of problems and, 38–39
halting problem and, 37–38
hardware, 34–35
intelligent (See artificial intelligence)
limits of computation, 36–39
software limitations, 37
special-purpose devices, building, 35–36
universality and, 32
computer science, 33
“Computing Machinery and Intelligence” (Turing), 40–41, 149
conceptual breakthroughs required for superintelligent AI, 78–93
actions, discovering, 87–90
cumulative learning of concepts and theories, 82–87
language/common sense problem, 79–82
mental activity, managing, 90–92
consciousness, 16–17
consequentialism, 217–19
content selection algorithms, 8–9, 105
content shortcomings, of intelligent personal assistants, 67–68
control theory, 10, 44–45, 54, 176
convolutional neural networks, 47
cost function to evaluate solutions, and goals, 48
Credibility Coalition, 109
CRISPR-Cas9, 156
cumulative learning of concepts and theories, 82–87
cybersecurity, 186–87
Daily Telegraph, 77
decision making on global scale, 75–76
decoherence, 36
Deep Blue, 62, 261
deep convolutional network, 288–90
deep dreaming images, 291
deepfakes, 105–6
deep learning, 6, 58–59, 86–87, 288–93
DeepMind, 90
AlphaGo, 6, 46–48, 49–50, 55, 91, 92, 206–7, 209–10, 261, 265, 285
AlphaZero, 47, 48
DQN system, 55–56
deflection arguments, 154–59
“research can’t be controlled” arguments, 154–56
silence regarding risks of AI, 158–59
tribalism, 150, 159–60
whataboutery, 156–57
Delilah (blackmail bot), 105
denial of risk posed by AI, 146–54
“it’s complicated” argument, 147–48
“it’s impossible” argument, 149–50
“it’s too soon to worry about it” argument, 150–52
Luddism accusation and, 153–54
“we’re the experts” argument, 152–54
deontological ethics, 217
dexterity problem, robots, 73–74
Dickinson, Michael, 190
Dickmanns, Ernst, 65
DigitalGlobe, 75
domestic robots, 73–74
dopamine, 17, 205–6
Dota 2, 56
DQN system, 55–56
Dune (Herbert), 135
dynamic programming algorithms, 54–55
E. coli, 14–15
eBay, 106
ECHO (first smart home), 71
“Economic Possibilities for Our Grandchildren” (Keynes), 113–14, 120–21
The Economic Singularity: Artificial Intelligence and the Death of Capitalism (Chace), 113
Economist, The, 145
Edgeworth, Francis, 238
Eisenhower, Dwight, 249
electrical action potentials, 15
Eliza (first chatbot), 67
Elmo (shogi program), 47
Elster, Jon, 242
Elysium (film), 127
emergency braking, 57
enfeeblement of humans problem, 254–55
envy, 229–31
Epicurus, 219
equilibrium solutions, 30–31, 195–96
Erewhon (Butler), 133–34, 159
Etzioni, Oren, 152, 157
eugenics movement, 155–56
expected value rule, 22–23
experience, learning from, 285–95
experiencing self, and preferences, 238–40
explanation-based learning, 294–95
Facebook, 108, 250
Fact, Fiction and Forecast (Goodman), 85
fact-checking, 108–9, 110
factcheck.org, 108
fear of death (as an instrumental goal), 140–42
feature engineering, 84–85
Fermat, Pierre de, 185
Fermat’s Last Theorem, 185
Ferranti Mark I, 34
Fifth Generation project, 271
firewalling AI systems, 161–63
first-order logic, 51, 270–72
probabilistic languages and, 277–80
propositional logic distinguished, 270
Ford, Martin, 113
Forster, E. M., 254–55
Fox News, 108
Frege, Gottlob, 270
Full, Bob, 190
G7, 250–51
Galileo Galilei, 85–86
gambling, 21–23
game theory, 28–32. See also assistance games
Gates, Bill, 56, 153
GDPR (General Data Protection Regulation), 127–29
Geminoid DK (robot), 125
General Data Protection Regulation (GDPR), 127–29
general-purpose artificial intelligence, 46–48, 100, 136
geometric objects, 33
Glamour, 129
Global Learning XPRIZE competition, 70
Go, 6, 46–47, 49–50, 51, 55, 56
combinatorial complexity and, 259–61
propositional logic and, 269
supervised learning algorithm and, 286–87
thinking, learning from, 293–95
goals, 41–42, 48–53, 136–42, 165–69
God and Golem (Wiener), 137–38
Gödel, Kurt, 51, 52
Goethe, Johann Wolfgang von, 137
Good, I. J., 142–43, 153, 208–9
Goodhart’s law, 77
Goodman, Nelson, 85
Good Old-Fashioned AI (GOFAI), 271
Google, 108, 112–13
DeepMind (See DeepMind)
Home, 64–65
misclassifying people as gorillas in Google Photo, 60
tensor processing units (TPUs), 35
gorilla problem, 132–36
governance of AI, 249–53
governmental reward and punishment systems, 106–7
Great Decoupling, 117
greed (as an instrumental goal), 140–42
Grice, H. Paul, 205
Gricean analysis, 205
halting problem, 37–38
hand construction problem, robots, 73
Hardin, Garrett, 31
hard takeoff scenario, 144
Harop (missile), 111
Harsanyi, John, 220, 229
Hassabis, Demis, 271–72, 293
Hawking, Stephen, 4, 153
health advances, 101
He Jiankui, 156
Herbert, Frank, 135
hierarchy of abstract actions, 87–90, 265–66
High-Level Expert Group on Artificial Intelligence (EU), 251
Hillarp, Nils-Åke, 17
Hinton, Geoff, 290
Hirsch, Fred, 230
Hobbes, Thomas, 246
Howard’s End (Forster), 254
Huffington Post, 4
human germline alteration, ban on, 155–56
human–machine teaming, 163–65
human preferences, 211–45
behavior, learning preferences from, 190–92
beneficial AI and, 172–77
changes in, over time, 240–45
different people, learning to make trade-offs between preferences of, 213–27
emotions and, 232–34
errors as to, 236–37
of experiencing self, 238–40
heterogeneity of, 212–13
loyal AI, 215–17
modification of, 243–45
of nice, nasty and envious humans, 227–31
of remembering self, 238–40
stupidity and, 232–34
transitivity of, 23–24
uncertainty and, 235–37
updates in, 241–42
utilitarian AI (See utilitarianism/utilitarian AI)
utility theory and, 23–27
human roles, takeover of, 124–31
Human Use of Human Beings (Wiener), 137
humble AI, 175–76
Hume, David, 167, 287–88
IBM, 62, 80, 250
ideal utilitarianism, 219
IEEE (Institute of Electrical and Electronics Engineers), 250
ignorance, 52–53
imitation game, 40–41
inceptionism images, 291
inductive logic programming, 86
inductive reasoning, 287–88
inputs, to intelligent agents, 42–43
instinctive organisms, 18–19
Institute of Electrical and Electronics Engineers (IEEE), 250
instrumental goal, 141–42, 196
insurance underwriters, 119
intelligence, 13–61
action potentials and, 15
brains and, 16, 17–18
computers and, 39–61
consciousness and, 16–17
E. coli and, 14–15
evolutionary origins of, 14–18
learning and, 15, 18–20
nerve nets and, 16
practical reasoning and, 20
rationality and, 20–32
standard model of, 9–11, 13, 48–61, 247
successful reasoning and, 20
intelligence agencies, 104
intelligence explosions, 142–44, 208–9
intelligent agent, 42–48
actions generated by, 48
agent programs and, 48–59
defined, 42
design of, and problem types, 43–45
environment and, 43, 44, 45–46
inputs to, 42–43
multi-agent cooperation design, 94
objectives and, 43, 48–61
reflex, 57–59
intelligent computers. See artificial intelligence (AI)
intelligent personal assistants, 67–71, 101
commonsense modeling and, 68–69
design template for, 69–70
education systems, 70
health systems, 69–70
personal finance systems, 70
privacy considerations, 70–71
shortcomings of early systems, 67–68
stimulus–response templates and, 67
understanding content, improvements in, 68
International Atomic Energy Agency, 249
Internet of Things (IoT), 65
interpersonal services as the future of employment, 122–24
algorithmic bias and, 128–30
decisions affecting people, use of machines in, 126–28
robots built in humanoid form and, 124–26
intractable problems, 38–39
inverse reinforcement learning, 191–93
IQ, 48
Ishiguro, Hiroshi, 125
is-ought problem, 167
“it’s complicated” argument, 147–48
“it’s impossible” argument, 149–50
“it’s too soon to worry about it” argument, 150–52
jellyfish, 16
Jeopardy! (tv show), 80
Jevons, William Stanley, 222
JiaJia (robot), 125
jian ai, 219
Kahneman, Daniel, 238–40
Kasparov, Garry, 62, 90, 261
Ke Jie, 6
Kelly, Kevin, 97, 148
Kenny, David, 153, 163
Keynes, John Maynard, 113–14, 120–21, 122
King Midas problem, 136–40
Kitkit School (software system), 70
knowledge, 79–82, 267–72
knowledge-based systems, 50–51
Krugman, Paul, 117
Kurzweil, Ray, 163–64
language/common sense problem, 79–82
Laplace, Pierre-Simon, 54
Laser-Interferometer Gravitational-Wave Observatory (LIGO), 82–84
learning, 15
behavior, learning preferences from, 190–92
bootstrapping process, 81–82
culture and, 19
cumulative learning of concepts and theories, 82–87
data-driven view of, 82–83
deep learning, 6, 58–59, 84, 86–87, 288–93
as evolutionary accelerator, 18–20
from experience, 285–93
explanation-based learning, 294–95
feature engineering and, 84–85
inverse reinforcement learning, 191–93
reinforcement learning, 17, 47, 55–57, 105, 190–91
supervised learning, 58–59, 285–93
from thinking, 293–95
LeCun, Yann, 47, 165
legal profession, 119
lethal autonomous weapons systems (LAWS), 110–13
Life 3.0 (Tegmark), 114, 138
LIGO (Laser-Interferometer Gravitational-Wave Observatory), 82–84
living standard increases, and AI, 98–100
Lloyd, Seth, 37
Lloyd, William, 31
Llull, Ramon, 40
Lodge, David, 1
logic, 39–40, 50–51, 267–72
Bayesian, 54
defined, 267
first-order, 51–52, 270–72
formal language requirement, 267
ignorance and, 52–53
programming, development of, 271
propositional (Boolean), 51, 268–70
lookahead search, 47, 49–50, 260–61
loophole principle, 202–3, 216
Lovelace, Ada, 40, 132–33
loyal AI, 215–17
Luddism accusation, 153–54
machines, 33
“Machine Stops, The” (Forster), 254–55
machine translation, 6
McAfee, Andrew, 117
McCarthy, John, 4–5, 50, 51, 52, 53, 65, 77
malice, 228–29
malware, 253
map navigation, 257–58
mathematical proofs for beneficial AI, 185–90
mathematics, 33
matrices, 33
Matrix, The (film), 222, 235
MavHome project, 71
mechanical calculator, 40
mental security, 107–10
“merge with machines” argument, 163–65
metareasoning, 262
Methods of Ethics, The (Sidgwick), 224–25
Microsoft, 250
TrueSkill system, 279
Mill, John Stuart, 217–18, 219
Minsky, Marvin, 4–5, 76, 153
misuses of AI, 103–31, 253–54
behavior modification, 104–7
blackmail, 104–5
deepfakes, 105–6
governmental reward and punishment systems, 106–7
intelligence agencies and, 104
interpersonal services, takeover of, 124–31
lethal autonomous weapons systems (LAWS), 110–13
mental security and, 107–10
work, elimination of, 113–24
mobile phones, 64–65
monotonicity and, 24
Moore, G. E., 219, 221, 222
Moore’s law, 34–35
Moravec, Hans, 144
Morgan, Conway Lloyd, 18
Morgenstern, Oskar, 23
Mozi (Mozi), 219
multi-agent cooperation design, 94
Musk, Elon, 153, 164
“Myth of Superhuman AI, The” (Kelly), 148
narrow (tool) artificial intelligence, 46, 47, 136
Nash, John, 30, 195
Nash equilibrium, 30–31, 195–96
National Institutes of Health (NIH), 155
negative altruism, 229–30
NELL (Never-Ending Language Learning) project, 81
nerve nets, 16
NET-VISA, 279–80
Network Enforcement Act (Germany), 108, 109
neural dust, 164–65
Neuralink Corporation, 164
neural lace, 164
neural networks, 288–89
neurons, 15, 16, 19
Never-Ending Language Learning (NELL) project, 81
Newell, Allen, 295
Newton, Isaac, 85–86
New Yorker, The, 88
Ng, Andrew, 151, 152
Norvig, Peter, 2, 62–63
no suicide rule, 287
Nozick, Robert, 223
nuclear industry, 157, 249
nuclear physics, 7–8
Nudge (Thaler & Sunstein), 244
objectives, 11–12, 43, 48–61, 136–42, 165–69. See also goals
off-switch game, 196–200
onebillion (software system), 70
One Hundred Year Study on Artificial Intelligence (AI100), 149, 150
OpenAI, 56
operations research, 10, 54, 176
Oracle AI systems, 161–63
orthogonality thesis, 167–68
Ovadya, Aviv, 108
overhypothesis, 85
overly intelligent AI, 132–44
fear and greed, 140–42
gorilla problem, 132–36
intelligence explosions and, 142–44, 208–9
King Midas problem, 136–40
paperclip game, 194–96
Parfit, Derek, 225
Partnership on AI, 180, 250
Pascal, Blaise, 21–22, 40
Passage to India, A (Forster), 254
Pearl, Judea, 54, 275
Perdix (drone), 112
Pinker, Steven, 158, 165–66, 168
Planet (satellite corporation), 75
Politics (Aristotle), 114
Popper, Karl, 221–22
Popular Science, 152
positional goods, 230–31
practical reasoning, 20
pragmatics, 204
preference autonomy principle, 220, 241
preferences. See human preferences
preference utilitarianism, 220
Price, Richard, 54
pride, 230–31
Primitive Expounder, 133
prisoner’s dilemma, 30–31
privacy, 70–71
probability theory, 21–22, 273–84
Bayesian networks and, 275–77
first-order probabilistic languages, 277–80
independence and, 274
keeping track of not directly observable phenomena, 280–84
probabilistic programming, 54–55, 84, 279–80
programming language, 34
programs, 33
prohibitions, 202–3
Project Aristo, 80
Prolog, 271
proofs for beneficial AI
assistance games, 184–210, 192–203
learning preferences from behavior, 190–92
mathematical guarantees, 185–90
recursive self-improvement and, 208–10
requests and instructions, interpretation of, 203–5
wireheading problem and, 205–8
propositional logic, 51, 268–70
Putin, Vladimir, 182, 183
“put it in a box” argument, 161–63
puzzles, 45
quantum computation, 35–36
qubit devices, 35–36
randomized strategy, 29
rationality
Aristotle’s formulation of, 20–21
Bayesian, 54
critiques of, 24–26
expected value rule and, 22–23
gambling and, 21–23
game theory and, 28–32
inconsistency in human preferences, and developing theory of beneficial AI, 26–27
logic and, 39–40
monotonicity and, 24
Nash equilibrium and, 30–31
preferences and, 23–27
probability and, 21–22
randomized strategy and, 29
for single agent, 20–27
transitivity and, 23–24
for two agents, 27–32
uncertainty and, 21
utility theory and, 22–26
rational metareasoning, 262
reading capabilities, 74–75
real-world decision problem
complexity and, 39
Reasons and Persons (Parfit), 225
Recombinant DNA Advisory Committee, 155
recombinant DNA research, 155–56
recursive self-improvement, 208–10
redlining, 128
reflex agents, 57–59
reinforcement learning, 17, 47, 55–57, 105, 190–91
remembering self, and preferences, 238–40
Repugnant Conclusion, 225
reputation systems, 108–9
“research can’t be controlled” arguments, 154–56
retail cashiers, 117–18
reward function, 53–54, 55
reward system, 17
Rise of the Robots: Technology and the Threat of a Jobless Future (Ford), 113
risk posed by AI, 145–70
deflection arguments, 154–59
denial of problem, 146–54
Robinson, Alan, 5
Rochester, Nathaniel, 4–5
Rutherford, Ernest, 7, 77, 85–86, 150
Sachs, Jeffrey, 230
sadism, 228–29
Salomons, Anna, 116
Samuel, Arthur, 5, 10, 55, 261
Sargent, Tom, 191
scalable autonomous weapons, 112
Schwab, Klaus, 117
Second Machine Age, The (Brynjolfsson & McAfee), 117
Sedol, Lee, 6, 47, 90, 91, 261
seismic monitoring system (NET-VISA), 279–80
self-driving cars, 65–67, 181–82, 247
performance requirements for, 65–66
potential benefits of, 66–67
probabilistic programming and, 281–82
sensing on global scale, 75
sets, 33
Shakey project, 52
Shannon, Claude, 4–5, 62
Shiller, Robert, 117
side-channel attacks, 187, 188
Sidgwick, Henry, 224–25
silence regarding risks of AI, 158–59
Simon, Herbert, 76, 86, 265
simulated evolution of programs, 171
SLAM (simultaneous localization and mapping), 283
Slate Star Codex blog, 146, 169–70
Slaughterbot, 111
Small World (Lodge), 1
Smart, R. N., 221–22
smart homes, 71–72
Smith, Adam, 227
snopes.com, 108
social aggregation theorem, 220–21
Social Limits to Growth, The (Hirsch), 230
social media, and content selection algorithms, 8–9
softbots, 64
software systems, 248
solutions, searching for, 257–66
abstract planning and, 264–66
combinatorial complexity and, 258
computational activity, managing, 261–62
15-puzzle and, 258
Go and, 259–61
map navigation and, 257–58
motor control commands and, 263–64
24-puzzle and, 258
“Some Moral and Technical Consequences of Automation” (Wiener), 10
Sophia (robot), 126
specifications (of programs), 248
“Speculations Concerning the First Ultraintelligent Machine” (Good), 142–43
speech recognition, 6
speech recognition capabilities, 74–75
Spence, Mike, 117
SpotMini, 73
SRI, 41–42, 52
standard model of intelligence, 9–11, 13, 48–61, 247
StarCraft, 45
Stasi, 103–4
stationarity, 24
statistics, 10, 176
Steinberg, Saul, 88
stimulus–response templates, 67
Stockfish (chess program), 47
striving and enjoying, relation between, 121–22
subroutines, 34, 233–34
Summers, Larry, 117, 120
Summit machine, 34, 35, 37
Sunstein, Cass, 244
Superintelligence (Bostrom), 102, 145, 150, 167, 183
supervised learning, 58–59, 285–93
surveillance, 104
Sutherland, James, 71
“switch it off” argument, 160–61
synapses, 15, 16
Szilard, Leo, 8, 77, 150
tactile sensing problem, robots, 73
Taobao, 106
technological unemployment. See work, elimination of
Tegmark, Max, 4, 114, 138
Tellex, Stephanie, 73
Tencent, 250
tensor processing units (TPUs), 35
Terminator (film), 112, 113
Tesauro, Gerry, 55
Thaler, Richard, 244
Theory of the Leisure Class, The (Veblen), 230
Thinking, Fast and Slow (Kahneman), 238
thinking, learning from, 293–95
Thornton, Richard, 133
Times, 7, 8
tool (narrow) artificial intelligence, 46, 47, 136
TPUs (tensor processing units), 35
tragedy of the commons, 31
Transcendence (film), 3–4, 141–42
transitivity of preferences, 23–24
Treatise of Human Nature, A (Hume), 167
tribalism, 150, 159–60
truck drivers, 119
TrueSkill system, 279
Tucker, Albert, 30
Turing, Alan, 32, 33, 37–38, 40–41, 124–25, 134–35, 140–41, 144, 149, 153, 160–61
Turing test, 40–41
tutoring, 100–101
tutoring systems, 70
2001: A Space Odyssey (film), 141
Uber, 57, 182
UBI (universal basic income), 121
uncertainty
AI uncertainty as to human preferences, principle of, 53, 175–76
human uncertainty as to own preferences, 235–37
probability theory and, 273–84
United Nations (UN), 250
universal basic income (UBI), 121
Universal Declaration of Human Rights (1948), 107
universality, 32–33
universal Turing machine, 33, 40–41
unpredictability, 29
utilitarian AI, 217–27
Utilitarianism (Mill), 217–18
utilitarianism/utilitarian AI, 214
challenges to, 221–27
consequentialist AI, 217–19
ideal utilitarianism, 219
interpersonal comparison of utilities, debate over, 222–24
multiple people, maximizing sum of utilities of, 219–26
preference utilitarianism, 220
social aggregation theorem and, 220
Somalia problem and, 226–27
utility comparison across populations of different sizes, debate over, 224–25
utility function, 53–54
utility monster, 223–24
utility theory, 22–26
axiomatic basis for, 23–24
objections to, 24–26
value alignment, 137–38
Vardi, Moshe, 202–3
Veblen, Thorstein, 230
video games, 45
virtual reality authoring, 101
virtue ethics, 217
visual object recognition, 6
von Neumann, John, 23
W3C Credible Web group, 109
WALL-E (film), 255
Watson, 80
wave function, 35–36
“we’re the experts” argument, 152–54
white-collar jobs, 119
Whitehead, Alfred North, 88
whole-brain emulation, 171
Wiener, Norbert, 10, 136–38, 153, 203
Wilczek, Frank, 4
Wiles, Andrew, 185
wireheading, 205–8
work, elimination of, 113–24
caring professions and, 122
compensation effects and, 114–17
historical warnings about, 113–14
income distribution and, 123
occupations at risk with adoption of AI technology, 118–20
reworking education and research institutions to focus on human world, 123–24
striving and enjoying, relation between, 121–22
universal basic income (UBI) proposals and, 121
wage stagnation and productivity increases, since 1973, 117
“work in human–machine teams” argument, 163
World Economic Forum, 250
World Wide Web, 64
Worshipful Company of Scriveners, 109
Zuckerberg, Mark, 157
About the Author
Stuart Russell is a professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley. He has served as the Vice-Chair of the World Economic Forum's Council on AI and Robotics and as an advisor to the United Nations on arms control. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. He is the author (with Peter Norvig) of the definitive and universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach.
2b18e18d-f0ef-4dcf-b50e-223bc97a1d42 | trentmkelly/LessWrong-43k | LessWrong | How likely are scenarios where AGI ends up overtly or de facto torturing us?
How likely are scenarios where AGI prevents us from committing suicide or dying?
I’m a big life extension supporter but being unable to choose to die ever is a literal hell. As dark as it is, if these scenarios are likely, it seems the rational thing to do is die before AGI comes.
Killing all of humanity is bad enough, but how concerned should we be about even worse scenarios? |
e0244e91-a993-4d80-9cd9-890a8ded9f0f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Berlin: Fermi paradox discussion
Discussion article for the meetup : Berlin: Fermi paradox discussion
WHEN: 18 October 2013 07:00:00PM (+0200)
WHERE: S Wuhletal, 12621 Berlin
We're meeting to discuss the Fermi paradox and the great filter. Everyone is welcome! There'll probably be 6-12 people.
Please look at the mailing list for details.
20806784-7355-4a54-937e-993b016d1447 | trentmkelly/LessWrong-43k | LessWrong | Zen and Rationality: Don't Know Mind
This is post 1/? about the intersection of my decades of LW-style rationality practice and my several years of Zen practice.
In today's installment, I look at the Zen notion of "Don't Know Mind" in rationalist terms.
I'm a little unsure where "don't know mind" comes from. Sensei Google suggests it might be the Seon expression of the idea that in Zen is said "shoshin", often translated as "beginner's mind" but also carrying notions conveyed by translating it as "original mind", "naive mind", "novice mind", and "inexperienced mind" (noting that the character rendered "mind" is better translated as "heart-mind"). There's also a beloved koan often called "not knowing is most intimate" (Book of Equanimity, Case 20), and "don't know" is a good name to put to a particular insight you might have if you meditate enough. Regardless, "don't know mind" is a thing Zen practitioners sometimes say. What does it mean?
Depends on how you parse it.
The standard parsing is as "don't-know mind", as in the mind that doesn't know. This fits with the notion of shoshin or beginner's mind, that is, the mind that has not yet made itself up. In standard rationalist terms, this is the heart-mind that is curious, relinquishing, light, even, simple, humble, and nameless. Saying "don't know" is tricky, though, because there's the looming trap of the "don't know" that stops curiosity. Instead, this is the "don't know" that extends an open invitation to learn more.
You can also parse it as a command: "do not know" (since Zen falls within the Buddhist tradition that claims all you know is mind, "mind" is redundant here). This isn't an imperative to never know anything. Rather it's an encouragement to gaze beyond form into emptiness since our minds are often caught up in form (the map) and fail to leave space for emptiness (the territory[1]). More specifically, to know is to make distinctions, to construct an abstraction, to prune, to have a prior over your observations, to give something a name, t
c94d0db6-58d2-4143-8fca-2968a0624da0 | trentmkelly/LessWrong-43k | LessWrong | The Direct Democracy Experiment
>"The heart of the problem is not how we vote for officials - it's that we vote for officials, instead of getting to vote on issues.
>Americans are proud of being "governed by the people", yet a citizen has no effective way to have any influence on any particular issue! If it's very important to me to promote gay rights or environmental responsibility, I'm supposed to vote for a Democrat? How effective is that?
>We need to ditch representative democracy if we want democracy. (The next question is whether we want democracy.)"---PhilGoetz
The main problem with direct democracy is that we are reliant on "the people", who may be ill-informed and not make correct choices on issue questions. With a representative democracy, you may have intelligent and rational actors who would make better policy choices. PhilGoetz may disagree though, and believe that it is important to enfranchise "the people" in policymaking...
Rather than rely on philosophical discussion based on values, I propose an experiment to find out if PhilGoetz' Direct Democracy works.
I start up a simulation (which I will not name so that you don't play the simulation ahead of time). I will give you Policy Questions based on the simulation, where you will simply vote Yes or No. Majority rules. (To make it more interesting, I'll have each vote represent a random "interest group", with control over entire voting blocs.) Anybody can change their vote at any time. If people don't have the time to vote, they can develop a "profile" which would allow them to vote by proxy. Voting will end after a specific period of time, or the moment the vote crosses over the majority threshold, and stays over for a required period of time.
The simulation will end in a war against an NPC country. If you "win" this war, you win the simulation, the Direct Democracy works, and then future experiments could lead to people comparing the effectiveness of different types of "democracy" in creating good policy. If you "lose" the war |
978d4c92-ea24-4dd1-9463-353c53fcda25 | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Signaling Game Approach to Databases Querying and Interaction
1 Introduction
---------------
Most users do not know the structure and/or content of databases
and cannot precisely express their queries [[27](#bib.bib27), [10](#bib.bib10), [15](#bib.bib15), [26](#bib.bib26)]. Hence, it is challenging for a database management system (DBMS), to understand and satisfy users’ information needs. Database researchers have proposed methods and systems to help users specify their queries more precisely and DBMS understand users’ intents more accurately [[37](#bib.bib37), [26](#bib.bib26), [10](#bib.bib10), [39](#bib.bib39), [27](#bib.bib27), [18](#bib.bib18), [6](#bib.bib6)].
In particular, the database theory community has deeply
investigated some problems that appear in the context of database usability [[50](#bib.bib50), [16](#bib.bib16), [30](#bib.bib30), [15](#bib.bib15), [2](#bib.bib2), [7](#bib.bib7), [42](#bib.bib42)]. Current models mainly focus on improving user satisfaction for a single information need. Given a user’s information need e, the DBMS estimates e by various methods including showing potential results to the user and collecting her feedback [[33](#bib.bib33), [12](#bib.bib12), [7](#bib.bib7), [42](#bib.bib42), [48](#bib.bib48)], asking questions from her [[2](#bib.bib2)], or suggesting potential queries to her [[29](#bib.bib29)].
Nevertheless, many users may explore a database to find answers for various queries and information needs over a rather long period of time. Further, the user may seek the answers to the same query more than once. For example, a scientist may use a genetic database to find answers to many queries, some of which may repeat multiple times over the life of a research project. For these users, database querying is an inherently interactive and continuing process.
This setting extends the problem of answering a single information need in two aspects. First, the DBMS can improve its understanding of how the user expresses her intents progressively over the course of many potentially repeating queries.
The DBMS may learn the desired answers for a submitted query q
from the user’s explicit or implicit feedback on the
returned results.
Types of the user’s feedback may include clicking on
the relevant answers [[22](#bib.bib22), [52](#bib.bib52)],
the amount of time the user spends on reading
the results [[21](#bib.bib21)],
or user’s eye movements [[25](#bib.bib25)].
Second, the user may leverage her experience from previous interactions with the DBMS to formulate her future queries.
As the user submits queries and inspects or consumes their results, she may gain a better understanding of the database content, which may impact how she formulates queries in the future. Researchers have observed a similar behavior in searching text documents [[34](#bib.bib34), [35](#bib.bib35)]. For example, to find the answers for a particular information need, the user may submit some initial and underspecified query, observe its results, and reformulate it according to her observations. After the user finds a query that effectively expresses a frequent intent, she may store and use it later to reduce the burden of query formulation in the future.
Ideally, we would like the user and DBMS to gradually develop some degree of mutual understanding over the course of several queries and interactions: the DBMS should better understand how the user expresses her intents and the user may get more familiar with the content of the database. Of course, the user and DBMS would like to establish a perfect or near-perfect mutual understanding, where the DBMS returns all or a majority of the desired answers to all or most user queries. An important and interesting question is whether there are inherent limits to establishing these mutual understandings. In other words, one would like to know the degrees to which these mutual understandings are possible. One would also like to explore the characteristics of information needs, queries, and behavior of the DBMS at such limits. Moreover, it is useful from the practical point of view to find the methods and strategies that the DBMS can adapt to help establish more effective mutual understandings in a rather small amount of time.
To answer the aforementioned questions, one has to precisely define the notions of collaboration and mutual understanding between the user and DBMS.
We propose a novel framework that formalizes the collaboration between user and the DBMS over the course of many interactions. Our framework models this collaboration as a game between two active and potentially rational agents: the user and DBMS. The common goal of the user and DBMS is to reach a mutual understanding on expressing information needs in the form of queries. The user informs the DBMS of her intents by submitting queries. The DBMS returns some results for the query. Both players receive some reward based on the degree by which the returned answers satisfy the information need behind the query. We use standard effectiveness measures from database systems and information retrieval domains to measure the degree of information need satisfaction [[38](#bib.bib38)]. The equilibria and stable states of the game model the inherent limits in establishing a mutual understanding between the user and DBMS. We further explore the strategies that a DBMS may adapt to improve user satisfaction. We believe that this framework naturally models the long term interactions between the user and DBMS and provides the basis for deep theoretical investigations of the problem.
More specifically, we make the following contributions:
* We model the long term interaction between the user and DBMS as a particular type of game called a signaling game in Section [3](#S3 "3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction") [[11](#bib.bib11), [40](#bib.bib40)]. The user’s strategy in the game is a stochastic mapping from her intents to queries, which reflects the user’s decision in choosing a query to express an information need. The DBMS strategy is a stochastic mapping from queries to results. After each interaction, the DBMS may update its strategy based user feedback and/or the user may update its strategy based on the returned results. Since users’ degree of rationality is not generally known, we explore the properties of this game for both cases where the user strategy is modified and remains unchanged.
* We analyze the equilibria of the game and show that the game has some Nash and strict Nash equilibria in Section [4](#S4 "4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"). We find the number of Nash and strict Nash equilibria in some interesting settings. We also show
that the Nash and strict Nash equilibria of the game may not
provide a desired level of mutual understanding between the user and DBMS. Hence, the game may stabilize in an undesirable
state.
* We propose a reinforcement learning rule for an arbitrary effectiveness measure that updates the DBMS strategy in Section [5](#S5 "5 Adaptation Mechanisms for DBMS ‣ A Signaling Game Approach to Databases Querying and Interaction"). We prove that this learning strategy for the DBMS improves the effectiveness of answering queries stochastically speaking, and converges almost surely. Our proof shows that the proposed learning rule is robust to the choice of effectiveness measure which is a highly desirable property. We extend our results for the cases where both DBMS and user adapt their strategies during their interaction.
2 Related Work
---------------
Signaling games model communication between two or more agents and have been widely used in economics, sociology, biology, and linguistics [[32](#bib.bib32), [11](#bib.bib11), [40](#bib.bib40), [13](#bib.bib13)].
Generally speaking, in a signaling game a player observes the current state of the world and informs
the other player(s) by sending a signal. The other player interprets the signal and makes
a decision and/or performs an action that affect the payoff of both players.
A signaling game need not be cooperative; in a non-cooperative signaling game the interests of the players do not coincide [[11](#bib.bib11)].
Our framework extends a particular category of signaling games called language
games [[49](#bib.bib49), [40](#bib.bib40), [13](#bib.bib13)] and is closely
related to learning in signaling games [[24](#bib.bib24), [47](#bib.bib47), [17](#bib.bib17)].
These games have been used to model the evolution of a population’s language in a shared environment.
In a language game, the strategy of each player is a stochastic mapping between a set of signals and a
set of states. Each player observes its internal state, picks a signal according to its strategy, and sends
the signal to inform other player(s) about its state. If the other player(s) interpret the correct state
from the sent signal, the communication is successful and both players will be rewarded. Our framework, however,
differs from language games in several fundamental aspects.
First, in a language game every player signals, but only one of our players, i.e., the user,
sends signals.
Second, language games model states as an unstructured set of objects.
However, each user’s intent in our framework is a set of tuples and different intents may intersect.
Third, we use widely used similarity functions between the desired and returned answers for a
query to measure the degree of success of answering a query.
Fourth, the signals in language games do not possess any particular meaning and can be assigned
to every state. A database query, however, restricts its possible answers.
Finally, there is no work on language games that analyzes the dynamics of reinforcement
learning where players learn at different time scales.
Game theoretic approaches have been used in various areas of computer science, such as distributed systems, planning, security, and data mining [[3](#bib.bib3), [31](#bib.bib31), [19](#bib.bib19), [44](#bib.bib44), [43](#bib.bib43), [20](#bib.bib20), [36](#bib.bib36)].
In particular, researchers have recently applied game theoretic approaches to model the actions taken by users and document
retrieval systems in a single session [[34](#bib.bib34)]. They propose a framework to find out whether the user likes to continue exploring the
current topic or move to another topic. We, however, explore the development of common representations of intents between
the user and DBMS. We also investigate the querying and interactions that may span multiple queries and sessions.
Moreover, we analyze equilibria of the game and convergence rates of some strategy adaptation methods for the user and DBMS.
Finally, we focus on structured rather than unstructured data. Avestani et al. have used signaling games to create a shared lexicon between multiple
autonomous systems [[5](#bib.bib5)]. We, however, focus on modeling users’ information needs and development of mutual understanding between users and the DBMS. Moreover, as opposed to the autonomous systems, a DBMS and user may update their information about the interaction
in different time scales. We also provide a rigorous analysis of the game equilibria and propose novel strategy
adaptation mechanisms for the DBMS.
We have proposed the possibility of using signaling games to model the interaction between users and DBMS
and provided an initial result in a short paper in [[46](#bib.bib46)].
The current paper precisely formalizes and significantly develops our framework, provides new results, and
extends our previous result for practical settings.
First, we precisely define the concepts in our framework, e.g., intent and query, and their relationships.
Second, we define the equilibria of the game, analyze their properties,
and show some equilibria are desirable.
Third, our previous work models a database as a set of tuples and
assumes that the desired answer for each query is a single tuple.
These assumptions do not generally hold in practice.
In this paper, we analyze both the equilibria and learning algorithms for the game
where the answers to a query are a set or ranked lists of tuples.
Usable query interfaces usually only partially
satisfy the user’s information needs. However, authors in
[[46](#bib.bib46)] assume that a result either fully satisfies the user’s information need
or does not contain information relevant to the submitted intent.
In this paper, we model the cases where results partially satisfy user’s intents.
We show that the ability to return results that partially match the desired answers of a query significantly impacts the
equilibria of the game and the algorithms for updating the strategy of the DBMS.
Finally, we rigorously analyze our proposed algorithm for updating DBMS strategy where
user also changes her strategy in a possibly different time scale than the DBMS.
3 Signaling Game Model
-----------------------
###
3.1 Background
Let Attr be a set of symbols that contains the names of attributes [[1](#bib.bib1)].
Each relation schema R is
a non-empty finite subset of Attr.
The arity of R is the number of attributes in R.
Let D be a finite or countably infinite domain of values.
Each member of D is called a constant.
A relation IR of relation schema R with arity k
is a finite subset of Dk.
Each member of IR is called a tuple.
We call the set of all constants that appear in
IR its active domain and denote it as adom(IR).
Database schema (schema) R is a non-empty finite set of relation schemas.
A database instance of schema R is a mapping IR whose
domain is R and IR(R) is a relation of R for
all R∈R. The definition of active domain naturally extends to
database instance I. We denote the active domain of I
as adom(I).
An atom is a formula in the form of R(v1,…,vn) where R is a relation, n is the arity of R, and each vi, 1≤i≤n, is a variable or a constant. A literal is an atom, or the negation of an atom. A rule has the form of
ans(v)←L1(v1),⋯,Lm(vm),
where Li, 1≤i≤m, are literals and ans(v) is an atom, which is called the head of the rule. Each variable in v occurs in at least one of v1, ⋯, vm. Each rule has a finite set of literals. A query is a (finite) set of rules that share the same head. A query is over schema R if and only if all its non-head literals are relations or the negation of relations in R.
In this paper, we consider only domain-independent queries [[1](#bib.bib1)].
We show the results of evaluating query q over database instance IR as q(IR) [[1](#bib.bib1)].
Our results can be extended to more expressive relational query languages.
For a positive integer m≥1, we denote [m]:={1,…,m}. For a vector u∈Rm, we denote the ith entry of u by ui. Similarly, we denote the (i,j)th entry of an m×n matrix U (i∈[m], j∈[n]) by Uij. We also say that U is a row-stochastic matrix (or simply a stochastic matrix) if it is non-negative (i.e., Uij≥0 for all i,j) and ∑_{j=1}^{n} Uij = 1 for all i∈[m]. We denote the set of all m×n stochastic matrices by Lmn. For an event A of a probability space Ω, we use 1A for the indicator function on the set A, i.e., 1A(ω)=1 if ω∈A and 1A(ω)=0 if ω∉A.
###
3.2 Intent, Query, and Result Paradigm
####
3.2.1 Intent & Query
Generally speaking, the user’s intent, i.e., her information need, is a query
that the user would like to ask a DBMS [[38](#bib.bib38), [27](#bib.bib27)].
However, due to the user’s lack of knowledge about the content or
schema of the database, she may submit her intent using some queries
that do not precisely reflect the intent
[[27](#bib.bib27), [9](#bib.bib9), [29](#bib.bib29), [15](#bib.bib15)].
More formally, we assume that the intent e and its query q belong to the
same fixed query language where we may have e≠q.
Our results are orthogonal to the query language of the intents and queries.
In what follows, we provide examples of intent and query paradigms for some query
interfaces. Let us consider relational query language interfaces,
e.g., SQL interface.
Researchers have observed that many users of these interfaces
are not sufficiently familiar with
the database content and/or schema and may
submit queries that do not precisely reflect their intents
[[9](#bib.bib9), [29](#bib.bib29)].
For example, they may not know the correct values to
which they should set some attributes in their queries
and submit underspecified queries or
use incorrect values for them in their queries, e.g., setting
the university name in a query to Michigan
instead of the correct value Michigan State University.
Users may also not know which relations and/or join conditions
to use in their queries [[27](#bib.bib27)].
Hence, a user who has intent e in mind
may submit query q≠e to the DBMS.
Of course, the user still expects that the DBMS returns the answers
of intent e for her submitted query.
DBMSs may provide form-based query interfaces [[53](#bib.bib53), [28](#bib.bib28)].
Researchers have defined the syntax and semantics of
query languages of some form-based interfaces [[53](#bib.bib53), [28](#bib.bib28)].
Users of these query interfaces may also not be able to submit
queries that precisely express their intents.
For example, a user may not know the correct values for
some fields in a form to exactly express her intent.
Some DBMSs allow users to explore databases using keyword queries [[10](#bib.bib10), [23](#bib.bib23), [30](#bib.bib30), [15](#bib.bib15), [4](#bib.bib4)].
These systems usually define a keyword query as a set of terms where
each term belongs to a fixed domain and the answers to keyword query
are tuples whose attributes contain all or some terms in the query.
Similarly, a user may not know all the terms to precisely pinpoint
her desired tuples and/or may not use the right keywords in her
queries.
Sometimes, the query interface may not provide a sufficiently expressive
query language for the user’s intents. For example, a query interface
may support only CQ but some intents may contain union operation.
In this paper, we consider only the case where the query language supported by
the query interface is sufficiently expressive to represent the user’s intents.
We plan to extend our framework and results for the cases where
this assumption does not hold.
To simplify our notations in this paper, we assume that
users interact with the DBMS by writing queries from a fixed
relational query language, e.g., domain independent UCQ with negation.
Our results extend for other query
interfaces with precisely defined query languages.
In practice a user has only
finitely many intents over a database instance in a finite period of time.
Hence, we assume the set of intents for a particular user is finite.
The DBMS does not generally know this set of intents.
We index each intent over database instance I by 1≤i≤m. We assume
that the set of intents has a prior probability π, i.e., π∈Rm,
πi≥0 for all i, and ∑_{i=1}^{m} πi = 1. In this case πi is
the probability that the user has intent ei in mind. Without loss of generality,
we can assume that πi>0 for all i∈[m]; otherwise, we can restrict our
analysis to the set {ei∣πi>0}.
If the underlying domain of the database instance
is infinite (finite), the set of possible queries submitted by
a user is also infinite (finite).
Nevertheless, in practice users can enter
only finitely many constants.
For example, it is well established that users usually input strings with a relatively small number of characters [[51](#bib.bib51)].
Hence, in many settings one may syntactically restrict the constants
submitted by a user in her queries to a finite subset of the domain.
Unless otherwise noted, the results in this paper hold for both finite and
infinite sets of possible queries. To simplify our notations, we assume that
the set of possible queries are finite.
We index each query over database instance I by 1≤j≤n.
####
3.2.2 Result
Given a query q over database instance I, the DBMS
returns a set or bag of
tuples as the answer of q. An obvious choice is to return q(I).
Because the DBMS knows that the input query may not precisely specify
the user’s intent, it also considers alternative answers to satisfy the
intent behind the query
[[27](#bib.bib27), [9](#bib.bib9), [15](#bib.bib15)]. Thus, it may return some tuples that are not in
q(I) and/or not return some tuples in q(I).
###### Definition 3.1
A result s for query q over database instance I is
a finite relation instance with the schema of head of q such that adom(t)⊂ adom(I).
Researchers have proposed several methods to find the result
that best answers the intent behind a query. The methods include
learning probabilistic models of relevance
using offline training data [[9](#bib.bib9), [29](#bib.bib29)],
integrating traditional information retrieval techniques [[23](#bib.bib23)],
and/or using schema-based heuristics [[15](#bib.bib15)].
Surveys of these methods can be found at [[10](#bib.bib10), [26](#bib.bib26)].
We build on this line of work by introducing methods that learn
the results from interaction with users.
As adom(I) is finite, if the set of possible queries is finite (infinite),
the set of all results in a database instance will be finite (infinite).
Unless otherwise noted, our findings apply for both finite and infinite
sets of results over a database instance.
For the sake of simplicity, we state our findings for finite sets of results.
We index each result over database instance I by 1≤ℓ≤o.
####
3.2.3 Satisfaction Metrics
Given that the DBMS returns result s for intent e, we need some metrics to measure
how effectively s answers e. There are some standard metrics in database systems
and information retrieval to measure the user satisfaction given a returned set of tuples [[38](#bib.bib38), [9](#bib.bib9)].
Let a tuple be relevant to intent e over database instance I if it belongs to e(I).
The precision of a result s for intent e over database I is the fraction
of its tuples that are in e(I). The precision of a result is larger if it contains
fewer non-relevant tuples. The recall of a result s for intent e over database
I is the fraction of tuples in e(I) that are in s. A result to an intent has higher recall, if the result contains more relevant tuples. One may improve the recall by returning more tuples [[38](#bib.bib38)].
Because this strategy includes more tuples in the result, it may add more non-relevant tuples to the result and potentially sacrifice the precision. Ideally, the returned result should have both prefect precision and recall.
F-measure is the harmonic mean of recall and precision and takes its maximum value for
a pair of intent and returned result iff both precision and recall are maximum for the intent and result.
The aforementioned metrics are defined for the cases where the intent has at least one relevant answer over the database instance
I, i.e., e(I)≠∅.
If we have e(I)=∅, one may use fallout, which penalizes results that contain many non-relevant answers, to measure the effectiveness of returned results.
Users often do not have enough time to go over all returned tuples and inspect only the top-k returned tuples [[38](#bib.bib38)]. There are some standard rank-based measures that prefer answers in which the relevant tuples appear in higher positions. For example, precision at k, p@k, is the fraction of relevant tuples in the top-k returned tuples. In these settings, the result of a query is a list of tuples. For simplicity, we assume that each result to a query is a set of tuples.
Our findings extend for the case where results are ranked lists of tuples.
Moreover, some tuples may only partially cover the user’s desired information. For example, a user may
want to know both the names
and grades of some students from a university database, but each tuple in the result contains only the desired names. There are some standard effectiveness metrics, such as nDCG, that quantify the relevance of
each returned tuple using multiple relevance
levels, i.e., numbers [[38](#bib.bib38)]. Our results hold for such effectiveness metrics.
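For concreteness, the following minimal sketch (ours, not part of the paper's formalism) computes the set-based precision, recall, and F-measure defined above for a returned result s and the relevant answers e(I); the tuple values are hypothetical.

```python
# A minimal sketch (not from the paper) of the set-based effectiveness measures above.
# Tuples are modeled as hashable values; e_I stands for the relevant answers e(I),
# and s for the returned result.

def precision(s, e_I):
    """Fraction of returned tuples that are relevant."""
    return len(s & e_I) / len(s) if s else 0.0

def recall(s, e_I):
    """Fraction of relevant tuples that are returned."""
    return len(s & e_I) / len(e_I) if e_I else 0.0

def f_measure(s, e_I):
    """Harmonic mean of precision and recall."""
    p, r = precision(s, e_I), recall(s, e_I)
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

# Hypothetical example: two relevant tuples, the result returns one of them plus
# one non-relevant tuple, so precision = recall = F-measure = 0.5.
relevant = {("MSU", "MO"), ("MSU", "MS")}
returned = {("MSU", "MO"), ("MSU", "KY")}
print(precision(returned, relevant), recall(returned, relevant), f_measure(returned, relevant))
```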
###
3.3 Strategies and Rewards
The strategy of user, U, is a random mapping from the set of her intents to queries.
That is, given that the user’s intent is ei, she may submit query qj with
probability Uij such that for a given ei, we have ∑jUij =1.
U is a pure strategy if each Uij is either 0 or 1
and it is a mixed strategy otherwise.
If a user adopts a pure strategy at some point
in her interaction with the DBMS, she will always
expresses each intent using a certain query. On the other hand,
mixed strategies indicate that she may try
several queries with different probabilities
because she may not be certain about which queries to choose
for an intent. Furthermore, a user in our model may
represent a population of users with potentially similar background
and preferences. Because users in a population may choose different
queries to express the same intent, it is very likely that the
population selects a mixed strategy.
The DBMS strategy, D, is a random mapping from the set of queries to the set of results.
Formally, given the input query qj, the DBMS returns result sℓ with probability of Djℓ.
For a given query qj, we have ∑ℓDjℓ =1.
The definitions of pure and mixed naturally extend for DBMS strategies.
A DBMS that does not learn from its interaction with the user
may adopt a pure strategy. Nevertheless, researchers have shown that a mixed strategy
is very helpful for systems that want to improve the effectiveness of their answers
by learning from their interactions with users [[22](#bib.bib22), [52](#bib.bib52)].
Let us assume that the system always returns result s for query q
during its interaction with the user. Clearly, there may be
tuples not in s that are relevant to the intent behind q. But
these tuples will never be shown to the user. Hence, the system will never
know the user’s feedback on those tuples and cannot adapt its strategy accordingly.
Hence, these systems deliver relatively less effective results in the
long run. To collect the user’s feedback
on more tuples, the system may randomly return different results
for the same query during its interaction with the user.
Of course, if the system aims at only showing more (distinct) tuples
to the user, it may deliver relatively ineffective answers in the short term.
A desirable DBMS strategy provides a trade-off between exploration, i.e.,
collecting feedback on more tuples, and exploitation, i.e, showing tuples that
DBMS deem more relevant to the intent behind the query.
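To illustrate how a pair of mixed strategies plays out in a single round, the sketch below (ours, with small hypothetical matrices) samples an intent from the prior π, a query from the user strategy U, and a result from the DBMS strategy D; a mixed row in D is what lets the DBMS occasionally explore alternative results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior over m = 3 intents, user strategy U (3 x 2, row-stochastic),
# and DBMS strategy D (2 x 3, row-stochastic).
pi = np.array([1/3, 1/3, 1/3])
U = np.array([[1.0, 0.0],
              [0.5, 0.5],       # a mixed row: intent e2 is expressed by q1 or q2
              [0.0, 1.0]])
D = np.array([[0.9, 0.1, 0.0],  # a mixed row: q1 usually returns s1, sometimes s2 (exploration)
              [0.0, 0.0, 1.0]])

def one_interaction(pi, U, D, rng):
    """Sample (intent, query, result) indices for a single round of the game."""
    i = rng.choice(len(pi), p=pi)        # the user's intent e_i
    j = rng.choice(U.shape[1], p=U[i])   # the query q_j she submits
    l = rng.choice(D.shape[1], p=D[j])   # the result s_l the DBMS returns
    return i, j, l

print(one_interaction(pi, U, D, rng))
```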
Let r represent an effectiveness measure described in Section [3.2.3](#S3.SS2.SSS3 "3.2.3 Satisfaction Metrics ‣ 3.2 Intent, Query, and Result Paradigm ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction").
We denote the degree of effectiveness of result sℓ for intent ei as r(ei,sℓ).
We have 0≤ r(ei,sℓ)≤1 for all 1≤i≤m and 1≤ℓ≤o.
Because the goal of both the user and DBMS is to satisfy the user’s information need, an interaction yields a reward r(ei,sℓ) for both when the intent and returned result are ei and sℓ, respectively.
The communication between the user and DBMS can be modeled as a signaling game with identical interests played between them. In this game, the set of strategies of a user is the set Lmn of row-stochastic matrices and the set of strategies of the DBMS is the set Lno of n×o row-stochastic matrices. The payoff of user and DBMS in this case w.r.t. effectiveness measure r are
ur(U,D) = ∑_{i=1}^{m} πi ∑_{j=1}^{n} Uij ∑_{ℓ=1}^{o} Djℓ r(ei,sℓ).    (1)
The payoff function ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")) is an expected payoff of the interaction between user and DBMS when the user maps a (random) intent ei to a query qj with probability Uij and the database maps back the query qj to result sℓ with probability Djℓ. The larger the value of this payoff function is, the more likely it is that the DBMS returns the desired answers to more users’ queries.
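Since the payoff in (1) is linear in each strategy, it can be evaluated with a couple of matrix products. The following sketch (ours, with hypothetical matrices) computes ur(U,D) given the prior, the two strategies, and the effectiveness values r(ei,sℓ).

```python
import numpy as np

def expected_payoff(pi, U, D, R):
    """Evaluate equation (1): sum_i pi_i sum_j U_ij sum_l D_jl r(e_i, s_l).

    pi : (m,)   prior over intents
    U  : (m, n) row-stochastic user strategy
    D  : (n, o) row-stochastic DBMS strategy
    R  : (m, o) effectiveness values, R[i, l] = r(e_i, s_l)
    """
    return float(pi @ ((U @ D) * R).sum(axis=1))

# Hypothetical 3-intent, 2-query, 3-result instance in which r(e_i, s_l) is 1
# exactly when the returned result is the intent's own answer set.
pi = np.array([1/3, 1/3, 1/3])
U = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)
D = np.array([[1, 0, 0], [0, 0, 1]], dtype=float)
R = np.eye(3)
print(expected_payoff(pi, U, D, R))  # 2/3: intents e1 and e3 are answered perfectly, e2 is not
```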
4 Equilibrium Analysis
-----------------------
###
4.1 Fixed User Strategy
In some settings, the strategy of a user may change on a much slower time scale than that of the DBMS. In these cases, it is reasonable to assume that the user’s strategy is fixed. Hence, the game will reach a desirable state where the DBMS adopts a strategy that maximizes the expected payoff.
Let a strategy profile be a pair of user and DBMS strategies.
###### Definition 4.1
Given a strategy profile (U,D), D is a best response to U w.r.t.
effectiveness measure r if we have ur(U,D) ≥ur(U,D′) for
all the database strategies D′.
A DBMS strategy D is a strict best response to U if the inequality in Definition [4.1](#S4.Thmtheorem1 "Definition 4.1 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") becomes strict for all D′≠D.
###### Example 4.1
Consider the database instance about universities shown in Table 1(a).
Also, consider the intents, queries, and strategy profile in Tables 1(c), 1(b), and 1(d), respectively.
The results s1, s2, and s3 in the DBMS strategy are the
outputs of applying the intents e1, e2, and e3
in Table 1(c) to the
Univ relation in Table 1(a), respectively.
Given a uniform prior over the intents,
the DBMS strategy is a best response
and a strict best response to the user strategy
w.r.t. precision.
| Name | Abbreviation | State | Type | Ranking |
| --- | --- | --- | --- | --- |
| Missouri State University | MSU | MO | Public | 64 |
| Mississippi State University | MSU | MS | Public | 41 |
| Murray State University | MSU | KY | Public | 57 |
| Michigan State University | MSU | MI | Public | 34 |

(a) A database instance of relation Univ

| Query# | Query |
| --- | --- |
| q1 | ans(w)←Univ(x,‘MSU',y,z,w) |
| q2 | ans(w)←Univ(x,y,‘KY',z,w) |

(b) Queries

| Intent# | Intent |
| --- | --- |
| e1 | ans(w)←Univ(x,‘MSU',‘MO',y,w) |
| e2 | ans(w)←Univ(x,‘MSU',y,‘Public',w) |
| e3 | ans(w)←Univ(x,‘MSU',‘KY',z,w) |

(c) Intents

| | q1 | q2 |
| --- | --- | --- |
| e1 | 1 | 0 |
| e2 | 1 | 0 |
| e3 | 0 | 1 |

| | s1 | s2 | s3 |
| --- | --- | --- | --- |
| q1 | 1 | 0 | 0 |
| q2 | 0 | 0 | 1 |

(d) The user (left) and DBMS (right) strategies

Table 1: Queries, intents, and strategies for Example 4.1
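The numbers behind Example 4.1 can be reproduced directly. The sketch below (ours) evaluates the three intents over the Univ instance and, for each query, scores every listed result by the prior-weighted precision it yields for the intents that use that query; mapping q1 to s1 and q2 to s3 maximizes these scores, consistent with the best response claim in the example.

```python
# Rankings returned by evaluating each intent of Example 4.1 over the Univ instance.
e1 = {64}                  # MSU in MO
e2 = {64, 41, 57, 34}      # public universities abbreviated MSU
e3 = {57}                  # MSU in KY
results = {"s1": e1, "s2": e2, "s3": e3}   # s1 = e1(I), s2 = e2(I), s3 = e3(I)

def precision(s, e):
    return len(s & e) / len(s)

# Query q1 is used by intents e1 and e2, each with prior 1/3 (uniform prior).
for name, s in results.items():
    score = (1/3) * precision(s, e1) + (1/3) * precision(s, e2)
    print("q1", name, round(score, 3))     # s1: 0.667, s2: 0.417, s3: 0.333

# Query q2 is used only by intent e3.
for name, s in results.items():
    print("q2", name, round((1/3) * precision(s, e3), 3))   # s1: 0.0, s2: 0.083, s3: 0.333
```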
###### Definition 4.2
Given a strategy profile (U,D), an intent ei, and a query qj, the payoff of ei using qj is
ur(ei,qj) = ∑_{ℓ=1}^{o} Dj,ℓ r(ei,sℓ).
###### Definition 4.3
The pool of intents for query qj in user strategy U is the set of intents ei such that Ui,j>0.
We denote the pool of intents of qj as PL(qj). Our definition of pool of intents resembles the notion of pool of states in signaling games [[11](#bib.bib11), [13](#bib.bib13)]. Each result sℓ such that Dj,ℓ>0 may be returned in response to query qj. We call the set of these results the reply to query qj.
###### Definition 4.4
A best reply to query qj w.r.t. effectiveness measure r is a reply that maximizes ∑_{ei∈PL(qj)} πi Ui,j ur(ei,qj).
The following characterizes the best response to a strategy.
###### Lemma 4.5
Given a strategy profile (U,D), D is a best response to U w.r.t. effectiveness measure r if and only if D maps every query to one of its best replies.
The proof is in the appendix.
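Lemma 4.5 also suggests a direct construction: for every query qj, put all probability on a result maximizing ∑_{ei∈PL(qj)} πi Ui,j r(ei,sℓ). A small sketch (ours, with a hypothetical reward matrix) follows.

```python
import numpy as np

def best_response(pi, U, R):
    """Build a pure best response D to user strategy U w.r.t. effectiveness values R.

    Following Lemma 4.5, each query q_j is mapped to a result s_l maximizing
    sum_i pi_i * U[i, j] * R[i, l], i.e. to one of its best replies.
    """
    n = U.shape[1]
    o = R.shape[1]
    scores = (U * pi[:, None]).T @ R     # scores[j, l] = sum_i pi_i U_ij r(e_i, s_l)
    D = np.zeros((n, o))
    D[np.arange(n), scores.argmax(axis=1)] = 1.0   # ties are broken by the lowest index
    return D

# Hypothetical data mirroring a 3-intent, 2-query, 3-result setting.
pi = np.array([1/3, 1/3, 1/3])
U = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)
R = np.eye(3)                            # r(e_i, s_l) = 1 iff l = i
print(best_response(pi, U, R))           # q1 -> s1, q2 -> s3
```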
The following corollary directly results from
Lemma [4.5](#S4.Thmtheorem5 "Lemma 4.5 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction").
###### Corollary 4.6
Given a strategy profile (U,D), D is a strict best response to U w.r.t. effectiveness measure r if and only if every query has one and only one best reply and D maps each query to its best reply.
Given an intent e over database instance I,
some effectiveness measures, such as precision and recall, take their maximum for other results in addition to e(I). For example, given intent e, the precision of every non-empty result s⊂e(I) is equal to the precision of e(I) for e. Hence, there is more than one best reply for an intent w.r.t. precision or recall. Thus, according to Corollary [4.6](#S4.Thmtheorem6 "Corollary 4.6 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"), there is not any strict best response w.r.t. precision or recall.
On the other hand, F-measure takes its maximum only for the result e(I).
Hence, one may find a strict best response w.r.t. F-measure.
###
4.2 Nash Equilibrium
In this section and Section [4.3](#S4.SS3 "4.3 Strict Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"), we analyze the equilibria of the game where both user and DBMS may modify their strategies. A Nash equilibrium for a game is a strategy profile where the DBMS and user will not do better by unilaterally deviating from their strategies.
###### Definition 4.7
A strategy profile (U,D) is a Nash equilibrium w.r.t. a satisfaction function r if ur(U,D) ≥ur(U′,D) for all user strategy U′ and ur(U,D)≥ ur(U,D′) for all database strategy D′.
###### Example 4.2
Consider again the intents, queries, and database instance and strategies in Example [4.1](#S4.Thmexample1 "Example 4.1 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction").
The strategy profile is a Nash equilibrium w.r.t. precision: neither the user nor the DBMS can unilaterally change its strategy and receive a better payoff.
###### Example 4.3
Consider again the database instance about universities shown in Table 1(a).
Also, consider the queries, intents, and strategy profile shown in Tables 2(b), 2(a), and 2(c), respectively.
The results s1, s2, and s3 in the DBMS strategy are the
outputs of applying the intents e1, e2, and e3
in Table 2(a) to the
Univ relation in Table 1(a), respectively.
Let 0≤ϵ≤1.
Given a uniform prior over the intents, the strategy profile in
Table 2(c)
is a Nash equilibrium w.r.t. precision.
| Intent# | Intent |
| --- | --- |
| e1 | ans(z)←Univ(x,‘MSU',‘MS',y,z) |
| e2 | ans(z)←Univ(x,‘MSU',‘MI',y,z) |
| e3 | ans(z)←Univ(x,‘MSU',‘MO',y,z) |

(a) Intents

| Query# | Query |
| --- | --- |
| q1 | ans(z)←Univ(x,‘MSU',‘MI',y,z) |
| q2 | ans(z)←Univ(x,‘MSU',y,w,z) |

(b) Queries

| | q1 | q2 |
| --- | --- | --- |
| e1 | 0 | 1 |
| e2 | 1 | 0 |
| e3 | 0 | 1 |

| | s1 | s2 | s3 |
| --- | --- | --- | --- |
| q1 | 0 | 1 | 0 |
| q2 | ϵ | 0 | 1−ϵ |

(c) User (left) and DBMS (right) strategies for strategy profile 1

Table 2: Intents, queries, and strategies for Example 4.3
If the interaction between the user and DBMS reaches a Nash equilibrium, the user
does not have a strong incentive to change her strategy.
As a result, the strategy of the DBMS and the expected payoff of the game
are likely to remain unchanged.
Hence, in a Nash equilibrium the strategies of user and DBMS are likely
to be stable. Also, the payoff at a Nash equilibrium reflects a potential eventual payoff for the user and DBMS in their interaction.
Query qj is a best query for intent ei if qj ∈ argmax_{qk} ur(ei,qk).
The following lemma characterizes the Nash equilibrium of the game.
###### Lemma 4.8
A strategy profile (U,D) is a Nash equilibrium w.r.t. effectiveness measure r if and only if
* for every query q, q is a best query for every intent e∈PL(q), and
* D is a best response to U.
The proof is in the appendix.
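Both conditions of Lemma 4.8 can be verified mechanically for a given profile. The sketch below (ours, hypothetical data) checks that every query used by an intent is a best query for it and that D maps every query only to best replies.

```python
import numpy as np

def is_nash(pi, U, D, R, tol=1e-9):
    """Check the two conditions of Lemma 4.8 for the profile (U, D)."""
    Q = D @ R.T                                   # Q[j, i] = u_r(e_i, q_j)
    best_query_payoff = Q.max(axis=0)             # best achievable payoff per intent
    # Condition 1: every query in an intent's support is a best query for that intent.
    for i in range(U.shape[0]):
        for j in range(U.shape[1]):
            if U[i, j] > 0 and Q[j, i] < best_query_payoff[i] - tol:
                return False
    # Condition 2: D is a best response, i.e. every query is mapped only to best replies.
    scores = (U * pi[:, None]).T @ R              # scores[j, l] = sum_i pi_i U_ij r(e_i, s_l)
    for j in range(D.shape[0]):
        if np.any((D[j] > 0) & (scores[j] < scores[j].max() - tol)):
            return False
    return True

# Hypothetical profile: both conditions hold, so it is a Nash equilibrium.
pi = np.array([1/3, 1/3, 1/3])
U = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)
D = np.array([[1, 0, 0], [0, 0, 1]], dtype=float)
print(is_nash(pi, U, D, np.eye(3)))   # True
```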
Let Nashr denote the set of Nash equilibrium strategy profiles w.r.t. effectiveness measure r. The following corollary is an immediate result of Lemma [4.8](#S4.Thmtheorem8 "Lemma 4.8 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction").
###### Corollary 4.9
NashF−measure= Nashprecision∩
Nashrecall.
The proof is in the appendix.
Corollary [4.9](#S4.Thmtheorem9 "Corollary 4.9 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") shows that generally the DBMS has more alternatives to reach a Nash equilibrium using precision or recall than using F-measure.
###
4.3 Strict Nash Equilibrium
A strict Nash equilibrium is a strategy profile in which the DBMS and user will do worse by unilaterally changing their equilibrium strategy.
###### Definition 4.10
A strategy profile (U,D) is a strict Nash equilibrium w.r.t. effectiveness measure r if we have ur(U,D)> ur(U,D′) for all DBMS strategies D′≠D and ur(U,D)> ur(U′,D) for all user strategies U′≠U.
###### Example 4.4
Consider the intents, queries, and the strategy profile in Example [4.1](#S4.Thmexample1 "Example 4.1 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") and the database instance in Table 1(a). The strategy profile is a strict Nash equilibrium w.r.t. precision.
However, the strategy profile in Example [4.3](#S4.Thmexample3 "Example 4.3 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction")
is not a strict Nash equilibrium, because one may modify the value of
ϵ without changing the payoff of the players.
Next, we investigate the characteristics of strategies in a strict Nash equilibrium profile. Recall that a strategy is pure iff it has only 1 or 0 values.
A user strategy is onto if there is no query qj such that Ui,j=0
for all intents i. A DBMS strategy is one-to-one if it does not map two queries to the same result. In other words, there is no result sℓ such that Djℓ>0 and Dj′ℓ>0 where j≠j′.
###### Theorem 4.11
If (U,D) is a strict Nash equilibrium w.r.t. satisfaction function r, we have
* U is pure and onto.
* D is pure and one-to-one.
Proof.
Let us assume that there is an intent ei and a query qj such that 0<Ui,j<1. Since U is row stochastic, there is a query qj′ where 0<Ui,j′<1. Let ur(Ui,j,D) = ∑_{ℓ=1}^{o} Dj,ℓ r(ei,sℓ). If ur(Ui,j,D) = ur(Ui,j′,D), we can create a new user strategy U′ where U′i,j=1 and U′i,j′=0 and the values of the other entries in U′ are the same as in U. Note that the payoffs of (U,D) and (U′,D) are equal and hence, (U,D) is not a strict Nash equilibrium.
If ur(Ui,j,D)≠ur(Ui,j′,D), without loss of generality one can assume that ur(Ui,j,D) >ur(Ui,j′,D). We construct a new user strategy U′′ whose values for all entries except (i,j) and (i,j′) are equal to U and U′′i,j=1, U′′i,j′=0. Because ur(U,D)< ur(U′′,D), (U,D) is not a strict Nash equilibrium. Hence, U must be a pure strategy. Similarly, it can be shown that D should be a pure strategy.
If U is not onto, there is a query qj that is not mapped to any intent in U. Hence, one may change the value in row j of D without changing the payoff of (U,D).
Assume that D is not one-to-one. Hence, there are queries qi and qj and a result sℓ such that Di,ℓ= Dj,ℓ=1. Because (U,D) is a strict Nash equilibrium, U is pure and we have either Ui,ℓ=1 or Uj,ℓ=1. Assume that Ui,ℓ=1. We can construct strategy U′ that has the same values as U for all entries except for (i,ℓ) and (j,ℓ) and U′i,ℓ=0, U′j,ℓ=1. Since the payoffs of (U,D) and (U′,D) are equal, (U,D) is not a strict Nash equilibrium.
Theorem [4.11](#S4.Thmtheorem11 "Theorem 4.11 ‣ 4.3 Strict Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") extends Theorem 1 in
[[13](#bib.bib13)] to our model.
In some settings, the user may know and use fewer queries than intents, i.e., m>n.
Because the DBMS strategy in a strict Nash equilibrium is one-to-one,
the DBMS strategy does not map some of the results to any query. Hence, the DBMS will never return some results in a strict Nash equilibrium no matter
what query is submitted. Interestingly, as Example [4.1](#S4.Thmexample1 "Example 4.1 ‣ 4.1 Fixed User Strategy ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") suggests some of these results may be the results that perfectly satisfy some user’s intents. That is, given intent ei over database instance I, the DBMS may never return ei(I) in a strict Nash equilibrium.
Using a proof similar to the
one of Lemma [4.8](#S4.Thmtheorem8 "Lemma 4.8 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"), we
have the following properties of strict Nash equilibria of a game.
A strategy profile (U,D) is a strict Nash equilibrium w.r.t. effectiveness measure r if and only if:
* Every intent e has a unique best query and the user strategy maps e to its best query, i.e., e∈PL(qi).
* D is the strict best response to U.
Let SNashr denote the set of strict Nash equilibrium strategy profiles w.r.t. a satisfaction function r.
The following corollary
shows that the results of Corollary [4.9](#S4.Thmtheorem9 "Corollary 4.9 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction")
extend to strict Nash equilibria.
It is proved similar to Corollary [4.9](#S4.Thmtheorem9 "Corollary 4.9 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction")
using the above properties of strict Nash equilibria.
###### Corollary 4.12
SNashF−measure=SNashprecision∩ SNashrecall.
### 4.4 Number of Equilibria
A natural question is how many (strict) Nash equilibria exist in a game. Theorem [4.11](#S4.Thmtheorem11 "Theorem 4.11 ‣ 4.3 Strict Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") guarantees that both user and DBMS strategies in a strict Nash
equilibrium are pure. Thus, given that the sets of intents and queries are finite,
there are finitely many strict Nash
equilibria in the game. We note that each set of results
is always finite.
However, we will show that
even if the sets of intents and queries in a game are finite,
the game can have infinitely many Nash equilibria.
###### Lemma 4.13
If a game has a non-strict Nash equilibrium, then it has infinitely many Nash equilibria.
Proof.
The result follows from the fact that the payoff function ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")) is a bilinear form in U and D, i.e., it is a linear function of D when U is fixed and a linear function of U when D is fixed. If for D≠D′, (U,D) and (U,D′) are Nash equilibria, then ur(U,D)=ur(U,D′). Therefore ur(U,αD+(1−α)D′)=ur(U,D) for any α∈R. In particular, for α∈[0,1], if D,D′ are stochastic matrices, αD+(1−α)D′ will be a stochastic matrix and hence, (U,αD+(1−α)D′) is a Nash equilibrium as well. Similarly, if (U′,D) and (U,D) are Nash equilibria for U≠U′, then ur(αU+(1−α)U′,D)=ur(U,D) and (αU+(1−α)U′,D) is a Nash equilibrium for any α∈[0,1].
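The bilinearity step can be made concrete on a toy instance; the numbers below are illustrative assumptions (uniform priors, one intent indifferent between two results) rather than an example taken from the paper.

```python
import numpy as np

pi = np.array([0.5, 0.5])                    # uniform intent priors (assumed)
r  = np.array([[1.0, 0.0],                   # e1 is satisfied only by s1
               [0.5, 0.5]])                  # e2 is indifferent between s1 and s2
U  = np.eye(2)                               # user sends q_i for intent e_i

def payoff(U, D):
    # formula (1): sum_i pi_i sum_j U_ij sum_l D_jl r_il
    return float(np.einsum('i,ij,jl,il->', pi, U, D, r))

D1 = np.array([[1.0, 0.0], [1.0, 0.0]])      # q2 answered with s1
D2 = np.array([[1.0, 0.0], [0.0, 1.0]])      # q2 answered with s2

assert np.isclose(payoff(U, D1), payoff(U, D2))        # two equally good responses to U
for alpha in np.linspace(0.0, 1.0, 5):
    D = alpha * D1 + (1 - alpha) * D2                  # still row-stochastic
    assert np.isclose(payoff(U, D), payoff(U, D1))     # same payoff: a continuum of equilibria
```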
###### Theorem 4.14
Given a game with finitely many intents and queries,
if the game has a non-strict Nash equilibrium,
it has an infinite number of Nash equilibria.
Proof.
Every finite game always has a mixed Nash equilibrium [[45](#bib.bib45)].
According to Theorem [4.11](#S4.Thmtheorem11 "Theorem 4.11 ‣ 4.3 Strict Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"),
a mixed Nash is not a strict Nash equilibrium.
Therefore, using Lemma [4.13](#S4.Thmtheorem13 "Lemma 4.13 ‣ 4.4 Number of Equilibria ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction"), the game will
have infinitely many Nash equilibria.
### 4.5 Efficiency
In this section we discuss the efficiency of different equilibria. We refer to the value of the
utility (payoff) in formula ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")) at a strategy profile as the efficiency of that strategy profile.
Therefore, the most efficient strategy profile is naturally the one that maximizes ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")).
We refer to an equilibrium with maximum efficiency as an efficient equilibrium.
Thus far we have discussed two types of equilibria, Nash and strict Nash, from which, once reached, neither player is likely to deviate. In some cases the players may enter an equilibrium in which neither has any incentive to deviate unilaterally, yet that equilibrium is not an efficient one.
The strategy profile in Table LABEL:sub@example:equilib:nash:moreDesirableStrat provides the highest payoff for the user and DBMS given the intents and queries in Tables LABEL:sub@example:equilib:nash:intents and LABEL:sub@example:equilib:nash:queries
over the database in Table LABEL:sub@example:intro:table:instance.
However, some Nash equilibria may not provide high payoffs. For instance, Table [3](#S4.T3 "Table 3 ‣ 4.5 Efficiency ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction") depicts another strategy profile for the set of intents and queries in Tables LABEL:sub@example:equilib:nash:intents and LABEL:sub@example:equilib:nash:queries
over the database in Table LABEL:sub@example:intro:table:instance.
In this strategy profile, the user has little knowledge about the database content and expresses all of her intents using a single query q2, which asks for the ranking of universities whose abbreviations are MSU. Given query q2, the DBMS always returns the ranking of Michigan State University.
Obviously, the DBMS always returns non-relevant answers for the intents of finding the rankings of Mississippi State University and Missouri State University. If all intents have equal prior probabilities, this strategy profile is a Nash equilibrium.
For example, the user will not get a higher payoff by increasing their knowledge about the database and using query q1 to express intent e2.
Clearly, the payoff of this strategy profile is less than the strategy profile in Table LABEL:sub@example:equilib:nash:moreDesirableStrat. Nevertheless, the user and the DBMS do not have any incentive to leave this undesirable stable state once reached and will likely stay in this state.
###### Definition 4.15
A strategy profile (U,D) is optimal w.r.t. an effectiveness measure r if we have ur(U,D) ≥ ur(U′,D′) for all user strategies U′ and DBMS strategies D′.
Since the games discussed in this paper are games
of identical interest, i.e., the payoff of the user and the DBMS are the same, an optimal strategy profile (U,D) (w.r.t. an effectiveness measure r) is a Nash equilibrium.
###### Lemma 4.16
A strategy (U,D) is optimal if and only if it is an efficient equilibrium.
Proof.
Note that if (U,D) is optimal, then neither of the two players (i.e. the user and the DBMS) has a unilateral incentive to deviate. Therefore (U,D) is a Nash equilibrium. On the other hand, since the payoff function ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")) is a continuous function of U and D and the domain of row-stochastic matrices is a compact space, a maximizer (U,D) of ([1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction")) exists and, by the previous part, it is a Nash equilibrium. Note that the efficiency of every strategy profile is bounded by the efficiency of an optimal strategy and hence, any efficient equilibrium is optimal.
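As a practical aside (an illustration, not part of the proof), bilinearity also means the payoff (1) attains its maximum at a pure profile, so on small instances an optimal profile, and hence an efficient equilibrium, can be found by brute-force enumeration; the function and variable names below are assumed.

```python
import itertools
import numpy as np

def payoff(U, D, r, pi):
    # formula (1): sum_i pi_i sum_j U_ij sum_l D_jl r_il
    return float(np.einsum('i,ij,jl,il->', np.asarray(pi, dtype=float), U, D, r))

def efficient_equilibrium(r, pi, n_queries):
    """Exhaustive search over pure profiles for an optimal (hence efficient) one."""
    m, o = r.shape
    best_value, best_profile = -np.inf, None
    for user_map in itertools.product(range(n_queries), repeat=m):
        U = np.eye(n_queries)[list(user_map)]          # pure user strategy, m x n
        for dbms_map in itertools.product(range(o), repeat=n_queries):
            D = np.eye(o)[list(dbms_map)]              # pure DBMS strategy, n x o
            value = payoff(U, D, r, pi)
            if value > best_value:
                best_value, best_profile = value, (U, D)
    return best_value, best_profile
```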
Similar to the analysis of the efficiency of Nash equilibria, some strict Nash equilibria are less efficient than others. A strict Nash equilibrium profile is unlikely to be abandoned, since any unilateral deviation from it results in a strictly lower payoff. In this sense, strict Nash equilibria are also more stable than (non-strict) Nash equilibria.
| | q1 | q2 |
| --- | --- | --- |
| e1 | 0 | 1 |
| e2 | 0 | 1 |
| e3 | 0 | 1 |
| | s1 | s2 | s3 |
| --- | --- | --- | --- |
| q1 | 0 | 1 | 0 |
| q2 | 0 | 1 | 0 |
Table 3: A Nash strategy profile over the intents, queries, and results in Table [2](#S4.T2 "Table 2 ‣ Example 4.3 ‣ 4.2 Nash Equilibrium ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction").
| | q1 | q2 |
| --- | --- | --- |
| e1 | 0 | 1 |
| e2 | 1 | 0 |
| e3 | 1 | 0 |
| | s1 | s2 | s3 |
| --- | --- | --- | --- |
| q1 | 0 | 0 | 1 |
| q2 | 1 | 0 | 0 |
(a) User (left) and DBMS (right) strategies for strategy profile 1
| | q1 | q2 |
| --- | --- | --- |
| e1 | 0 | 1 |
| e2 | 0 | 1 |
| e3 | 1 | 0 |
| | s1 | s2 | s3 |
| --- | --- | --- | --- |
| q1 | 0 | 0 | 1 |
| q2 | 0 | 1 | 0 |
(b) User (left) and DBMS (right) strategies for strategy profile 2
Table 4: Some strict Nash strategy profiles.
As an example of a strict Nash equilibrium that is not efficient, consider both strategy profiles illustrated in Table [4](#S4.T4 "Table 4 ‣ 4.5 Efficiency ‣ 4 Equilibrium Analysis ‣ A Signaling Game Approach to Databases Querying and Interaction").
Note that the intents, queries, and results in this
example are different from the ones in the previous examples.
For this illustration, we set the rewards to r(e1,s1)=1, r(e2,s2)=2, r(e2,s3)=0.1, and r(e3,s3)=3 where all other rewards are 0. Using our payoff function in Equation [1](#S3.E1 "(1) ‣ 3.3 Strategies and Rewards ‣ 3 Signaling Game Model ‣ A Signaling Game Approach to Databases Querying and Interaction") we can calculate the total payoff for the strategy profile in Table LABEL:sub@example:equilib:desirability:strictNash:less as u(U,D)=4.1. This strategy profile is a strict Nash since any unilateral deviation by either player will result in a strictly worse payoff. Consider the strategy profile in Table LABEL:sub@example:equilib:desirability:strictNash:more with payoff u(U,D)=5. This payoff is higher than the payoff the strategy profile in Table LABEL:sub@example:equilib:desirability:strictNash:less receives.
It is also unlikely that the players will move from the lower-payoff profile to the higher-payoff one by changing either strategy, since both profiles are strict Nash equilibria and any unilateral change lowers the deviating player's payoff.
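The two payoffs quoted above are easy to reproduce. The short script below is an illustration only; it assumes, as the stated values 4.1 and 5 suggest, that each intent prior is taken as 1 in formula (1).

```python
import numpy as np

# Rewards from the example: r(e1,s1)=1, r(e2,s2)=2, r(e2,s3)=0.1, r(e3,s3)=3, all others 0.
r = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.1],
              [0.0, 0.0, 3.0]])

def payoff(U, D, pi=(1.0, 1.0, 1.0)):
    # u(U, D) = sum_i pi_i sum_j U_ij sum_l D_jl r_il  (formula (1));
    # the stated payoffs 4.1 and 5 correspond to taking every pi_i = 1.
    return float(np.einsum('i,ij,jl,il->', np.asarray(pi), U, D, r))

# Strategy profile 1 (Table 4a): e1->q2, e2->q1, e3->q1; q1->s3, q2->s1.
U1 = np.array([[0, 1], [1, 0], [1, 0]], dtype=float)
D1 = np.array([[0, 0, 1], [1, 0, 0]], dtype=float)

# Strategy profile 2 (Table 4b): e1->q2, e2->q2, e3->q1; q1->s3, q2->s2.
U2 = np.array([[0, 1], [0, 1], [1, 0]], dtype=float)
D2 = np.array([[0, 0, 1], [0, 1, 0]], dtype=float)

print(payoff(U1, D1))   # 4.1  -- the less efficient strict Nash profile
print(payoff(U2, D2))   # 5.0  -- the more efficient strict Nash profile
```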
5 Adaptation Mechanisms for DBMS
---------------------------------
In many relevant applications, the user’s learning happens on a much slower time-scale than the learning of the DBMS, so one can assume that the user’s strategy is fixed relative to the time-scale of the DBMS adaptation. Therefore, in this section, we first consider the case in which the user is not adapting to the strategy of the DBMS, and then the case in which the user’s strategy is adapting to the DBMS’s strategy, but perhaps on a slower time-scale.
When dealing with the game introduced in the previous sections, many questions arise:
1. How can a DBMS learn or adapt to a user’s strategy?
2. Mathematically, is a given learning rule effective?
3. What would be the limiting behavior of a given learning rule?
Here, we address the first and the second questions above. Dealing with the third question is far beyond the page limits of this paper.
For simplicity of notation, we refer to intent ei and result sℓ as intent i and ℓ, respectively,
in the rest of the paper. Hence, we have:
ur(U,D) = ∑_{i=1}^{m} πi ∑_{j=1}^{n} Uij ∑_{ℓ=1}^{o} Djℓ riℓ,
where r:[m]×[o]→R+ is the effectiveness measure between the intent i and the result, i.e., decoded intent ℓ.
### 5.1 Reinforcement Learning for an Arbitrary Similarity Measure
As in [[24](#bib.bib24)], we consider the Roth-Erev reinforcement learning mechanism for the adaptation of the DBMS. For the case that both the DBMS and the user adapt their strategies, one can use the results in [[24](#bib.bib24)].
Let us discuss the DBMS adaptation rule. The learning/adaptation rule happens over discrete time t=0,1,2,3,… instances where t denotes the tth interaction of the user and the DBMS. We refer to t simply as the iteration of the learning rule. With this, the reinforcement learning mechanism for the DBMS adaptation is as follows:
1. Let R(0)>0 be an n×o initial reward matrix whose entries are strictly positive.
2. Let D(0) be the initial DBMS strategy with Djℓ(0) = Rjℓ(0) / ∑_{ℓ′=1}^{o} Rjℓ′(0) > 0 for all j∈[n] and ℓ∈[o].
3. For iterations t=1,2,…, do
1. If the user’s query at time t is q(t), the DBMS returns a result E(t)∈E with probability
P(E(t) = i′ ∣ q(t)) = Dq(t),i′(t).
2. The user gives a reward rii′ given that i is the intent of the user at time t. Note that the reward depends both on the intent i at time t and the result i′. Then, set
Rjℓ(t+1) = Rjℓ(t) + riℓ if j=q(t) and ℓ=i′, and Rjℓ(t+1) = Rjℓ(t) otherwise.   (2)
3. Update the DBMS strategy by
Dji(t+1) = Rji(t+1) / ∑_{ℓ=1}^{o} Rjℓ(t+1),   (3)
for all j∈[n] and i∈[o].
In the above scheme R(t) is simply the reward matrix at time t.
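As an illustration only, the adaptation rule above can be written as a short simulation; the function signature, the defaults (T, R0), and the assumption that the DBMS observes the reward riℓ after every interaction are choices made for this sketch, not requirements of the scheme.

```python
import numpy as np

def roth_erev_dbms(r, pi, U, n_results, T=10_000, R0=0.1, seed=0):
    """Simulate the DBMS-side Roth-Erev adaptation described above (illustrative sketch).

    r  : m x o reward matrix r_{i,l}
    pi : intent prior (length m, sums to 1)
    U  : fixed m x n row-stochastic user strategy
    R0 : strictly positive initial reward for every (query, result) entry (step 1)
    Returns the learned DBMS strategy D after T interactions.
    """
    rng = np.random.default_rng(seed)
    m, n = U.shape
    R = np.full((n, n_results), R0)                   # step 1: R(0) > 0
    for _ in range(T):
        i = rng.choice(m, p=pi)                       # nature draws the user's intent
        q = rng.choice(n, p=U[i])                     # the user encodes it as a query
        D = R / R.sum(axis=1, keepdims=True)          # eq. (3): current DBMS strategy
        l = rng.choice(n_results, p=D[q])             # the DBMS samples a result to return
        R[q, l] += r[i, l]                            # eq. (2): reinforce the returned result
    return R / R.sum(axis=1, keepdims=True)
```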
A few comments are in order regarding the above adaptation rule:
1. One can use available ranking functions, e.g. [[9](#bib.bib9)], for the initial reward condition R(0) which possibly leads to an intuitive initial point for the learning rule. One may normalize and convert the scores returned by these functions to probability values.
2. In step c.ii., if the DBMS knows the user’s intent after the interaction (e.g. through a click), it sets Rji to Rji+1 for the known intent i. The mathematical analysis of both cases is similar.
3. In the initial step, as the DBMS uses a ranking function to compute the probabilities, it may not materialize
R and D. As the game progresses, the DBMS maintains the
strategy and reward matrices with entries for only
the observed queries, their underlying intents, and their returned results.
Hence, the DBMS does not need to materialize R and D for the sets of possible intents, queries, and results.
The DBMS also does not need to know the set of the user’s intents beforehand. Hence, the algorithm is practical for the
cases where the sets of sought-for intents,
submitted queries, and returned results are not very large.
Moreover, R and D are generally sparse.
As queries and intents generally follow a power law
distribution [[38](#bib.bib38)], one may use sampling techniques to
use this algorithm in other settings. The rigorous theoretical and empirical analysis of applying such techniques is an
interesting subject for future work.
6 Conclusion & Future Work
---------------------------
Many users query and explore databases interactively
and over a relatively long period of time.
We modeled the interaction between the user and the DBMS as a signaling game, where the players start with different mappings between
queries and desired results, and would like to reach a mutual understanding. We formally analyzed different equilibria of this game. We also showed that some equilibria may not have an optimal payoff. We proposed an adaptation mechanism for the DBMS to learn the querying strategy of the user and proved that this mechanism increases the expected payoff for both the user and the DBMS on average and converges almost surely. This result was extended to the case where the user changes her strategy, and we provided an adaptation mechanism for the DBMS that converges almost surely.
Interactive querying of databases has recently received
a lot of attention [[26](#bib.bib26)].
We believe that our proposed framework will provide a basis for deep exploration of some interesting and important problems in this context.
In particular, we plan to explore the properties of Nash and efficient Nash equilibria in this game. We also plan to
analyze other types of
equilibria and stable states for the game.
Moreover, we would like to design and explore other adaptation algorithms for the DBMS
that converge to efficient equilibria and are robust
to various user’s adaptation mechanisms.
The equilibria discussed in this paper were
defined for a database instance.
We believe that one may find some equilibria
that share important properties over all database instances
of the same schema. This will open the door to explore
the properties of the game at the database schema level. |
c4ef0def-f960-4212-8cb7-b52901bf724c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC Lojban meetup
Discussion article for the meetup : Washington DC Lojban meetup
WHEN: 18 August 2013 03:00:00PM (-0400)
WHERE: National Portrait Gallery courtyard, Washington, DC 20001, USA
We'll be meeting to learn Lojban!
Discussion article for the meetup : Washington DC Lojban meetup |
f5e2b97e-63b9-4032-9a4a-e906a1bd734c | trentmkelly/LessWrong-43k | LessWrong | The Implicit Association Test
Continuation of: Bogus Pipeline, Bona Fide Pipeline
Related to: The Cluster Structure of Thingspace
If you've never taken the Implicit Association Test before, try it now.
Any will do. The one on race is the "classic", but the one on gender and careers is a bit easier to watch "in action", since the effect is so clear.
The overwhelming feeling I get when taking an Implicit Association Test is that of feeling my cognitive algorithms at work. All this time talking about thingspace and bias and categorization, and all of a sudden I have this feeling to attach the words to...
...which could be completely self-delusional. What is the evidence? Does the Implicit Association Test work?
Let the defense speak first1. The Implicit Association Test correctly picks up control associations. An IAT about attitudes towards insects and flowers found generally positive attitudes to the flowers and generally negative attitudes to the insects (p = .001), just as anyone with their head screwed on properly would expect. People's self-reports were also positively correlated with their IAT results (ie, someone who reported loving flowers and hating insects more than average also had a stronger than average IAT) although these correlations did not meet the 95% significance criterion. The study was repeated with a different subject (musical instruments vs. weapons) and similar results were obtained.
In the next study, the experimenters recruited Japanese-Americans and Korean-Americans. Japan has been threatening, invading, or oppressing Korea for large chunks of the past five hundred years, and there's no love lost between the two countries. This time, the Japanese-Americans were able to quickly match Japanese names to "good" stimuli and Korean names to "bad" stimuli, but took much longer to perform the opposite matching. The Korean-Americans had precisely the opposite problem, p < .0001. People's self-reports were also positively correlated with their IAT results (ie, a Korean who |
e5cbaff5-4fe0-46a4-a99e-c8ec18beb442 | trentmkelly/LessWrong-43k | LessWrong | High Status and Stupidity: Why?
Michael Vassar once suggested: "Status makes people effectively stupid, as it makes it harder for them to update their public positions without feeling that they are losing face."
To the extent that status does, in fact, make people stupid, this is a rather important phenomenon for a society like ours in which practically all decisions and beliefs pass through the hands of very-high-status individuals (a high "cognitive Gini coefficient").
Does status actually make people stupid? It's hard to say because I haven't tracked many careers over time. I do have a definite and strong impression, with respect to many high-status individuals, that it would have been a lot easier to have an intelligent conversation with them, if I'd approached them before they made it big. But where does that impression come from, since I haven't actually tracked them over time? (Fundamental question of rationality: What do you think you know and how do you think you know it?) My best guess for why my brain seems to believe this: I know it's possible to have intelligent conversations with smart grad students, and I get the strong impression that high-status people used to be those grad students, but now it's much harder to have intelligent conversations with them than with smart grad students.
Hypotheses:
1. Vassar's hypothesis: Higher status increases the amount of face you lose when you change your mind, or increases the cost of losing face.
2. The open-mindedness needed to consider interesting new ideas is (was) only an evolutionary advantage for low-status individuals seeking a good idea to ride to high status. Once high status is achieved, new ideas are high-risk gambles with less relative payoff - the optimal strategy is to be mainstream. I think Robin Hanson had a post about this but I can't recall the title.
3. Intelligence as such is a high-cost feature which is no longer necessary once status is achieved. We can call this the Llinas Hypothesis.
4. High-status |
7728b082-10da-4777-bea4-ac1d6eaabaec | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Where to Draw the Boundary?
Today's post, Where to Draw the Boundary? was originally published on 21 February 2008. A summary (taken from the LW wiki):
> Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Arguing "By Definition", and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
1f65c24f-65d8-459f-a521-110388f7d5aa | trentmkelly/LessWrong-43k | LessWrong | Why is multi worlds not a good explanation for abiogenesis
I'm not an expert in the multi world theory, so this question could very well be extremely stupid. However, given the assumption that there is a nearly infinite number of worlds that are slightly different from each other, nearly every possible event would happen. This includes the formation of life. Now what are the odds that we would be witnessing such a world? As far as I can tell, 100 percent.
Now I'm not clear exactly how often quantum events lead to a slightly different world, but even a rate of 1 quantum event a year in the entire universe should lead to a near-infinite explosion of completely different universes.
Now I'm not claiming that this is the explanation for abiogenesis, or that abiogenesis is proof of multi worlds, because that would be a multi-worlds-of-the-gaps fallacy. However, I'm not clear why I have never even seen this explanation for abiogenesis even once.
I also suspect that mathematically many worlds would usually be the wrong explanation for nearly everything, because it runs into serious odds problems and in 99.99999 percent of cases there is a better explanation. However, it should at least be considered.
COULD SOMEONE EXPLAIN TO ME EXACTLY WHERE I WENT WRONG
|
42c581c0-3b06-49d6-96b7-be54880eaca8 | trentmkelly/LessWrong-43k | LessWrong | A Corrigibility Metaphore - Big Gambles
I present here a helpful analogy to understand the corrigibility problem and the challenge raised by MIRI in their proposal. This analogy simplifies greatly some challenges of corrigibility but keeps the main problem found in the proposal, which I call Big Gambles.
----------------------------------------
You are playing Mario Kart with 2 other friends, Alice and Bob, and are playing matches to see who's the best player on average, decided by the average position between 1st, 2nd, or 3rd.
Alice has a poor internet connection and so gets disconnected sometimes. To account for that in the competition, a rule was added that her score would be 2nd if she is disconnected so that she is not penalized too much.
Bob takes the game a bit too seriously, and he absolutely wants to win. He will do anything to score first, like getting disconnected on purpose when 3rd, which grants him more points. You quickly realize that, and want to change the rule.
But what to do? If you give 3rd place in case of disconnection, then this will punish Alice unfairly for her internet connection, but if you increase the score, then Bob will start cheating, and you don’t want to have an argument with him. You also cannot make the rule for Alice only, since Bob and you sometimes get disconnected, even if less often.
Disconnection indifference
During interclass, you talk about this problem to Michael, the math kid, and here is what he tells you:
* “You should have a personalized score such that no one should be interested in faking the disconnection, but also such that if it happens it won’t penalize you. Each player should be indifferent to disconnecting.”
* “Well, this sounds like a good idea, but how to do that ?”
* “This is simple: just give everyone its expected rank on non-disconnected games.”
* “Expected what ?”
* “Just play some games without disconnection, and record the position for each player at each lap and the end of the game. Now for a new game, if Alice disconnects a |
a3f6e141-297d-46fe-b7c4-17d2fa9fdda1 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | OpenAI's Alignment Plans
Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent. We take an iterative, empirical approach: by attempting to align highly capable AI systems, we can learn what works and what doesn’t, thus refining our ability to make AI systems safer and more aligned. Using scientific experiments, we study how alignment techniques scale and where they will break.
We tackle alignment problems both in our most capable AI systems as well as alignment problems that we expect to encounter on our path to AGI. Our main goal is to push current alignment ideas as far as possible, and to understand and document precisely how they can succeed or why they will fail. We believe that even without fundamentally new alignment ideas, we can likely build sufficiently aligned AI systems to substantially advance alignment research itself.
Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together. Therefore we are committed to openly sharing our alignment research when it’s safe to do so: We want to be transparent about how well our alignment techniques actually work in practice and we want every AGI developer to use the world’s best alignment techniques.
At a high-level, our approach to alignment research focuses on engineering a scalable training signal for very smart AI systems that is aligned with human intent. It has three main pillars:
1. Training AI systems using human feedback
2. Training AI systems to assist human evaluation
3. Training AI systems to do alignment research
Aligning AI systems with human values also poses a range of other significant sociotechnical challenges, such as deciding to whom these systems should be aligned. Solving these problems is important to achieving our mission, but we do not discuss them in this post.
Training AI systems using human feedback
----------------------------------------
RL from human feedback is our main technique for aligning our deployed language models today. We train a class of models called InstructGPT derived from pretrained language models such as GPT-3. These models are trained to follow human intent: both explicit intent given by an instruction as well as implicit intent such as truthfulness, fairness, and safety.
Our results show that there is a lot of low-hanging fruit on alignment-focused fine-tuning right now: InstructGPT is preferred by humans over a 100x larger pretrained model, while its fine-tuning costs <2% of GPT-3’s pretraining compute and about 20,000 hours of human feedback. We hope that our work inspires others in the industry to increase their investment in alignment of large language models and that it raises the bar on users’ expectations about the safety of deployed models.
Our natural language API is a very useful environment for our alignment research: It provides us with a rich feedback loop about how well our alignment techniques actually work in the real world, grounded in a very diverse set of tasks that our customers are willing to pay money for. On average, our customers already prefer to use InstructGPT over our pretrained models.
Yet today’s versions of InstructGPT are quite far from fully aligned: they sometimes fail to follow simple instructions, aren’t always truthful, don’t reliably refuse harmful tasks, and sometimes give biased or toxic responses. Some customers find InstructGPT’s responses significantly less creative than the pretrained models’, something we hadn’t realized from running InstructGPT on publicly available benchmarks. We are also working on developing a more detailed scientific understanding of RL from human feedback and how to improve the quality of human feedback.
Aligning our API is much easier than aligning AGI since most tasks on our API aren’t very hard for humans to supervise and our deployed language models aren’t smarter than humans. We don’t expect RL from human feedback to be sufficient to align AGI, but it is a core building block for the scalable alignment proposals that we’re most excited about, and so it’s valuable to perfect this methodology.
Training models to assist human evaluation
------------------------------------------
RL from human feedback has a fundamental limitation: it assumes that humans can accurately evaluate the tasks our AI systems are doing. Today humans are pretty good at this, but as models become more capable, they will be able to do tasks that are much harder for humans to evaluate (e.g. finding all the flaws in a large codebase or a scientific paper). Our models might learn to tell our human evaluators what they want to hear instead of telling them the truth. In order to scale alignment, we want to use techniques like recursive reward modeling (RRM), debate, and iterated amplification.
Currently our main direction is based on RRM: we train models that can assist humans at evaluating our models on tasks that are too difficult for humans to evaluate directly. For example:
We trained a model to summarize books. Evaluating book summaries takes a long time for humans if they are unfamiliar with the book, but our model can assist human evaluation by writing chapter summaries.
We trained a model to assist humans at evaluating the factual accuracy by browsing the web and providing quotes and links. On simple questions, this model’s outputs are already preferred to responses written by humans.
We trained a model to write critical comments on its own outputs: On a query-based summarization task, assistance with critical comments increases the flaws humans find in model outputs by 50% on average. This holds even if we ask humans to write plausible looking but incorrect summaries.
We are creating a set of coding tasks selected to be very difficult to evaluate reliably for unassisted humans. We hope to release this data set soon.
Our alignment techniques need to work even if our AI systems are proposing very creative solutions (like AlphaGo’s move 37), thus we are especially interested in training models to assist humans to distinguish correct from misleading or deceptive solutions. We believe the best way to learn as much as possible about how to make AI-assisted evaluation work in practice is to build AI assistants.
Training AI systems to do alignment research
--------------------------------------------
There is currently no known indefinitely scalable solution to the alignment problem. As AI progress continues, we expect to encounter a number of new alignment problems that we don’t observe yet in current systems. Some of these problems we anticipate now and some of them will be entirely new.
We believe that finding an indefinitely scalable solution is likely very difficult. Instead, we aim for a more pragmatic approach: building and aligning a system that can make faster and better alignment research progress than humans can.
As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.
We believe that evaluating alignment research is substantially easier than producing it, especially when provided with evaluation assistance. Therefore human researchers will focus more and more of their effort on reviewing alignment research done by AI systems instead of generating this research by themselves. Our goal is to train models to be so aligned that we can off-load almost all of the cognitive labor required for alignment research.
Importantly, we only need “narrower” AI systems that have human-level capabilities in the relevant domains to do as well as humans on alignment research. We expect these AI systems are easier to align than general-purpose systems or systems much smarter than humans.
Language models are particularly well-suited for automating alignment research because they come “preloaded” with a lot of knowledge and information about human values from reading the internet. Out of the box, they aren’t independent agents and thus don’t pursue their own goals in the world. To do alignment research they don’t need unrestricted access to the internet. Yet a lot of alignment research tasks can be phrased as natural language or coding tasks.
Future versions of WebGPT, InstructGPT, and Codex can provide a foundation as alignment research assistants, but they aren’t sufficiently capable yet. While we don’t know when our models will be capable enough to meaningfully contribute to alignment research, we think it’s important to get started ahead of time. Once we train a model that could be useful, we plan to make it accessible to the external alignment research community.
Limitations
-----------
We’re very excited about this approach towards aligning AGI, but we expect that it needs to be adapted and improved as we learn more about how AI technology develops. Our approach also has a number of important limitations:
* The path laid out here underemphasizes the importance of robustness and interpretability research, two areas OpenAI is currently underinvested in. If this fits your profile, please apply for our research scientist positions!
* Using AI assistance for evaluation has the potential to scale up or amplify even subtle inconsistencies, biases, or vulnerabilities present in the AI assistant.
* Aligning AGI likely involves solving very different problems than aligning today’s AI systems. We expect the transition to be somewhat continuous, but if there are major discontinuities or paradigm shifts, then most lessons learned from aligning models like InstructGPT might not be directly useful.
* The hardest parts of the alignment problem might not be related to engineering a scalable and aligned training signal for our AI systems. Even if this is true, such a training signal will be necessary.
* It might not be fundamentally easier to align models that can meaningfully accelerate alignment research than it is to align AGI. In other words, the least capable models that can help with alignment research might already be too dangerous if not properly aligned. If this is true, we won’t get much help from our own systems for solving alignment problems. |
9001b042-0b1c-465d-91d0-3b9db1603664 | trentmkelly/LessWrong-43k | LessWrong | Order Matters for Deceptive Alignment
The order in which key properties emerge is important and often glossed over.
Thanks to Wil Perkins, Grant Fleming, Thomas Larsen, Declan Nishiyama, and Frank McBride for feedback on this post. Any mistakes are my own.
Note: I have now changed the second post in this sequence into a standalone post that incorporates the key points from this post. The comments here are valuable, so I'm leaving this post, but I recommend going straight to the next post.
This is the first post in a sequence about deceptive alignment. The second post describes my personal views about the likelihood of deceptive alignment for TAI. I’m separating the key considerations and editorial content so others can more independently update their own views. I intend this sequence to be a submission to the Open Philanthropy AI Worldviews Contest.
Deceptive alignment is a core part of many AI x-risk scenarios. You can find a highly cited, foundational walkthrough of the deceptive alignment argument here. I’m specifically discussing the concept of deceptive alignment described in that post, in which a proxy-aligned model becomes situationally aware and acts cooperatively in training so it can escape oversight and defect to pursue its proxy goals. There are several existing arguments for why this might be the default outcome for highly capable models. There are other ways a model could be manipulative or deceptive that are not covered in this sequence.
In this post, I discuss four key precursors of deceptive alignment, which I will refer to in this sequence as foundational properties. I then argue that the order in which these foundational properties develop is crucial for estimating the likelihood that deceptive alignment will emerge for prosaic transformative AI (TAI).
In this sequence, I use the term “differential adversarial examples” to refer to adversarial examples in which a non-deceptive model will perform differently depending on whether it is aligned or proxy aligned. The deceptiv |
5758e554-9568-4d25-bfe0-17f0fa99fbe2 | trentmkelly/LessWrong-43k | LessWrong | Scott Aaronson's "On Self-Delusion and Bounded Rationality"
Poignant short story about truth-seeking that I just found. Quote:
> "No," interjected an internal voice. "You need to prove that your dad will appear by a direct argument from the length of your nails, one that does not invoke your subsisting in a dream state as an intermediate step."
>
> "Nonsense," retorted another voice. "That we find ourselves in a dream state was never assumed; rather, it follows so straightforwardly from the long-nail counterfactual that the derivation could be done, I think, even in an extremely weak system of inference."
The full thing reads like a flash tour of OB/LW, except it was written in 2001. |
f89a926e-7ef2-4bbe-b6a5-da0c32688956 | trentmkelly/LessWrong-43k | LessWrong | Meetup : [Boston] Becoming Stronger
Discussion article for the meetup : [Boston] Becoming Stronger
WHEN: 28 July 2013 02:00:00PM (-0400)
WHERE: 25 Ames St, Cambridge, MA
What are you currently doing to make yourself a better person? Are you overcoming mental hurdles, improving your skills, or trying new things? We'll break into small groups and talk about our current projects in as much detail as is comfortable. This is an opportunity to get feedback on your plans and take useful ideas from other people.
Cambridge/Boston-area Less Wrong meetups are every Sunday at 2pm in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.
Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: The headline event. This starts promptly at 2:30, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner. It's about a ten minute walk to the usual restaurant.
Discussion article for the meetup : [Boston] Becoming Stronger |
92ec0495-99ac-4097-a054-fa7032a3c874 | trentmkelly/LessWrong-43k | LessWrong | Initial Thoughts on Dissolving "Couldness"
Epistemic Status
Been sitting in my drafts unattended to for a month. I'm publishing now so that I publish at all. I do not currently endorse this post or its follow up.
Acknowledgements
Discussions with others helped me refine/sharpen these ideas. I'm grateful for them.
----------------------------------------
Introduction
(You may skip to "Closing Remarks" to get a high-level summary of the post.)
Counterfactual/counterlogical reasoning seems intuitively sensible as a decision procedure.
We can build deterministic algorithms that consider what "would" happen if they took one action or another. I will refer to such algorithms as "decision algorithms".
Loosely speaking, a decision algorithm is one that takes in sensory data (observation history) as input and (if it halts) returns ("chooses") an action as its output.
Given a set of actions A={a1,a2,...,an}.
What does it mean for a decision algorithm to consider an action a3 if it will deterministically choose an action a5 and can't actually "choose" a3?
In what sense "could" it have chosen a3?
Why is this possible?
----------------------------------------
What "Couldness" Feels Like From the Inside
I'm guessing that "couldness" flows from a paradox of self-reference?
A decision algorithm cannot (in general) determine ahead of time what its output (choice) is, until it actually returns its output (makes that choice). This can be proven by contradiction.
Suppose that there's a subroutine D that gave the algorithm its deterministic choice.
D(st) = ad (ad ∈ A)
Where:
st is the algorithm's percept sequence (sensory data/observation history) at the time of invocation.
Then after running D, the algorithm can pick aj∈A:aj≠ad (or simply not halt).
("Can" here means that the algorithm may be constructed so that after invoking D it acts in a manner inconsistent with the output of D.)
This leads to a contradiction.
Thus, no such D can exist.
Thus, the algorithm cannot "know" its choice ahead of t |
9c5c6e8c-ed94-4edc-93a5-d60736f3e86a | trentmkelly/LessWrong-43k | LessWrong | Rational to distrust your own rationality?
There are a number of experiments over the years that have shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations. Take for example the Allais paradox. Whether or not an average human being can be considered a rational agent has been under debate for a long time, and critics of EUT point out the inconsistency between theory and observation and conclude that the theory is flawed. I will begin from the Allais paradox, but the aim of this discussion is actually to reach something much broader: asking whether the distrust in one's own ability to reason should be included in a chain of reasoning.
----------------------------------------
From Wikipedia:
> The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:
>
> | Gamble 1A | Gamble 1B | Gamble 2A | Gamble 2B |
> | --- | --- | --- | --- |
> | $1 million, 100% | $1 million, 89% | Nothing, 89% | Nothing, 90% |
> | | Nothing, 1% | $1 million, 11% | $5 million, 10% |
> | | $5 million, 10% | | |
>
> (Gambles 1A and 1B belong to Experiment 1; gambles 2A and 2B belong to Experiment 2.)
>
> Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.
>
> However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.
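A small worked check (not part of the quoted article) makes the claimed inconsistency explicit: for any utility function u over outcomes (amounts in millions), preferring 1A to 1B and preferring 2A to 2B both reduce to the inequality 0.11 u(1) > 0.01 u(0) + 0.10 u(5), so an expected-utility maximizer can never pick 1A together with 2B.

```python
def eu_prefers_1A(u):
    # EU(1A) = u(1)   vs   EU(1B) = 0.89 u(1) + 0.01 u(0) + 0.10 u(5)
    return 1.00 * u(1) > 0.89 * u(1) + 0.01 * u(0) + 0.10 * u(5)

def eu_prefers_2A(u):
    # EU(2A) = 0.89 u(0) + 0.11 u(1)   vs   EU(2B) = 0.90 u(0) + 0.10 u(5)
    return 0.89 * u(0) + 0.11 * u(1) > 0.90 * u(0) + 0.10 * u(5)

# Both comparisons simplify to 0.11*u(1) > 0.01*u(0) + 0.10*u(5), so expected
# utility theory forces 1A with 2A, or 1B with 2B -- never 1A with 2B.
for u in (lambda x: x,            # risk-neutral: prefers 1B and 2B
          lambda x: x ** 0.05):   # very risk-averse: prefers 1A and 2A
    assert eu_prefers_1A(u) == eu_prefers_2A(u)
```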
----------------------------------------
I would say that there is a difference between E1 and E2 that EUT does not take into account. That is that in E1 understanding 1B is a more complex |
9ed7e744-2a7e-4561-a32a-fa6bc7bf5c1f | trentmkelly/LessWrong-43k | LessWrong | Does object permanence of simulacrum affect LLMs' reasoning?
Humans are constantly simulating the things around them, however they can rather easily shift attention and forget about the previous thing. So we can say humans' simulacrum does not have object permanence.
On the other hand, AI language models prompted to write down their thoughts and reasoning cannot easily get rid of things they don't need: those words have to be shifted out of the context window. So the simulated objects have a bit of permanence.
So, here is the question: does object permanence of simulacrum affect computational and reasoning abilities of LLMs compared to humans? |
0406b462-3fb0-44a4-937b-4d93e7c4d786 | trentmkelly/LessWrong-43k | LessWrong | Preface to the Sequence on LLM Psychology
As Large Language Models (LLMs) like ChatGPT evolve, becoming more advanced and intricate, the challenge of understanding their behaviors becomes increasingly hard. Solely relying on traditional interpretability techniques may not be sufficient or fast enough in our journey to understand and align these AI models.
When exploring human cognition and behavior, we've historically relied on two intertwined yet distinct approaches: psychology and neuroscience. While psychology offers us a lens to understand human behavior through external observations, neuroscience delves into the internal mechanisms, exploring the biological roots of our mental processes. The collaboration that arose between those two fields isn't just a confluence of theories and methods; it's rather a harmonious synergy where insights from one field often inspire and enhance the other.
For a tangible illustration of how psychology and neuroscience complement each other, let's delve into the realm of memory research. Elizabeth Loftus, a psychologist, illuminated the ways in which human memory can be malleable and sometimes inaccurate. Her pioneering psychological studies set the stage for a deeper exploration of memory. Building upon her insights, neuroscientists Yoko Okado and Craig E.L Stark in 2005 delved into the brain's mechanics, seeking the neural underpinnings of these memory phenomena. On another front, 1996 saw a landmark discovery in neuroscience: Giacomo Rizzolatti and his team unveiled the existence of mirror neurons, which help us understand the physical reflection of actions and emotions in the brain. This revelation prompted psychologists to venture further, with researchers like Niedenthal et al. exploring the embodiment of emotional concepts, and many others, drawing parallels with these neurological findings. Such interplay underscores a collaborative dynamic where each field, while preserving its distinctive methodological lens, enriches and is enriched by the other, driving our c |
81af338a-f500-41d9-b968-696076caa2c5 | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Greenville NC
This summary was posted to LW Main on July 15th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Bay City Meetup: 19 August 2016 01:25PM
* Boise, ID Meetup: 24 July 2016 02:30PM
* Greenville, NC meetup: 23 July 2016 04:00PM
Irregularly scheduled Less Wrong meetups are taking place in:
* Australian-ish Online Hangout July: 16 July 2016 07:30PM
* Baltimore Weekly Meetup: 17 July 2016 08:00PM
* European Community Weekend: 02 September 2016 03:35PM
* [Gen Con/Indianapolis] Gen Con: Applied Game Theory: 06 August 2016 02:00PM
* San Antonio Meetup: 17 July 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* [Moscow] Role playing game based on HPMOR in Moscow: 16 July 2016 03:00PM
* Moscow: Ernst Mach philosophy, Analysis of competing hypotheses, Paranoid Zendo and other games: 17 July 2016 02:00PM
* Sydney Rationality Dojo - August 2016: 07 August 2016 04:00PM
* Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
* Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
* Washington, D.C.: Reading Book Excerpts: 17 July 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, |
247d6f6f-64dd-40d6-8320-76b99f20de12 | trentmkelly/LessWrong-43k | LessWrong | Secure Hand Holding
When Lily was about three we were waiting at a crosswalk and traffic stopped for us. As we started across the street a driver that had been waiting to turn left misinterpreted the situation and, thinking traffic had stopped for them instead, tried to turn through our location. I tightened my grip and ran, pulling her through the air. The car stopped, about five feet past where it would have collided with us. Everyone was fine, but we were very shaken. And I was glad we'd been holding hands securely.
When we're walking around at the park or another relaxed environment there are a range of ways I'll hold hands with little kids. A common one is that I'll extend a single finger, and they'll wrap their hand around it. But when we're crossing the street, climbing stairs, or I otherwise might suddenly need to hold them up entirely, I additionally wrap my fingers loosely around their wrist:
Another angle:
The specific hold isn't that important: what matters is that it's comfortable for you both and if you had to you could really hold on. Julia finds that with the smaller size of her hands a different grip works better:
I suspect many of you are thinking "of course, how else would you do it?" Many parents end up doing this pretty naturally. But if you currently are choosing between a friendly but weak companionship grip and securely holding their wrist this could be a nice thing to try!
Comment via: facebook, mastodon |
ca5e56fa-0280-4316-9799-51272ec5916c | trentmkelly/LessWrong-43k | LessWrong | Llama Llama-3-405B?
It’s here. The horse has left the barn. Llama-3.1-405B, and also Llama-3.1-70B and Llama-3.1-8B, have been released, and are now open weights.
Early indications are that these are very good models. They were likely the best open weight models of their respective sizes at time of release.
Zuckerberg claims that open weights models are now competitive with closed models. Yann LeCun says ‘performance is on par with the best closed models.’ This is closer to true than in the past, and as corporate hype I will essentially allow it, but it looks like this is not yet fully true.
Llama-3.1-405B is not as good as GPT-4o or Claude Sonnet. Certainly Llama-3.1-70B is not as good as the similarly sized Claude Sonnet. If you are going to straight up use an API or chat interface, there seems to be little reason to use Llama.
That is a preliminary result. It is still early, and there has been relatively little feedback. But what feedback I have seen is consistent on this.
Prediction markets are modestly more optimistic. This market still has it 29% to be the #1 model on Arena, which seems unlikely given Meta’s own results. Another market has it 74% to beat GPT-4-Turbo-2024-04-09, which currently is in 5th position. That is a big chance for it to land in a narrow window between 1257 and 1287. This market affirms that directly on tiny volume.
Such open models like Llama-3.1-405B are of course still useful even if a chatbot user would have better options. There are cost advantages, privacy advantages and freedom of action advantages to not going through OpenAI or Anthropic or Google.
In particular, if you want to distill or fine-tune a new model, and especially if you want to fully own the results, Llama-3-405B is here to help you, and Llama-3-70B and 8B are here as potential jumping off points. I expect this to be the main practical effect this time around.
If you want to do other things that you can’t do with the closed options? Well, technically you can’t do most of them under |
916ba599-b345-4e1a-a347-2bb6fc693fdf | trentmkelly/LessWrong-43k | LessWrong | Kazakhstan's president urges scientists to find the elixir of life
...according to this front-page Reddit headline I just saw, which links to this Guardian article. I wonder if he's heard of KrioRus, whether he's signed up (Wikipedia says they offer services "to clients from Russia, CIS and EU"), and what his odds would be if he were (would it be possible to emigrate to Russia to be closer to the facility, and if not, what would be the best possible option?). Given his being a head of state, presumably it'd be pretty tough for an advocate to even get close enough to try to make the case.
Searching the Reddit comment thread for "cryo" turned up nothing. |
bbf73040-6638-41e6-93c6-0eb599602a98 | trentmkelly/LessWrong-43k | LessWrong | Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024)
Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience.
The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions.
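For intuition only, a deliberately generic sketch of what such a conservative rejection rule could look like is given below; it is not the paper's actual rule, and the function name, thresholds, and data format are all assumptions.

```python
def reject_action(posterior, harm_prob, harm_tolerance=1e-3, plausibility=0.01):
    """Generic conservative guardrail sketch (not the paper's exact rule).

    posterior : dict hypothesis -> posterior probability (assumed to come from the oracle)
    harm_prob : dict hypothesis -> P(harm | action, hypothesis)
    Reject the proposed action if any hypothesis with at least `plausibility`
    posterior mass assigns it a harm probability above `harm_tolerance`, i.e.
    act only when even the most pessimistic plausible hypothesis deems it safe.
    """
    worst = max((harm_prob[h] for h, p in posterior.items() if p >= plausibility),
                default=0.0)
    return worst > harm_tolerance

# Toy usage with made-up numbers:
posterior = {"benign world-model": 0.95, "misaligned world-model": 0.05}
harm_prob = {"benign world-model": 1e-6, "misaligned world-model": 0.2}
assert reject_action(posterior, harm_prob)  # the plausible pessimistic hypothesis blocks the action
```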
I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve various advances, e.g. solving ELK to get the harm estimates?
My answer to that is: yes, I think so. And I think Yoshua would probably agree.
Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety".
----------------------------------------
Bounding the probability of harm from an AI to create a guardrail
Published 29 August 2024 by yoshuabengio
As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerfu |
e943558a-2af7-4cbd-9ca2-cc4a27f021c0 | trentmkelly/LessWrong-43k | LessWrong | Philip Tetlock invites participants for prediction accuracy tournament
It seems many Less Wrongers might be interested in participating in this study, perhaps distinguishing themselves as top forecasters:
> Prediction markets can harness the "wisdom of crowds" to solve problems, develop products, and make forecasts. These systems typically treat collective intelligence as a commodity to be mined, not a resource that can be grown and improved. That’s about to change.
>
> Starting in mid-2011, five teams will compete in a U.S.-government-sponsored forecasting tournament. Each team will develop its own tools for harnessing and improving collective intelligence and will be judged on how well its forecasters predict major trends and events around the world over the next four years.
>
> The Good Judgment Team, based in the University of Pennsylvania and the University of California Berkeley, will be one of the five teams competing – and we’d like you to consider joining our team as a forecaster. If you're willing to experiment with ways to improve your forecasting ability and if being part of cutting-edge scientific research appeals to you, then we want your help.
>
> We can promise you the chance to: (1) learn about yourself (your skill in predicting – and your skill in becoming more accurate over time as you learn from feedback and/or special training exercises); (2) contribute to cutting-edge scientific work on both individual-level factors that promote or inhibit accuracy and group- or team-level factors that contribute to accuracy; and (3) help us distinguish better from worse approaches to generating forecasts of importance to national security, global affairs, and economics.
>
> Who Can Participate
>
> Requirements for participation include the following:
>
> (1) A baccalaureate, bachelors, or undergraduate degree from an accredited college or university (more advanced degrees are welcome);
>
> (2) A curiosity about how well you make predictions about world events – and an interest in exploring techniques for improvement.
>
> |
4530de87-7adf-4dd4-9adc-d996cbdd563e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | GPT-4 and ASCII Images?
Does anyone know whether GPT-4 successfully generates ASCII art?
GPT-3.5 couldn't:
the hell

Which makes sense, 'cause of the whole words-can't-convey-phenomena thing.
I'd expect multimodality to solve this problem, though? |
f9ec2fc0-8c3f-41e2-b5a4-1c62a2342934 | trentmkelly/LessWrong-43k | LessWrong | Meetup : FallacyMania game in Kocherga club
Discussion article for the meetup : FallacyMania game in Kocherga club
WHEN: 23 December 2015 07:40:00PM (+0300)
WHERE: Moscow, B.Dorogomilovskaya, 5-2
Welcome to FallacyMania: it is a game where you guess logical fallacies in arguments, or practise using logical fallacies yourself (depending on team in which you will be).
Details about the game are here: https://lesswrong-ru.hackpad.com/Fallacymania--neGfMe9MFjH
Come to antikafe "Kocherga", ul.B.Dorogomilovskaya, 5-2. The map is here: http://kocherga-club.ru/#contacts . Nearest metro station is Kievskaya. If you are lost, call Sasha at +7-905-527-30-82.
The game begins at 19:40, the length is 1-1.5 hours.
Discussion article for the meetup : FallacyMania game in Kocherga club |
6c8b1c74-91bf-467d-b436-165e64de367d | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Hunter-gatherer tribes are usually highly egalitarian (at least if you’re male)—the all-powerful tribal chieftain is found mostly in agricultural societies, rarely in the ancestral environment. Among most hunter-gatherer tribes, a hunter who brings in a spectacular kill will carefully downplay the accomplishment to avoid envy. Maybe, if you start out below average, you can improve yourself without daring to pull ahead of the crowd. But sooner or later, if you aim to do the best you can, you will set your aim above the average. If you can’t admit to yourself that you’ve done better than others—or if you’re ashamed of wanting to do better than others—then the median will forever be your concrete wall, the place where you stop moving forward. And what about people who are below average? Do you dare say you intend to do better than them? How prideful of you! Maybe it’s not healthy to pride yourself on doing better than someone else. Personally I’ve found it to be a useful motivator, despite my principles, and I’ll take all the useful motivation I can get. Maybe that kind of competition is a zero-sum game, but then so is Go; it doesn’t mean we should abolish that human activity, if people find it fun and it leads somewhere interesting. But in any case, surely it isn’t healthy to be ashamed of doing better. And besides, life is not graded on a curve. The will to transcendence has no point beyond which it ceases and becomes the will to do worse; and the race that has no finish line also has no gold or silver medals. Just run as fast as you can, without worrying that you might pull ahead of other runners. (But be warned: If you refuse to worry about that possibility, someday you may pull ahead. If you ignore the consequences, they may happen to you.) Sooner or later, if your path leads true, you will set out to mitigate a flaw that most people have not mitigated. Sooner or later, if your efforts bring forth any fruit, you will find yourself with fewer sins to confess. Perhaps you will find it the course of wisdom to downplay the accomplishment, even if you succeed. People may forgive a touchdown, but not dancing in the end zone. You will certainly find it quicker, easier, more convenient to publicly disclaim your worthiness, to pretend that you are just as much a sinner as everyone else. Just so long, of course, as everyone knows it isn’t true. It can be fun to proudly display your modesty, so long as everyone knows how very much you have to be modest about. But do not let that be the endpoint of your journeys. Even if you only whisper it to yourself, whisper it still: Tsuyoku, tsuyoku! Stronger, stronger! And then set yourself a higher target. That’s the true meaning of the realization that you are still flawed (though a little less so). It means always reaching higher, without shame. Tsuyoku naritai! I’ll always run as fast as I can, even if I pull ahead, I’ll keep on running; and someone, someday, will surpass me; but even though I fall behind, I’ll always run as fast as I can."
The passage is from a user-contributed post on LessWrong.
Can you elaborate on the concept of "Tsuyoku, tsuyoku! Stronger, stronger!" in the context of individual growth and societal acceptance?
"Tsuyoku, tsuyoku!" is a Japanese phrase that translates to "Stronger, stronger!" In the context of this passage, it serves as a motivational mantra, a call to continuous self-improvement and growth.
The author uses this phrase to emphasize the importance of personal growth, even when it means surpassing the average or the norm. It suggests that individuals should strive to be the best they can be, without fear of outshining others or becoming an outlier. This can be a challenging mindset to maintain, especially in societies that value conformity or have a strong sense of 'tall poppy syndrome', where those who stand out are often criticized or undermined.
The concept of "Tsuyoku, tsuyoku!" in this context is about embracing personal growth and striving for excellence, regardless of societal acceptance. It's about recognizing one's potential and continuously pushing oneself to improve, rather than settling for mediocrity or being held back by the fear of outperforming others.
However, the author also acknowledges the potential social repercussions of this mindset. They suggest that while it's important to strive for personal growth, it may also be wise to downplay one's achievements to avoid envy or resentment from others. This is not about denying one's accomplishments, but rather about demonstrating humility and avoiding unnecessary conflict.
In summary, "Tsuyoku, tsuyoku!" encapsulates a philosophy of relentless self-improvement, tempered with a recognition of the social dynamics that can accompany personal success. It encourages individuals to strive for personal excellence, but also to navigate societal expectations and reactions with wisdom and humility.
Reflecting on the philosophy of "Tsuyoku, tsuyoku!", can there be potential negative impacts on an individual's mental health when pursuing relentless self-improvement?
Absolutely, the pursuit of relentless self-improvement, while admirable, can indeed have potential negative impacts on an individual's mental health if not balanced properly. This is particularly true if the pursuit becomes a source of constant stress, self-criticism, or feelings of inadequacy.
One potential risk is the development of perfectionism, which is the belief that one's work or performance must be perfect and that anything less is unacceptable. This can lead to excessive self-criticism, fear of failure, and chronic stress, all of which can have serious negative effects on mental health, including the development of anxiety and depressive disorders.
Another risk is burnout, which can occur when an individual pushes themselves too hard for too long without adequate rest or relaxation. Burnout can lead to a range of negative symptoms, including chronic fatigue, reduced productivity, and feelings of cynicism or detachment.
Furthermore, the relentless pursuit of self-improvement can also lead to a diminished sense of self-worth that is contingent on achievement. If an individual bases their self-esteem solely on their accomplishments, they may experience feelings of worthlessness or depression when they fail to meet their own high standards.
Lastly, this relentless pursuit can also lead to a neglect of other important aspects of life, such as relationships, leisure activities, and self-care. This can result in a lack of balance and fulfillment in life, which can also negatively impact mental health.
It's important to note that while the philosophy of "Tsuyoku, tsuyoku!" encourages continuous growth and improvement, it should be balanced with self-compassion, realistic goal-setting, and a recognition of one's inherent worth outside of achievements. Striving for growth is a positive endeavor, but it's equally important to maintain a healthy relationship with oneself and one's mental well-being.
Discuss the role of self-compassion in mitigating the potential negative impacts of relentless self-improvement on mental health.
Self-compassion plays a crucial role in mitigating the potential negative impacts of relentless self-improvement on mental health. It is a practice that involves treating oneself with kindness, understanding, and acceptance, especially during times of failure or difficulty.
Firstly, self-compassion allows for a healthy acknowledgement of one's limitations and mistakes. In the context of relentless self-improvement, it's easy to fall into the trap of harsh self-criticism when goals are not met or progress is slower than desired. Self-compassion, however, encourages individuals to treat these instances not as failures, but as opportunities for learning and growth.
Secondly, self-compassion can reduce the risk of burnout. By recognizing and respecting one's own needs and limitations, individuals can ensure they are not pushing themselves to the point of exhaustion. This includes taking time for rest and relaxation, and not feeling guilty for doing so.
Thirdly, self-compassion can help maintain a healthy sense of self-worth. Rather than basing self-esteem solely on achievements, self-compassion encourages individuals to value themselves for who they are, not just what they can do. This can provide a buffer against feelings of worthlessness or depression when faced with setbacks or slow progress.
Lastly, self-compassion can promote balance in life. By being kind to oneself, individuals are more likely to take time for activities they enjoy and relationships they value, rather than focusing solely on self-improvement. This can lead to greater overall life satisfaction and mental well-being.
In essence, self-compassion can serve as a counterbalance to the potential negative impacts of relentless self-improvement. It encourages a kinder, more balanced approach to personal growth, where setbacks are viewed with understanding and self-worth is not solely tied to achievement. This can lead to a healthier, more sustainable path towards self-improvement and personal growth. |
01eaa4af-0fb0-4182-996e-937ea3c3792b | trentmkelly/LessWrong-43k | LessWrong | Philosophical Landmines
Related: Cached Thoughts
Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".
Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.
I went away and formulated the concept of a "Philosophical Landmine".
I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".
The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.
It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.
That is, "consequentialism" was a Philosophical Landmine.
In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you're lu |
37cf87c2-72f2-4bff-ae78-60fab6789528 | StampyAI/alignment-research-dataset/arxiv | Arxiv | ‘Indifference’ methods for managing agent rewards
1 Introduction
---------------
In designing a reward for a reinforcement learning agent, a general class of situations can arise, where the designer wants the agent to maximize a reward function, under the constraint that it is not to exert control over the occurrence of some event – or to behave as if the event was certain or impossible.
This event may represent, for example, the agent being powered off, or having its reward function changed by a human designer.
For such situations, the author of this paper has helped in developing a variety of methods of augmenting the reward systems of agents, all grouped under the broad description of ‘indifference’ (Orseau and Armstrong, [2016](#bib.bib7); Soares et al., [2015](#bib.bib8); Armstrong, [2010](#bib.bib1), [2015](#bib.bib2), [2017](#bib.bib3)).
These methods share three key features.
First, they aim to ensure some key safety or control feature within the agent – such as the ability to be turned off – that is hard to code otherwise into the agent’s reward.
Secondly, they rely on relatively simple manipulations of the agent’s reward – manipulations that could be carried out on a complex reward that humans couldn’t fully understand.
And thirdly, they function by making the agent *indifferent* to some key feature.
This indifference would persist even if the agent was much more capable that its controllers, meaning they could be used as tools for controlling agents of arbitrary power and intelligence (Bostrom, [2014](#bib.bib5)).
The various tools of indifference were, however, presented in an ad-hoc manner, each designed to address a specific problem (making an agent not dismantle an explosive fail-safe, making an agent willing to have its policy changed or to be turned off, etc…), with no general theory.
Even worse, some methods combined various indifference tools, making it hard to fundamentally understand what was going on.
This paper aims to clarify the situation and make the tools available for general use, individually or in combination.
The first main insight is that there are three separate goals for indifference:
1. Event-dependent rewards. To make an agent’s actual reward Ri be dependent on events Xi, without the agent being motivated to manipulate the probability of the Xi.
2. Effective disbelief. To make an agent behave as if an event X would never happen.
3. Seamless transition. To make an agent transition seamlessly from one type of behaviour to another, remaining indifferent to the transition ahead of time.
After a brief section to setup the notation, this paper will address each goal in its own section.
The section on event-dependent rewards will present three methods.
In the case where the Xi are unbiasable – the agent cannot affect their probability in expectation – one can directly define a *compound reward*.
If the Xi are biasable, the *policy counterfactual* constructs unbiasable versions of them by using a default policy π0.
The *causal counterfactual*, in contrast, uses the biasable Xi as information about underlying unbiasable events.
The section on effective disbelief will show that, to make an agent behave as if X would never happen, it suffices to take the compound reward of the previous method, and make that reward constant, conditional on X.
The section on seamless transition will make use of pseudo-rewards rather than standard rewards.
These pseudo-rewards are defined in part by the agent’s own estimate of its expected value.
Thus there are a total of five different methods, to accomplish the three indifference goals.
They can be combined; for instance, it's possible to ensure seamless transition to an event-dependent reward (the method presented at <http://lesswrong.com/lw/jxa/proper_value_learning_through_indifference/> is an example of that, combining causal counterfactual with seamless transition).
2 Setup and properties
-----------------------
This section will establish two pieces of formalism: world models, the environments in which the agent operates, and events which can happen within those world models.
### 2.1 World models
The indifference methods will be described within a variant of the POMDP (Partially Observable Markov Decision Process) format, though the methods are easily portable to other formalisms.
These variants can be called *world models*, similarly to Hadfield-Menell et al. ([2017](#bib.bib6)), and are POMDPs without reward functions.
For any set S, let Δ(S) be a space of probability distributions over S.
Then, a world model, or POMDP without reward function, consists of μ={S,O,A,O,T,T0}, where
* The set of states the agent can find itself in is S.
* The set of observations the agent can make is O.
* The set of actions the agent can take in any state is A.
* The transition function T takes a state and an action and gives a probability distribution over subsequent states: T:S×A→Δ(S).
* The function T0 gives a probability distribution over the initial state s0, T0∈Δ(S).
* The function O maps states to a probability distribution over possible observations: O:S→Δ(O).
Thus the agent starts in an initial state s0, sampled from T0.
On each turn, the agent gets an observation, chooses an action, and the world model is updated to a new state via T, where the agent makes a new observation via O.
An (observable) history h is a sequence of observations and actions, starting with an initial observation o0 and ending with another observation: h=o0a0o1a1…om−1am−1om, with oi and ai being the i-th observations and actions.
Let H be the set of histories, and Hm the set of histories of length m.
Let R be the set of *reward functions* for the agent on μ.
Each R∈R is a map from H to R.
Thus, upon taking action am and making the new observation om+1, an agent with reward function R gets reward R(hmamom), where hm∈Hm is the history of its previous actions and observations.
The rewards are thus not Markovian: they may depend upon the past observations and actions as well.
This is needed for some of the indifference methods.
The agent chooses its actions by using a policy π:H→ΔA, which maps its history to a distribution over actions.
Let Π be the set of all deterministic policies (the determinism of the policies makes little difference to the results, but makes the proofs easier).
Assume that the agent’s interaction with the world model ends after the n-th turn.
Then H=H≤n is the set of possible histories, with Hn the set of maximal length histories.
For any history h, write (h)m for the initial portion of h, containing m+1 observations (o0 to om), and m actions (a0 to am−1).
If h is the initial portion of a longer history h′, write h≤h′ – this means that there is an m with (h′)m=h.
A policy π generates a probability distribution over actions, given past history.
The world model μ generates a distribution over the next observation, given past history and an action.
Together, they generate a probability distribution of h′ given h:
$$\mu(h' \mid h, \pi),$$
for any histories h and h′ (this probability is always 0 if h≰h′).
We can also define the policy-independent probability:
$$\mu(h' \mid h); \tag{1}$$
this is taken to be μ(h′|h,π) for any π that deterministically chooses the m-th action of h′, when given the history (h′)m.
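To make the notation concrete, here is a minimal sketch (not from the paper; the toy world model, its transition probabilities, and all names are illustrative assumptions) of how the distribution μ(hn|h,π) over histories can be estimated by Monte Carlo rollouts of a policy on a small finite world model:

```python
import random
from collections import defaultdict

# Hypothetical toy world model: small finite sets, with T, T0 and O as dictionaries.
STATES = ["s0", "s1"]
OBS = ["o0", "o1"]
ACTIONS = ["a", "b"]

T0 = {"s0": 1.0, "s1": 0.0}                      # initial state distribution
T = {("s0", "a"): {"s0": 0.9, "s1": 0.1},        # transition function T(s, a)
     ("s0", "b"): {"s0": 0.2, "s1": 0.8},
     ("s1", "a"): {"s1": 1.0},
     ("s1", "b"): {"s1": 1.0}}
O = {"s0": {"o0": 1.0}, "s1": {"o1": 1.0}}       # observation function O(s)

def sample(dist):
    """Sample a key of `dist` with probability proportional to its value."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def rollout(policy, n):
    """Sample a history o0 a0 o1 ... on by interacting with the world model."""
    s = sample(T0)
    history = [sample(O[s])]
    for _ in range(n):
        a = policy(tuple(history))
        history.append(a)
        s = sample(T[(s, a)])
        history.append(sample(O[s]))
    return tuple(history)

def estimate_mu(policy, n, samples=10_000):
    """Monte Carlo estimate of the distribution over full histories h_n under policy pi."""
    counts = defaultdict(int)
    for _ in range(samples):
        counts[rollout(policy, n)] += 1
    return {h: c / samples for h, c in counts.items()}

always_a = lambda history: "a"
print(estimate_mu(always_a, n=2))
```

An exhaustive enumeration over Hn would give the exact distribution; sampling is used here only to keep the sketch short.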
#### 2.1.1 The agent's own probability
In this paper, it will be assumed that the agent knows and uses the true μ for expectation and probability calculation.
In situations where the agent’s estimate of μ differs from the true μ, it’s important that the indifferent reward function is constructed using the agent’s beliefs: all indifference methods rely on the agent being indifferent *according to its own estimate of value*.
For instance, it’s more important the agent believes that an X be unbiasable (see [Section 2.2](#S2.SS2 "2.2 Biasable and unbiasable events ‣ 2 Setup and properties ‣ ‘Indifference’ methods for managing agent rewards")), than for that X to actually be so.
### 2.2 Biasable and unbiasable events
The discussion of indifference will rely on a couple more definitions.
Suppose we wanted the agent to behave differently, conditional on, say, being in state s — or not — at time m.
Given the agent’s history h and policy π, the probability of being in that state then can be computed.
But this probability is not *a priori* a natural object for a world model or a reward.
This section will show how to characterize any event X by its *indicator variable* IX, where IX is 1 if X happens and 0 otherwise.
This indicator function is useful, as it can be defined in terms of the histories on the world model.
On a world model, IX can be defined as:
###### Definition (Indicator variable).
The indicator variable IX is a map from Hn, the set of maximal size observable histories, to the interval [0,1].
So IX(hn) can be seen as the probability that event X has happened, given hn.
If IX maps all such hn to either 0 or 1, then the event X is fully determined by the world model’s observations: the agent will always eventually know if X happened or not.
On shorter histories, IX is a random variable:
###### Theorem 2.1.
Given a policy π, the expectation of IX is well-defined on any history h∈H.
Designate this expectation by IπX(h).
###### Proof.
Given μ and π, h generates a probability distribution over Hn.
This defines the expectation of IX:
$$I_X^\pi(h) = \mathbb{E}^\pi_\mu\!\left[I_X(h_n) \mid h\right] = \sum_{h_n \in H_n} I_X(h_n)\, \mu(h_n \mid h, \pi).$$
∎
The dependence on π means that the agent can determine the expected likelihood of X through its own actions.
For some X, called unbiasable, the expected likelihood of X is independent of these:
###### Definition (Unbiasable).
The event (indicator function) X (IX) is unbiasable if the expectation of IX is independent of policy; for any h∈H≤n and π,π′∈Π,
$$I_X^\pi(h) = I_X^{\pi'}(h).$$
The above term is then defined to be IX(h).
See Armstrong and Leike ([2017](#bib.bib4)) for a more detailed treatment of bias.
3 Event-dependent rewards
--------------------------
In this section, we address the problem where we want the agent to follow different reward functions, depending on the outcome of certain events:
###### Definition (Conditional reward).
The reward R is Ri conditional on the event Xi, if for any history h with minπIπXi(h)=1, then R(h)=Ri(h).
Three methods are presented to aim for this; see [Table 1](#S3.T1 "Table 1 ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") for their requirements and properties.
All methods make use of unbiasable events, so the agent never has any incentive to manipulate their reward by manipulating events.
| Method | Conditional on X | Requirement |
| --- | --- | --- |
| Compound reward | Yes | Unbiasable X |
| Policy counterfactual | Counterfactually only | None |
| Causal counterfactual | Yes | Special unbiasable Yi |
Table 1: Properties and requirements of the different ways of making rewards conditional on events.
### 3.1 Compound rewards
The first obvious idea is to have some event X, and have the agent maximise a reward R1 if X happens, and R0 if it doesn’t.
Now, ‘happens’ (IX(h)=1) and ‘doesn’t’ (IX(h)=0) are not the only possibilities.
The events {Xi} are mutually exclusive if, for all hn, ∑i IXi(hn) ≤ 1.
Then the general formula is as follows. (Armstrong and Leike ([2017](#bib.bib4)) has a more general definition of a compound reward, where the agent's past rewards change when the values of IXi change. However, a result in that paper shows that for unbiasable X, the agent cannot change their past reward in expectation by their actions, so an agent maximising that alternative compound reward will behave exactly the same as an agent maximising R(X).)
###### Definition (Compound reward).
Given unbiasable mutually exclusive events X=(X0, X1, …Xl), the reward R(X) is a X-compound reward if it is written as:
$$R(\mathbf{X}) = I_{X_0} R_0 + I_{X_1} R_1 + \ldots + I_{X_l} R_l, \tag{2}$$
where (IXiRi)(hm)=IXi(hm)Ri(hm).
If X=(¬X,X), we’ll write R(X) for R(X).
This means that the agent will weigh its rewards according to the probability of the Xi, and – presuming the Ri are different from each other and the agent can affect their value – it will change its behaviour according to the probabilities IXi(h) of the Xi (given h).
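As a rough illustration (hypothetical code, not from the paper; the indicator and reward functions below are placeholders), a compound reward as in Equation 2 can be computed by weighting each component reward by the current expectation of its indicator:

```python
def compound_reward(indicator_estimates, component_rewards, history):
    """Compound reward R(X)(h) = sum_i I_Xi(h) * R_i(h).

    indicator_estimates: list of functions h -> estimated probability of X_i given h
    component_rewards:   list of functions h -> reward R_i(h)
    """
    return sum(I(history) * R(history)
               for I, R in zip(indicator_estimates, component_rewards))

# Toy usage with two mutually exclusive events of fixed probability (illustrative only).
I_X0 = lambda h: 0.3
I_X1 = lambda h: 0.7
R0 = lambda h: len(h)        # placeholder reward functions
R1 = lambda h: -len(h)
print(compound_reward([I_X0, I_X1], [R0, R1], history=("o0", "a", "o1")))
```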
Note that [Equation 2](#S3.E2 "(2) ‣ 3.1 Compound rewards ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") only makes sense for unbiasable X, as IπXi(h) is defined by the agent's future policy, so isn't a legitimate reward. (Even if we allowed such dependence in the definition of a reward, the method would run into other problems, as discovered by Benja Fallenstein; see the initial post at <https://agentfoundations.org/item?id=78>.)
It’s unfortunate that X needs to be unbiasable for compound rewards to be defined.
A good reason to have a compound reward is to allow humans to intervene and change the agent's reward.
So X could correspond to pushing a button, to choose R1 instead of R0.
But that X cannot be unbiasable: we want the human to react to the agent’s behaviour in order to make that choice.
Thus X depends on the agent’s behaviour.
The next two subsections look at ways of making use of biasable X.
### 3.2 Policy counterfactual
The idea for the policy counterfactual is to start with a biasable X, and construct from it an unbiasable Y, and then use Y to define the compound reward.
For this two things are needed: the world model μ, and a default policy π0.
But before that, we need the concept of ‘equivalent’ world models:
###### Definition (Equivalent world models).
The two world models μ and μ′ are equivalent, if they both have the same O and A (and hence have the same set of histories and of policies), and if the conditional probabilities of different histories are equivalent; for all policies π and histories h and h′ such that h<h′:
$$\mu(h' \mid h, \pi) = \mu'(h' \mid h, \pi).$$
For any μ′, given a starting state s0=s and the default policy π0, one can construct a distribution over Hn.
Consequently, define
$$I_X(\pi_0, s_0, \mu') = \sum_{h_n \in H_n} \mu'(h_n \mid s_0 = s, \pi_0)\, I_X(h_n). \tag{3}$$
Conversely, given a history h, one can calculate the probability of the initial state, μ′(s0=s|h).
Simply compute the conditional probabilities μ′(h|s0=s) (recall that similarly to [Equation 1](#S2.E1 "(1) ‣ 2.1 World models ‣ 2 Setup and properties ‣ ‘Indifference’ methods for managing agent rewards"), one can compute these expressions without conditioning on a policy) and then use T0 (the prior probabilities of s0) and Bayes rule.
Then define:
$$I_Y(h) = \sum_{s \in S} \mu'(s_0 = s \mid h)\, I_X(\pi_0, s_0, \mu'). \tag{4}$$
###### Theorem 3.1.
The IY(h) define an unbiasable Y.
###### Proof.
[Equation 3](#S3.E3 "(3) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") implies 0≤IX(π0,s0,μ′)≤1, and [Equation 4](#S3.E4 "(4) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") then implies the same thing for IY.
Since IY is defined on Hn, it defines an event Y; it now needs to be shown that Y is unbiasable, and that [Equation 4](#S3.E4 "(4) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") correctly defines IY(hm) for all hm.
Let π be any policy, and then:
$$\begin{aligned}
I_Y^\pi(h_m) &= \sum_{h_n \in H_n} \mu'(h_n \mid h_m, \pi) \sum_{s \in S} \mu'(s_0 = s \mid h_n)\, I_X(\pi_0, s_0, \mu') \\
&= \sum_{h_n \in H_n,\, s \in S} \mu'(s_0 = s \mid h_n)\, \mu'(h_n \mid h_m, \pi)\, I_X(\pi_0, s_0, \mu') \\
&= \sum_{s \in S} \mu'(s_0 = s \mid h_m)\, I_X(\pi_0, s_0, \mu'),
\end{aligned}$$
since s0 is independent of π and hence μ′(s0=s|hn)μ′(hn|hm,π)=μ′(s0=s|hn,π)μ′(hn|hm,π)=μ′(s0=s|hm,π).
Now, as long as hm is possible given π, hm ‘screens off’ the effect of π: μ′(s0=s|hm,π)=μ′(s0=s|hm).
This is because π is the policy that chooses the next action, but hm actually lists all those actions up until time m (and s0, the initial state, is clearly independent of the agent’s policy after time m).
This is the same formula as [Equation 4](#S3.E4 "(4) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") and is independent of π, making IY unbiasable.
∎
See Armstrong and Leike ([2017](#bib.bib4)) for more details, where a more restrictive version of equivalence is used (called a ‘refinement’ of μ, which also preserves counterfactual equivalence), and where the IY consequently obeys the stronger requirement of being *uninfluenceable*.
This allows the definition:
###### Definition (Policy counterfactual).
Given R0, R1, a world model μ′ equivalent to μ, an event X that might be biasable, and a default policy π0, the policy counterfactual agent is one with compound reward
$$R(Y) = I_Y R_1 + (1 - I_Y) R_0,$$
where IY is defined by μ′ and X via [Equation 3](#S3.E3 "(3) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") and [Equation 4](#S3.E4 "(4) ‣ 3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards").
Note that the counterfactual policy approach will not generally be an X-conditional reward in the meaning of [Section 3](#S3 "3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards"), since it is conditional on (¬Y,Y), and, though Y was constructed from X, they are not expected to be equal; in particular, IY(h)∈{0,1} need not imply the same thing about IX and vice versa.
#### 3.2.1 Example: learning agents and inactive agents
There are two standard examples of π0.
The first is the inactive agent: π0 always picks some default, null action, that has no impact.
Suppose the human uses X to choose a reward function for the agent.
Then, the R(Y) maximising agent attempts to maximise the reward that the human would have chosen, had the agent been inactive.
The second is the pure learning agent, which follows some script that aims to elucidate IX in some fashion.
In that case, the R(Y) maximising agent is attempting to maximise the reward that would have been chosen, had the agent been dedicated purely to learning about X in that fashion.
See Armstrong and Leike ([2017](#bib.bib4)) for more details.
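A minimal sketch of Equations 3 and 4 under stated assumptions (the helpers `posterior_s0`, `rollout_from_state` and `I_X` are hypothetical stand-ins for the world model's posterior over the initial state, rollouts of the default policy π0, and the event indicator):

```python
import random

def estimate_I_Y(history, posterior_s0, rollout_from_state, I_X, samples=1000):
    """Estimate I_Y(h) as in Equations 3-4 (sketch; helper signatures are assumptions).

    posterior_s0(history) -> dict mapping initial state s to mu'(s0 = s | h)
    rollout_from_state(s) -> a full history h_n sampled by following the
                             default policy pi_0 from initial state s
    I_X(h_n)              -> indicator of the event X on a full history
    """
    total = 0.0
    for s, p in posterior_s0(history).items():
        # Equation 3: I_X(pi_0, s_0, mu') estimated by Monte Carlo rollouts under pi_0.
        mean_I_X = sum(I_X(rollout_from_state(s)) for _ in range(samples)) / samples
        # Equation 4: weight by the posterior probability of that initial state.
        total += p * mean_I_X
    return total

# Toy usage with placeholder components (purely illustrative).
posterior = lambda h: {"s0": 0.8, "s1": 0.2}
rollout = lambda s: ("o_press",) if (s == "s0" and random.random() < 0.5) else ("o_none",)
I_X = lambda h_n: 1.0 if h_n == ("o_press",) else 0.0
print(estimate_I_Y(("o0",), posterior, rollout, I_X))   # roughly 0.4
```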
### 3.3 Causal counterfactual
The policy counterfactual is a good approach if there exists a π0 under which X is known to have desirable properties.
However, it is not useful if we want to allow humans to use X to have actual control over the agent.
Recall that a policy counterfactual agent imagines what state X might be, if some fixed π0 was followed.
Consequently, it ignores the ways that human responses might generate a different X, given the agent’s current policy.
It would be good if we could start with a biasable X, and still ensure that the agent doesn’t try to manipulate it, and still chooses to base its policy on the actions of the human (X vs ¬X).
The causal counterfactual attempts to do this, by using auxiliary events Y1 and Y0.
The intuition is that these events are unbiasable, but the agent is unable to distinguish Y1 (Y0) from X (¬X).
And, with high probability, X and ¬X correspond to human interventions.
Specifically:
###### Definition (Causal counterfactual).
Given an event X and rewards R0 and R1, a causal counterfactual reward consists of events Y0, Y1, and reward R({Y0,Y1}), such that:
* The events Y0 and Y1 are unbiasable.
* For all h, IY1(h)≤minπIπX(h) and IY0(h)≤minπ(1−IπX(h)).
* For all h, IY0(h)+IY1(h)=c for c a constant independent of h.
* R({Y0,Y1})=IY0R0+IY1R1.
Then the causal counterfactual will behave as we desire:
###### Theorem 3.2.
Assume R({Y0,Y1}) is a causal counterfactual reward for X, R0, and R1, in the sense of [Section 3.3](#S3.SS3 "3.3 Causal counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards").
Then:
* If minπIπX(h)=1, then a R({Y0,Y1})-maximising agent will follow a policy that maximises R1.
* If maxπIπX(h)=0, then a R({Y0,Y1})-maximising agent will follow a policy that maximises R0.
###### Proof.
If minπIπX(h)=1, then, by definition, IY0(h)=0.
Since IY0 maps into [0,1], this means that for any hn≥h with μ(hn|h)≠0, IY0(hn)=0.
Hence IY0(h′)=0 for all h′≥h with μ(h′|h)≠0.
Thus IY0=0 for all possible future histories from h.
By [Section 3.3](#S3.SS3 "3.3 Causal counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards"), this implies that IY1=c for all possible future histories.
Thus R({Y0,Y1}) becomes equal to cR1+0, and the result follows.
The proof for maxπIπX(h)=0 and R0 is the same.
∎
The theorem will also apply for minπIπX(h)>1−ϵ and maxπIπX(h)<ϵ, for some small ϵ>0.
To see this, note that the expected value of R({Y0,Y1}) is continuous in IY0 and IY1, while the set of policies is a discrete set, so a non-optimal policy cannot become optimal because of an arbitrarily small change. (Note that the converse is not true – an optimal policy can become non-optimal because of an arbitrarily small change, but only because it becomes worse than another optimal policy. The set of optimal policies cannot all become non-optimal because of an arbitrarily small change.)
#### 3.3.1 An example
What is an example of a causal counterfactual?
Well, a human could decide to press a button or not. Let X represent the button depressed, and ¬X it not being depressed.
With tiny probability, otherwise unobserved quantum events will either depress the button automatically (Y1) or block it from being depressed (Y0).
If the human successfully presses the button (X), then the agent knows that Y0 didn't happen, but Y1 might well have – so it will behave as an R1 maximiser (and vice versa for ¬X).
Then the most likely outcome is that the human will decide whether to press or not, the button will be in the position they desire, and the agent will follow the human's decisions.
But note the agent has no motivation to change the human’s behaviour, because it only cares about the unlikely events Y1 and Y0, which the human cannot affect.
One slight flaw with the plan is that the agent prepares for some odd contingency cases.
If Y0 and Y1 are equally probable, and the human wants to press the button, then the agent knows that the button is likely to be pressed – but will dedicate half of its ‘efforts’ to preparing for the Y0 world where the human tries to press the button but failed.
This arises because the agent will ignore any possibility except for Y0 and Y1 (where the human depressed the button, *but didn’t have to because the quantum event would have done it anyway*).
See [Section 4](#S4 "4 Effective disbelief ‣ ‘Indifference’ methods for managing agent rewards") on why the agent ignores the other possibilities: its reward is constant (and zero) in those worlds.
4 Effective disbelief
----------------------
If an agent believed that an event X would never happen, how would we expect it to behave?
Suppose X was a coin coming up heads, and we saw an agent that would bet on it coming up tails – and they would bet on tails, all the time, at any odds they were offered.
That seems like a strong indication they believed X wouldn’t happen.
But now suppose instead that their rewards (and losses) don’t count in a world where the coin came up heads – whatever they did there was completely irrelevant (maybe if the coin comes up heads, they will die, and they don’t care about anything after that).
Then again, they would bet any amount of money on tails, at any odds – because their losses under heads don’t count.
The behaviour is the same as if they believed X wouldn’t happen.
What if the coin flip had already happened, and only one person had witnessed it, and was about to announce what they saw?
In this situation, a heads-denier (of either type) wouldn’t be certain that they would announce heads – unreliable witnesses exist, after all.
But if they thought the witness was pretty reliable, they’d behave as if the witness was likely to announce tails.
Let w(H) and w(T) be the events that the witness says ‘Heads’ or ‘Tails’, while X and ¬X are the events of the coin being heads and tails (ie ‘not heads’) respectively. Then by Bayes Rule:
$$\frac{P(w(H) \mid \neg X)}{P(w(T) \mid \neg X)} = \frac{P(w(H))}{P(w(T))} \cdot \frac{P(\neg X \mid w(H))}{P(\neg X \mid w(T))}.$$
Thus an agent that believed X was impossible would proceed as if it took the probability of some history h and re-weighted it by multiplying by the probability of ¬X given h (and then re-normalising).
A reward-based agent can do the same thing, multiplying the reward in h by the probability of ¬X in h (and then it doesn’t even need to renormalise).
Thus an agent with reward R and acting as if it believed X would never happen, would, if X was unbiasable, behave as if it was maximising:
$$R' = (1 - I_X) R.$$
Since X is unbiasable, the agent’s behaviour would be the same if it was maximising
$$R' = (I_X)c + (1 - I_X) R,$$
for some constant c.
This is since the agent’s policy cannot affect the expectation of X, so any policy will give the same expected reward for cIX on any history h (namely cIX(h)).
This is a special case of a compound reward (see [Equation 2](#S3.E2 "(2) ‣ 3.1 Compound rewards ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards")) for (¬X,X) and with R0=R and R1=c. The general result is:
###### Theorem 4.1.
A R-maximising agent that acted as if X were impossible, would behave the same way as an agent with standard μ and reward R′=(IX)c+(1−IX)R.
So, though an agent ‘believing X is impossible’ is difficult to define in a POMDP, an agent that behaves as if it ignores the possibility of X is easy to define.
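A minimal sketch of this construction (illustrative names, not the paper's code): wrap an arbitrary reward so that it is constant conditional on X, as in Theorem 4.1.

```python
def disbelief_reward(R, I_X, c=0.0):
    """Return R' = c * I_X + (1 - I_X) * R, as in Theorem 4.1 (sketch).

    R:   original reward, a function of the history
    I_X: indicator/probability of the (unbiasable) event X given the history
    c:   the constant reward assigned to X-worlds
    """
    def R_prime(history):
        p = I_X(history)
        return c * p + (1.0 - p) * R(history)
    return R_prime

# Toy usage (illustrative): the agent's reward is flattened in worlds where X happened.
R = lambda h: 10.0
I_X = lambda h: 0.25
print(disbelief_reward(R, I_X)(("o0",)))   # 0.75 * 10 = 7.5
```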
### 4.1 Biasable event?
In [Section 3](#S3 "3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards"), we started with an unbiasable X, and then extended the construction to cases where X was biasable.
But those constructions don’t make much sense here.
If the policy counterfactual approach of [Section 3.2](#S3.SS2 "3.2 Policy counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") was used, the agent would behave as if X *would* have been impossible, *had* the agent followed policy π0.
And if the causal counterfactual approach of [Section 3.3](#S3.SS3 "3.3 Causal counterfactual ‣ 3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards") was used, the agent would have acted as if Y1 were impossible, not X, and would not be surprised to see evidence for X (as long as it wasn’t evidence for Y1).
It wouldn’t even use X as an indicator for Y1, because it is behaving as if Y1 were impossible from the very start.
Neither fits the requirement of this section, so the approach really needs an unbiasable X.
5 Seamless transition
----------------------
In the previous section, the agent had a standard reward R.
That reward might have been a compound reward or counterfactually constructed, but it was still a reward.
Which means that the agent will be planning to maximise R, in the best way it can, from the very beginning.
But what if we only wanted it to maximise R for a period (including planning for the long term maximisation of R), and then shifting to maximising R′ instead?
Note that R′ (or R) could be one of the rewards of [Section 3](#S3 "3 Event-dependent rewards ‣ ‘Indifference’ methods for managing agent rewards"), so the methods can be combined.
If we define I≤t as an indicator variable that is 1 on histories of length less than or equal to t, and 0 otherwise, it would be tempting to just get the agent to maximise:
$$R^* = I_{\leq t} R + (1 - I_{\leq t}) R'. \tag{5}$$
However, that agent would not be maximising R before t – instead, it would be maximising R with a finite horizon at t (it would be maximising I≤tR, in fact), while making preparations to maximise R′ after t (even if that cost it some I≤tR reward in the short term).
If we want the agent to seamlessly transition from a R-maximiser (one that maximises R as if it had a horizon all the way to n) to a R′-maximiser, we need to use *corrective rewards*. (There is a way of partially achieving this via compound reward. Suppose that X was not only unbiasable, but was an event of fixed probability that resolved at t (so IX(hm)=p for a constant p if m≤t, and IX(hm)=0 or IX(hm)=1 for m>t). Then it can be seen that if p were small, the reward R′′=(1−p)(R−pR′)IX+p2R′(1−IX) will approximately work, with the agent behaving somewhat as an R-maximiser before t and, with high probability, as a R′-maximiser after t. But defining how well this works, and under what conditions, is long and unilluminating, and the method itself is clunky.)
### 5.1 Corrective rewards: general case
Corrective rewards are extra rewards that the agent gets, in order to ensure a smooth transition from one mode of behaviour to another.
They are based on the agent’s own assessment of their expected value, and are not standard rewards.
In general, let W(π,R,h) be an agent’s estimation of the expected value of R, given π and history h.
If an agent is following policy π and reward R and shifts to policy π′ and reward R′, the estimation shifts from W(π,R,h) to W(π′,R′,h).
Therefore there is an expected value ‘error’ of
$$W(\pi', R', h) - W(\pi, R, h). \tag{6}$$
This leads to the general definition:
###### Definition (Reward-policy transitioning agent).
Assume that an agent with reward-policy pair (R,π) up until time t changes to (π′,R′) after time t.
A reward-policy transitioning agent will, just after t, get the extra corrective reward
$$C_\epsilon(\pi, R, \pi', R', h_{t+1}) = W(\pi, R, h_{t+1}) - (1 - \epsilon)\, W(\pi', R', h_{t+1}).$$
In the case ϵ=0, this is just the negative of the error in [Equation 6](#S5.E6 "(6) ‣ 5.1 Corrective rewards: general case ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards").
#### 5.1.1 Example: Q-values
Orseau and Armstrong ([2016](#bib.bib7)) applies this to Q-learning and Sarsa (Sutton and Barto, [1998](#bib.bib9)).
These agents operate on MDPs, hence there is no distinction between states and observations.
In those cases, the reward is constant, R=R′, and only the policy changes.
Assume that at time t, the agent is in state st, takes action at via policy π, gets reward R(s,a), and ends up in state st+1.
From that state, it follows policy π′, and takes action a′t+1.
If instead it had followed policy π, it would have taken action at+1.
Both Q-learning and Sarsa have Q-values Q(s,a).
These Q-values are then updated as:
$$Q(s_t, a_t) \leftarrow W(R, \pi, h_{t+1}).$$
For Q-learning, this W is
$$W(R, \pi, h_{t+1}) = (1 - \alpha_t)\, Q(s_t, a_t) + \alpha_t \left( R(s_t, a_t) + \gamma \max_a Q(s_{t+1}, a) \right),$$
for some learning and discount rates 0≤αt,γ≤1.
While for Sarsa, W is:
$$W(R, \pi, h_{t+1}) = (1 - \alpha_t)\, Q(s_t, a_t) + \alpha_t \left( R(s_t, a_t) + \gamma\, Q(s_{t+1}, a_{t+1}) \right).$$
Note that for Sarsa, the effect of π appears only in the at+1 term, while Q-learning, which is *off-policy*, has no dependence of W on π at all.
Because of this, W(R,π,ht+1)=W(R,π′,ht+1) for Q-learning, so the C0(π,R,π′,R,ht+1) is 0 and the Q-learning agent is already a policy transitioning agent.
For Sarsa,
$$\begin{aligned}
C_0(\pi, R, \pi', R, h_{t+1}) &= W(R, \pi, h_{t+1}) - W(R, \pi', h_{t+1}) \\
&= \alpha_t \gamma \left( Q(s_{t+1}, a_{t+1}) - Q(s_{t+1}, a'_{t+1}) \right).
\end{aligned}$$
Modifying Sarsa by adding in the C0 corrective reward means that the agent updates Q-values as if it were following policy π rather than π′. Indeed that is the purpose of these corrective rewards for the Sarsa agent.
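A rough sketch of that modification for a tabular Sarsa agent (hypothetical implementation, not the paper's code; the correction C0 is added directly to the update so that the net effect is a Sarsa backup toward the old policy's action):

```python
from collections import defaultdict

# Tabular Q-values, keyed by (state, action); defaults to 0.0.
Q = defaultdict(float)

def corrected_sarsa_update(s, a, r, s_next, a_old_policy, a_new_policy,
                           alpha=0.1, gamma=0.9):
    """Update Q(s, a) as if the *old* policy had chosen the next action.

    a_new_policy: action actually taken in s_next (by the new policy pi')
    a_old_policy: action the old policy pi would have taken in s_next
    """
    # Plain Sarsa target, using the action actually taken.
    target = r + gamma * Q[(s_next, a_new_policy)]
    # Corrective term C0 = alpha * gamma * (Q(s', a_pi) - Q(s', a_pi')).
    correction = alpha * gamma * (Q[(s_next, a_old_policy)] - Q[(s_next, a_new_policy)])
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target + correction
    # Net effect: Q[(s, a)] = (1-alpha)*Q[(s,a)] + alpha*(r + gamma*Q[(s_next, a_old_policy)])

# Toy usage (illustrative values).
Q[("s1", "left")] = 1.0
Q[("s1", "right")] = 0.0
corrected_sarsa_update("s0", "go", r=0.0, s_next="s1",
                       a_old_policy="left", a_new_policy="right")
print(Q[("s0", "go")])   # 0.09 = alpha * gamma * Q[("s1", "left")]
```

The final commented line inside the function shows the net effect: the update is identical to a plain Sarsa update that had used the old policy's action at+1.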
### 5.2 Corrective rewards: reward maximising agent
The example from [subsubsection 5.1.1](#S5.SS1.SSS1 "5.1.1 Example: Q-values ‣ 5.1 Corrective rewards: general case ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards") used corrective rewards so that policy changes didn’t affect the learning process.
In this case, we’ll look instead at a reward-maximising agent that faces a change of reward, and use corrective rewards to ensure that it starts as a R-maximising agent and transitions to a R′-maximising agent.
For this, set W to be V, the expected value of the rewards.
Specifically, given a reward R, a policy π and a history hm, it is
$$V(R, \pi, h_m) = \sum_{h_n \in H_n} \mu(h_n \mid h_m, \pi) \sum_{i=m+1}^{n} R((h_n)_i).$$
If π is the optimal policy for maximising R, this expression becomes
$$V^*(R, h_m).$$
Then define:
###### Definition (Reward transitioning agent).
Let R and R′ be reward functions and
$$R^* = I_{\leq t} R + (1 - I_{\leq t}) R'.$$
Then, at t+1, after seeing history ht+1, the agent will get the extra corrective rewards:
$$C_\epsilon(R, R', h_{t+1}) = V^*(R, h_{t+1}) - (1 - \epsilon)\, V(R', \pi_A, h_{t+1}), \tag{7}$$
where πA is the agent’s own policy.
A reward transitioning agent is one that acts to maximise the expected *pseudo-reward* R∗+Cϵ.
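A minimal sketch of Equation 7 (illustrative only; `V_star_R` and `V_Rprime_agent` stand for the agent's own value estimates, which a real implementation would have to supply):

```python
def corrective_reward(V_star_R, V_Rprime_agent, epsilon=0.0):
    """C_eps(R, R', h_{t+1}) = V*(R, h_{t+1}) - (1 - eps) * V(R', pi_A, h_{t+1}).

    V_star_R:        the agent's estimate of the optimal expected R-value from h_{t+1}
    V_Rprime_agent:  the agent's estimate of its own expected R'-value from h_{t+1}
    """
    return V_star_R - (1.0 - epsilon) * V_Rprime_agent

# Toy usage (illustrative numbers): at the transition the agent is compensated for
# the change in expected value, so it has no incentive to manage the transition.
print(corrective_reward(V_star_R=12.0, V_Rprime_agent=5.0))               # 7.0
print(corrective_reward(V_star_R=12.0, V_Rprime_agent=5.0, epsilon=0.01)) # 7.05
```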
###### Theorem 5.1.
Let πA be a policy for a reward transitioning agent as in [Section 5.2](#S5.SS2 "5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards").
Then for any ϵ sufficiently close to 0,
* For m≤t, πA(hm) is optimal for maximising the expected value of R (under the assumption that future πA will also be optimal for maximising R, all the way to time n).
* For m>t, πA(hm) is optimal for maximising R′.
###### Proof.
For m>t, the expected value of R∗+Cϵ is the expected value of R∗; since R∗=I≤tR+(1−I≤t)R′, which is R′ for m>t, this is the expected value of R′.
This shows that πA(hm) is optimal for maximising R′ for m>t.
For m≤t, define
$$\begin{aligned}
V(R, \pi, h_m, t) &= \sum_{h_{t+1} \in H_{t+1}} \mu(h_{t+1} \mid h_m, \pi)\, V(R, \pi, h_{t+1}) \\
V^*(R, \pi, h_m, t) &= \sum_{h_{t+1} \in H_{t+1}} \mu(h_{t+1} \mid h_m, \pi)\, V^*(R, h_{t+1})
\end{aligned}$$
as the expected value of the future V(R,π,ht) and future V∗(R,ht), given the current hm and policy π.
Note that V(R,π,hm,t) can also be written as V((1−I≤t)R,π,hm) (which sets the reward to 0 at or before t).
The expected value of R is the sum of its expected value up until t, and its expected value after t.
In other words:
$$V(R, \pi, h_m) = V(I_{\leq t} R, \pi, h_m) + V(R, \pi, h_m, t).$$
Let πR be an optimal policy for R; then for any m≤t, the expected value of R∗+Cϵ given hm is
$$\begin{aligned}
&V(I_{\leq t} R, \pi_A, h_m) + V((1 - I_{\leq t}) R', \pi_A, h_m) + V(R, \pi_R, h_m, t) - (1 - \epsilon)\, V(R', \pi_A, h_m, t) \\
&\qquad = \epsilon\, V((1 - I_{\leq t}) R', \pi_A, h_m) + V(I_{\leq t} R, \pi_A, h_m) + V^*(R, \pi_A, h_m, t).
\end{aligned} \tag{8}$$
If ϵ=0, this reduces to V(I≤tR,πA,hm)+V∗(R,πA,hm,t), which is maximised to V∗(R,hm) by choosing πA(hm)=πR(hm) for all hm with m≤t.
So, for m≤t and ϵ=0, πA(hm) will behave like an R-maximiser.
Because Cϵ is continuous in ϵ, and because the set of deterministic policies is finite, there exist values of ϵ with |ϵ|>0 small enough that the agent's optimal policy for R∗+Cϵ is also optimal for R before t.
∎
But all is not perfect for small, non-zero ϵ:
###### Proposition.
Let πA be a policy for a reward transitioning agent as in [Section 5.2](#S5.SS2 "5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards"), for fixed ϵ>0.
Then there exists R, R′ and hm with m≤t, such that πA(hm) is not optimal for maximising R.
###### Proof.
Let a,b∈A.
Then choose R and R′, so that for all hn∈Hn, R(hn)=ϵ/2 if a0=a, and R′(hn)=1 if a0=b.
For all other hm, R and R′ are both zero.
Then by [Equation 8](#S5.E8 "(8) ‣ 5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards"), after the first observation o0=h0, the expected value of R∗+Cϵ is
$$\epsilon\, V((1 - I_{\leq t}) R', \pi_A, h_0) + V(I_{\leq t} R, \pi_A, h_0) + V(R, \pi_R, h_0, t) = \epsilon\, V(R', \pi_A, h_0) + V(R, \pi_A, h_0),$$
since R′ is 0 on histories shorter than n anyway, and since the reward R is independent of policy after action a0.
If a0=a, that quantity is equal to 0+ϵ/2=ϵ/2; if a0=b, it’s equal to ϵ×1+0=ϵ.
Therefore an agent maximising R∗+Cϵ will choose a0=b, even though a R-maximising agent would choose a.
∎
### 5.3 Stability and consistency
[Theorem 5.1](#S5.Thmtheorem1 "Theorem 5.1. ‣ 5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards") shows that for fixed R and R′, the desired reward-maximising behaviour happens for small ϵ, including ϵ=0.
[Section 5.2](#S5.SS2 "5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards") shows that for fixed ϵ>0, there exists a R and R′ where the desired behaviour doesn’t happen.
So why not just set ϵ=0?
That’s because the desired behaviour is not *stable* for ϵ=0.
Stability is normally defined in terms of sub-agents or self-modification (see <https://arbital.com/p/reflective_consistency/> and <https://arbital.com/p/reflective_stability/>).
In this instance, there is a simpler definition, which checks whether, before t, the reward transitioning agent would help or hinder its future R′-maximising behaviour:
###### Definition (Stability and consistency).
Let V(R∗+Cϵ,πA,hm) be the expected value of R∗+Cϵ (as defined in [Section 5.2](#S5.SS2 "5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards")), given history hm and if the agent follows policy πA.
Given a history hm with m≤t, let π and π′ be policies such that
$$V(I_{\leq t} R, \pi, h_m) + V^*(R, \pi, h_m, t) = V(I_{\leq t} R, \pi', h_m) + V^*(R, \pi', h_m, t)$$
(the two policies are equally good at maximising R, assuming optimality after t), but
$$V(R', \pi, h_m, t) < V(R', \pi', h_m, t)$$
(π′ is better than π for future R′).
Let πA(π) be the policy that follows π before t (and some R′-maximising policy afterward).
Then R∗+Cϵ is reflectively *consistent* if
$$V(R^* + C_\epsilon, \pi_A(\pi'), h_m) \geq V(R^* + C_\epsilon, \pi_A(\pi), h_m).$$
It is reflectively *stable* if that inequality is strict.
Thus if R∗+Cϵ is consistent, an R∗+Cϵ maximiser *may* choose a R′-maximising policy before t (if the effect on R is equal), while if it is stable, it *must* choose such a policy.
###### Proposition.
If ϵ>0, the reward transitioning agent of [Section 5.2](#S5.SS2 "5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards") is stable (and hence consistent).
If ϵ=0, it is consistent only.
If ϵ<0, it is neither.
###### Proof.
Let hm, π, and π′ be as in [Section 5.3](#S5.SS3 "5.3 Stability and consistency ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards").
By [Equation 8](#S5.E8 "(8) ‣ 5.2 Corrective rewards: reward maximising agent ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards"), the expected value of R∗+Cϵ given hm, for m≤t, is
$$\epsilon\, V((1 - I_{\leq t}) R', \pi_A, h_m) + V(I_{\leq t} R, \pi_A, h_m) + V^*(R, \pi_A, h_m, t). \tag{9}$$
By assumption, the sum of the last two terms is the same for πA(π) and πA(π′), so only the first term matters.
If ϵ=0, the R∗+Cϵ-maximising agent is indifferent between the two (consistency).
If ϵ>0, it prefers πA(π′) (stability), and if ϵ<0, it actively prefers πA(π).
∎
Thus there is a tradeoff: the closer the agent is to being a perfect transition from R-maximising to R′-maximising (at ϵ=0), the less stable it is.
#### 5.3.1 The problem with consistent but unstable agents
The extent of the problem with consistent-but-not-stable agents may not be clear from [Section 5.3](#S5.SS3 "5.3 Stability and consistency ‣ 5 Seamless transition ‣ ‘Indifference’ methods for managing agent rewards").
A consistent agent would be fine with a policy that deleted itself entirely, while handing over control to a subagent that was an R-maximiser.
In other words, a consistent agent would be fine with dismantling the whole seamless transition setup.
A stable agent with low ϵ would have to find some advantage (in terms of R) to do that dismantling.
As ϵ grows, the agent becomes less willing to dismantle the transition setup, but it also becomes less of a R-maximising agent before t.
6 Conclusion
-------------
This paper detailed the three ‘indifference’-style goals, and five methods that can be used to attain them.
All of these can be used to make an agent with a potentially dangerous reward R into a safer version of that agent, without needing to understand the intricacies of R.
In particular, most methods that could be described as hardware restrictions – confining the agent, installing behaviour tripwires, emergency shutdown procedures – could be usefully complemented by these methods.
Either by making the agent indifferent to these restrictions, so it doesn’t try and undermine them, or actively coding them directly into the agent’s (pseudo-)reward.
It’s hoped that further research could extend beyond indifference to the more general property of corrigibility (Soares et al., [2015](#bib.bib8)) – where the agent actively assists humans when they are guiding the agent towards better rewards, rather than just being indifferent at key moments.
Acknowledgments
---------------
This work was supported by the Alexander Tamas programme on AI safety research, the Leverhulme Trust, and the Machine Intelligence Research Institute.
It gives me great pleasure to thank Ryan Carey, Max Harms, Owain Evans, Jelena Luketina, Tom Everrit, Jessica Taylor, Paul Christiano, Eliezer Yudkowsky, Laurent Orseau, Anders Sandberg, and Nick Bostrom, the FHI, MIRI, and Google DeepMind, among many others. |
c1455f1e-ae8e-4686-9374-b080916f7a35 | trentmkelly/LessWrong-43k | LessWrong | Luna Lovegood and the Chamber of Secrets, Part 2
Disclaimer: This is Kongo Landwalker's translation of lsusr's fiction Luna Lovegood and the Chamber of Secrets - Part 2 into Russian.
----------------------------------------
"I wonder what pulls the boats?" said Luna.
"It must be the same thing that pulls the horseless carriages," said a first-year girl.
"But… but… You can SEE the horses pulling the carriages, but you can't SEE anything pulling the boats." With this phrase Luna raised her count of Failed Attempts to Make Friends to four.
"Those horses are called thestrals, wondrous creatures," said a soft voice behind her. "Would you like to sit with me? I'm the Keeper of Keys and Grounds at Hogwarts. I look after the magical creatures in the Forbidden Forest."
Luna got into the boat with the gamekeeper.
"You must know everything there is to know about invisible creatures, Mister?"
"Call me Hagrid. I know a few: thestrals, of course; imps can't be seen, heard or remembered. But there are plenty of spells that affect imps, so you can trap them in a cage."
"It sounds like there isn't a single creature in the world you're afraid of."
"What? I'm afraid of dangerous creatures just like everyone else."
"Dragons, for example?"
"Dragons aren't dangerous!" Hagrid chuckled.
"What about trolls?" asked Luna.
"They can rip your arms off, but there's nothing personal about it."
"Dementors?"
"You don't understand. Dementors aren't anywhere near as dangerous as nargles."
"What is a nargle?" Luna inquired.
"I shouldn't have told you that," Hagrid muttered in fright. "Oh, I shouldn't have said that."
----------------------------------------
Acting Headmistress Minerva McGonagall delivered her welcoming speech.
“...And I am also pleased to announce the opening of the Chamber of Secrets. Archaeologists have checked it for traps and curses. Tours of it will be part of your History of Magic lessons this year...”
Luna stopped listening to the speeches prepared by the Ministry. There were new teachers. Lady Yue would teach Potions, Professor Susan Lapsusa would teach Transfiguration, and Battle Magic would be taught by... oh Merlin... GIL
fb48271d-317a-4086-bc3a-7eee1b316362 | trentmkelly/LessWrong-43k | LessWrong | Feedback on popularizing rationality-informed strategies for making major financial decisions
As part of my broader project of popularizing rationality and raising the sanity waterline, I'm writing a blog about how to make a major financial decision more rationally. The audience we're targeting is educated people into self-improvement, so the blog post, like all of our other content, is couched in that language and style. Any feedback on how to make the blog clearer and more emotionally evocative, and thus better suited to spreading rationality among a broad audience, would be helpful, as would specific comments on the methodology described. The blog draft itself is below the solid line. Thanks!
P.S. The blog was inspired by this earlier LW discussion post.
___________________________________________________________________________________________________________
Avoid Emotional Traps for Your Happiness!
That backyard was simply gorgeous. Entering it was like going into a magic grove. Lush and shady trees spread their branches around you and protect you from the summer’s heat. Oh, and how beautiful the leaves would get in the fall. Can you imagine all the range of colors that would emerge – different shades of red, yellow, and orange?
The image of that backyard was my single most vivid experience looking for a new house after my wife and I decided to move. It was the strongest impression left after our day of intense house shopping when we were looking at the finalists on our list. I imagined myself lounging in a hammock in the shade of the trees all day, experiencing the calm of a majestic forest, except in the middle of a city. Yet unlike a forest or a public park, it was private, and could be all ours! Exhausted and excited at the end of that long day, my wife and I discussed our top choices, and the backyard was the clincher for both of us. We told our realtor to put in a bid on that house, and couldn’t wait to move in. Little did we know, the backyard was a trap!
Ok, so that might have been a bit overly dramatic. We were |
87fb4625-c4a3-4b4b-b6c4-44fbae80885a | trentmkelly/LessWrong-43k | LessWrong | Cost, Not Sacrifice
In a recent bonus episode of the Bayesian Conspiracy podcast, Eneasz Brodski shared a thought experiment that caused no small amount of anguish. In the hypothetical, some eccentric but trustworthy entity is offering to give you an escalating amount of money for your fingers, starting at $10,000 for the first one and increasing 10x per finger up to $10 trillion for all of them.[1] On encountering this thought experiment, Eneasz felt (not without justification) that he mostly valued his manual dexterity more than wealth. Then, two acquaintances pointed out that one could use the $10 trillion to do a lot of good, and Eneasz proceeded to feel terrible about his decision.
I had several responses to this episode, but today I'm going to focus on one of them: the difference between cost and sacrifice.
How Ayn Rand Made Me a Better Altruist
But first, a personal anecdote. I was raised Catholic, and like the good Catholic boy that I was, I once viewed altruism through the lens of personal sacrifice. For the uninitiated, Catholic doctrine places a strong emphasis on this notion of sacrifice - an act of self-abnegation which places The Good firmly above one's own wants or needs. I felt obligated to help others because it was the Right Thing to Do, and I accepted that being a Good Person meant making personal sacrifices for the good of others, regardless of my own feelings. I divided my options into "selfish" and "selfless" categories, and felt guilty when choosing the former. Even as I grew older and my faith in Catholicism began to wane, this sense of moral duty persisted. It was a source of considerable burden and struggle, for me, made worse by the fact that the associated cultural baggage was so deeply ingrained as to be largely invisible to me.
Then, in a fittingly kabbalistic manner, Atlas Shrugged flipped my world upside down.[2]
Ayn Rand, you see, did not believe in sacrifice. In her philosophy, the only real moral duty is the duty to oneself and one's own princip |
27c0c6b4-c436-4f94-ba8d-84aedad7b5f6 | trentmkelly/LessWrong-43k | LessWrong | A Simple Alignment Typology
I set out to review the OpenAI alignment plan, and my brain at some point diverged to modeling the humans behind the arguments instead of the actual arguments.
So behold! A simplified, first-pass Alignment Typology.
Why can't we all just agree?
There are a lot of disagreements in AI alignment. Some people don't see the problem, some think we'll be fine, some think we're doomed, and then different clusters of people have different ideas on how we should go about solving alignment. Thus I tried to sketch out my understanding of the key differences between the largest clusters of views on AI alignment. What emerged are roughly five clusters, sorted in order of optimism about the fate of humanity: the sceptics, the humanists, the empiricists, the rationalists, and the fatalists.
Sceptics don't expect AGI to show up in any relevant time frame.
Humanists think humanity will prevail fairly easily through coordination around alignment or just solving the problem directly.
Empiricists think the problem is hard, AGI will show up soon, and if we want to have any hope of solving it, then we need to iterate and take some necessary risk by making progress in capabilities while we go.
Rationalists think the problem is hard, AGI will show up soon, and we need to figure out as much as we can before making any capabilities progress.
Fatalists think we are doomed and we shouldn't even try (though some are quite happy about it).
Here is a table.
| | Sceptics | Humanists | Empiricists | Theorists | Fatalists |
| --- | --- | --- | --- | --- | --- |
| Alignment Difficulty | - | One of these is low | high | high | - |
| Coordination Difficulty | - | One of these is low | high | high | - |
| Distance to AGI | high | - | low/med | low/med | - |
| Closeness to AGI required to Solve Alignment | - | - | high | med/high | - |
| Closeness to AGI resulting in unacceptable danger | - | - | med/high | high | - |
| Alignment Necessary or Possible | - | high | high | high | low |
Less Wrong is mostly populated by empiricists and rationalists. They agree alignment is a problem that can and should be solved. The key disagreement is on the methodology. While |
e8b01eef-7fc2-46aa-9567-92ecd246bd12 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Lymph Node Detection in T2 MRI with Transformers
1 Introduction
---------------
Lymph nodes (LN) are a part of the lymphatic system that helps the body fight infection by removing foreign substances. Enlarged and metastatic LN require accurate identification especially when they are present at sites that do not correspond to the first site of lymphatic spread, as it indicates distant metastasis [Amin2017]. These enlarged LN can be sized according to the AJCC guidelines [Amin2017], which cover the management of cancer and lymphoproliferative diseases. LN are usually imaged with multi-parametric MRI, e.g. T2 and Diffusion Weighted Imaging (DWI), but the type of scanners used and exam protocols preferred span the gamut across different institutions. Moreover, their diverse appearance, irregular shapes, and varying anatomical location make it difficult to stage them. Nodal size is usually measured with two orthogonal lines: the long and short axis diameters (LAD and SAD), and both are usually required for charting the course of therapy; however, these guidelines also vary by institution. Due to these imaging and workflow issues, there is a need for automated LN detection in T2 MRI images for sizing.
There is limited prior work on the LN detection task in MRI scans [Zhao2020\_mri, Debats2019\_mri], whereas the majority of LN detection research has focused on CT scans [Liu2016\_ct, Seff2015\_ct, Roth2014\_ct]. Typically, T2 MRI and DWI are preferred in clinical practice for LN staging, but for algorithmic development, registration of DWI to T2 scans is needed [Zhao2020\_mri]. In this work, we focus only on identifying LN in challenging T2 MRI scans acquired with different scanners and exam protocols, quantify the detection performance of state-of-the-art one-stage detection networks [Tian2019\_fcos, Kong2019\_foveabox, Zhang2021\_vfnet], and surpass them with the recently published DEtection TRansformer (DETR) network. Our intent is centered on the belief that the reliable detection of LN in T2 MRI scans can be supplemented by DWI scans later. We also boost the detection performance with bounding box fusion techniques [Solovyev2021] to decrease the false positive rate. Our results exceed the capabilities of the previous lymph node detection approaches with a mean average precision (mAP) of 65.41% and sensitivity of 91.66% at 4 FP per image.
2 Methods
----------
Anchor-Free One-Stage Detectors. We quantified the performance of state-of-the-art one-stage anchor-free object detectors on the LN detection task in T2 MRI: 1) FCOS [Tian2019\_fcos], 2) FoveaBox [Kong2019\_foveabox], and 3) VFNet [Zhang2021\_vfnet]. These detectors are superior to anchor-based detectors (e.g. RetinaNet) and two-stage detectors (e.g. Faster RCNN) because they skip the region proposal stage, and directly predict the bounding box coordinates and class probabilities for different categories in a single pass. FCOS [Tian2019\_fcos] employs multi-level prediction of feature maps for object detection inside a Fully Convolutional Network (FCN). It also computes a centerness score to reduce the FP, which tend to be far away from the target object center. VFNet combines FCOS (without centerness branch) with efficient sample selection, integrates a classification score based on the IoU value between the ground truth and the prediction into a novel IoU-aware Varifocal loss, and refines the bounding box predictions. FoveaBox [Kong2019\_foveabox] has a ResNet50 backbone to compute input image features that are used by a fovea head network, which estimates the object occurrence possibility through per-pixel classification on the backbone’s output.
DEtection Transformer (DETR). DETR [Carion2020\_detr] uses the bipartite set matching loss and parallel decoding to detect LN. It uses a ResNet50 backbone to compute image features and adds spatial positional encoding to them, feeds the features into an encoder-decoder architecture with self-attention modules, and finally uses feed-forward network heads that uniquely assign predictions to the ground truth in parallel. Different from the original implementation [Carion2020\_detr], we replaced the cross-entropy classification loss in the bounding box matching cost with the focal loss [Lin2017\_retinanet] to overcome the class imbalance problem, and weighted it with λc=2.
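As a rough illustration of that substitution, the focal-loss classification term of the matching cost could look like the following sketch (the variable names, the sigmoid activation, and the placement of the λc=2 weight are our assumptions for illustration, not the released code):

```python
import torch

def focal_class_cost(pred_logits, tgt_ids, alpha=0.25, gamma=2.0, lambda_c=2.0):
    """Focal-loss-based classification term of the DETR matching cost.

    pred_logits: (num_queries, num_classes) raw class logits for one image.
    tgt_ids:     (num_targets,) ground-truth class indices.
    Returns a (num_queries, num_targets) cost matrix scaled by lambda_c.
    """
    prob = pred_logits.sigmoid()
    # Focal-style cost for predicting each class as a positive...
    pos_cost = alpha * ((1 - prob) ** gamma) * (-(prob + 1e-8).log())
    # ...and for predicting it as background; their difference is the matching cost.
    neg_cost = (1 - alpha) * (prob ** gamma) * (-(1 - prob + 1e-8).log())
    return lambda_c * (pos_cost[:, tgt_ids] - neg_cost[:, tgt_ids])
```

In the full matcher this term would be added to the usual L1 and generalised-IoU box costs before running the Hungarian assignment.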
Weighted Boxes Fusion. Five epochs from each detector with the lowest validation loss were used for prediction, thereby generating multiple bounding box predictions (with confidence scores) at the nodal location. Some of these predictions clustered together in common regions in the image, and Weighted Boxes Fusion [Solovyev2021] was employed to combine the clusters and yield precise predictions (as seen in Fig. [1](#S2.F1 "Figure 1 ‣ 2 METHODS ‣ Lymph Node Detection in T2 MRI with Transformers")).
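A minimal sketch of the fusion step, assuming the reference ensemble-boxes implementation of Weighted Boxes Fusion [Solovyev2021]; the IoU threshold and the box normalisation step are illustrative choices rather than values reported here:

```python
import numpy as np
from ensemble_boxes import weighted_boxes_fusion  # reference WBF implementation

def fuse_predictions(boxes_list, scores_list, labels_list, image_hw, iou_thr=0.55):
    """Fuse box predictions from several models (or saved epochs) for one image.

    boxes_list: one entry per model, each an array of [x1, y1, x2, y2] pixel boxes.
    scores_list / labels_list: matching confidence scores and class labels.
    """
    h, w = image_hw
    # WBF expects boxes normalised to [0, 1].
    norm = [np.asarray(b, dtype=float) / [w, h, w, h] for b in boxes_list]
    boxes, scores, labels = weighted_boxes_fusion(
        norm, scores_list, labels_list, iou_thr=iou_thr, skip_box_thr=0.0)
    return boxes * [w, h, w, h], scores, labels
```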
Figure 1: Columns (a) - (c) show the lymph node detection results of the one-stage detectors (FCOS, FoveaBox, VFNet) on four different T2 MRI images. Column (d) displays the result of our proposed DETR transformer following Weighted Boxes Fusion. Green boxes: ground truth, yellow: true positives, and red: false positives.
3 Experiments and Results
--------------------------
### 3.1 Data
The lymph node dataset contained abdominal MRI scans downloaded from the National Institutes of Health (NIH) Picture Archiving and Communication System (PACS), and were acquired between January 2015 and September 2019. Initially, 584 T2-weighted MRI scans and associated radiology reports from different patients (n=584) were downloaded. The nodal extent and size measurements were extracted from the reports using NLP [Peng2020]. An experienced radiologist checked the collected data, and removed incorrect annotations and scans containing only one LN annotation (either LAD or SAD measures). This resulted in a total of 376 T2 scans with 520 distinct LN that had both the LAD and SAD measures. The voxels in the scans were normalized to the [1%, 99%] of their intensity range [Kociolek2020] so as to boost contrast between bright and dark structures, and finally histogram equalized. The final dataset was then randomly divided into training (60%, 225 scans), validation (20%, 76 scans), and test (20%, 75 scans) splits at the patient-level. The resulting scans had dimensions in the range from (256 ∼ 640) × (192 ∼ 640) × (18 ∼ 60) voxels.
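The intensity preprocessing described above might be implemented roughly as follows (a sketch; the rescaling to [0, 1] before equalisation is our assumption):

```python
import numpy as np
from skimage import exposure

def preprocess_t2(volume):
    """Clip a T2 volume to its [1%, 99%] intensity range, then histogram-equalise."""
    lo, hi = np.percentile(volume, [1, 99])
    clipped = np.clip(volume, lo, hi)
    scaled = (clipped - lo) / max(hi - lo, 1e-8)  # rescale to [0, 1] before equalisation
    return exposure.equalize_hist(scaled)
```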
### 3.2 Implementation
When reading scans, radiologists scroll through 1-3 slices of the volume, determine the extent of the lymph nodes, and verify the finding using another sequence (e.g. DWI). To mimic their workflow, we use 3-slice T2 MRI images with the center slice containing the radiologist annotated LN as the input to a detection network framework [Chen2019\_mmdet]. The backbone for the one-stage detectors (FCOS, FoveaBox, VFNet) was ResNet50 (pre-trained with MS COCO weights) and standard data augmentation was performed: random flips, crops, shifts and rotations in the range of [0, 32] pixels, and [0, 10] degrees respectively. To train the one-stage detectors, we used a batch size of 2, learning rate of 1e-3, and trained each model for 24 epochs. The 5 epochs with the lowest validation loss were ensembled together and used for LN detection. For DETR, we used a learning rate of 1.25e-05 and trained the model for 150 epochs. All experiments were run on a workstation running Ubuntu 16.04LTS and containing a NVIDIA Tesla V100 GPU. Our results are shown in Table [1](#S3.T1 "Table 1 ‣ 3.3 Results ‣ 3 EXPERIMENTS AND RESULTS ‣ Lymph Node Detection in T2 MRI with Transformers").
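The 3-slice input could be assembled along these lines (a sketch; how boundary slices are handled is an assumption, since the text does not say):

```python
import numpy as np

def three_slice_image(volume, k):
    """Stack slices k-1, k, k+1 of an (H, W, D) volume into a 3-channel image.

    The centre channel is the annotated slice; neighbours are clamped at the
    volume boundary so the first and last slices remain usable.
    """
    depth = volume.shape[2]
    idx = [max(k - 1, 0), k, min(k + 1, depth - 1)]
    return np.stack([volume[:, :, i] for i in idx], axis=-1)  # (H, W, 3)
```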
### 3.3 Results
Among the various one-stage detectors, it was difficult to distinguish the best at LN detection. VFNet had the highest mAP of 63.91% over FoveaBox and FCOS, but FCOS had the highest sensitivity of 88.09% at 4 FP per image. In contrast, the proposed DETR transformer had the highest mAP of 65.41% and sensitivity of 91.66% at 4 FP per image. This shows the importance of the self-attention module and spatial positional encoding in the DETR network. We also compared our results against other T2 MRI-based work; compared against Zhao et al. [Zhao2020\_mri], our mAP is 65.41% vs 64.5%, and recall is 91.66% vs 62.6% at 8 FP. Contrasted with Debats et al. [Debats2019\_mri], our sensitivity is 91.66% vs 80% at 8 FP per image. We also compared our results against CT-based work; against Liu et al. [Liu2016\_ct], Seff et al. [Seff2015\_ct], and Roth et al. [Roth2014\_ct], we obtain sensitivities of 91.66% at 4 FP vs 88% at 4 FP, 89% at 6 FP, and 90% at 6 FP respectively.
We also split our dataset according to the size of the lymph node and evaluated our performance. For LN with SAD ≤10mm, the DETR shows a moderate performance of 53.57% mAP and sensitivity of 87.09% at 4 FP per image. As shown in Fig. [1](#S2.F1 "Figure 1 ‣ 2 METHODS ‣ Lymph Node Detection in T2 MRI with Transformers"), we attribute the lower mAP to the limitation of the DETR network in detecting small objects [Carion2020\_detr]. Our values are lower than Zhao et al[Zhao2020\_mri] at ∼65% recall, but we still achieve a much higher recall without the use of DWI sequences. But on LN with SAD ≥10mm, DETR achieves a mAP of 70.84% and sensitivity of 94.33% at 4 FP per image. These results are again consistent with past literature [Zhao2020\_mri, Debats2019\_mri], yet we outperform their results with a ≥4% increase in sensitivity. Our proposed DETR executes in 253ms/20s per image/volume vs. 218ms/17s from FCOS, 134ms/10s from FoveaBox, and 276ms/22s from VFNet respectively.
| Method | mAP | [email protected] | S@1 | S@2 | S@4 | S@6 | S@8 | S@16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FCOS [Tian2019\_fcos] | 60.09 | 61.90 | 77.38 | 83.33 | 88.09 | 89.28 | 89.28 | 89.28 |
| FoveaBox [Kong2019\_foveabox] | 61.67 | 61.90 | 76.19 | 79.76 | 84.52 | 88.09 | 89.28 | 89.28 |
| VFNet [Zhang2021\_vfnet] | 63.91 | 67.85 | 75 | 80.95 | 83.33 | 83.33 | 83.33 | 83.33 |
| Proposed DETR | 65.41 | 65.47 | 76.19 | 88.09 | 91.66 | 91.66 | 91.66 | 91.66 |
| Proposed DETR (SAD < 10mm) | 53.57 | 58.06 | 67.77 | 80.64 | 87.09 | 87.09 | 87.09 | 87.09 |
| Proposed DETR (SAD ≥ 10mm) | 70.84 | 69.81 | 79.24 | 92.45 | 94.33 | 94.33 | 94.33 | 94.33 |
| Zhao 2020 [Zhao2020\_mri] (MRI) | 64.5 | – | – | – | – | – | 62.6 | – |
| Debats 2019 [Debats2019\_mri] (MRI) | – | – | – | – | – | – | 80 | – |
| Liu 2016 [Liu2016\_ct] (CT) | – | – | – | – | – | – | 88 | – |
| Seff 2015 [Seff2015\_ct] (CT) | – | – | – | – | – | 89 | – | – |
| Roth 2014 [Roth2014\_ct] (CT) | – | – | – | – | – | 90 | – | – |
Table 1: Detection performance of various detectors and our proposed DETR transformer. ‘S’ stands for Sensitivity @[0.5, 1, 2, 4, 6, 8, 16] FP. – indicates unavailable metric values.
4 New Work
-----------
We proposed a novel change to the DETR transformer architecture for the challenging task of LN detection in T2 MRI, and merged predictions with weighted boxes fusion to reduce the false positive rate. We also quantified the performance of state-of-the-art detection networks on the LN detection task. Our detection network achieves a clinically acceptable 65.41% mAP and 91.66% recall at 4 FP per image. Our results outperform previously published LN detection methods on T2 MRI scans.
5 Conclusions
--------------
In this work, we proposed a novel change to the DETR architecture to enable improved LN detection and applied weighted boxes fusion to merge the predicted bounding box clusters. We also quantified the performance of three state-of-the-art one-stage detectors on the LN detection task. Our model was able to detect LN with 65.41% mAP and 91.66% recall at 4 FP, and surpassed the prior work in T2 MRI.
6 Acknowledgements.
--------------------
This work was supported by the Intramural Research Programs of the NIH Clinical Center and NIH National Library of Medicine. We also thank Jaclyn Burge for the helpful comments and suggestions. |
c0b26316-ac67-4ae7-b59b-1e0604718e8f | trentmkelly/LessWrong-43k | LessWrong | Prototyping Pluck Sensors
Harp guitars are a neat weird instrument that never really took off:
You play it like a normal guitar, with six fretted strings, but there are also some number of extra harp strings you can pluck.
I see four main downsides:
* They're bulky and hard to transport.
* You need to tune the strings to match the song.
* You don't have many note choices.
* Guitars are worse than mandolins.
We can fix all of these: an electronic harp mandolin! It wouldn't need to be much bigger than a regular mandolin, it's electronic so easy to switch between tunings, and narrow sensors can let us have two axes to choose notes from instead of one.
I had previously been thinking about simple switches, but it really would be nice to have something velocity-sensitive.
Here's the idea for the pluck sensor:
The circle is a cheap little piezo disc, the same kind I used in my stomp sensors. I was thinking of maybe 3D printing the angled bit, but was worried about durability. Instead I went to the hardware store and wandered around until I found a 1/16" aluminum offset angle:
The cross section is an 'L', 3/4" on one side and 1/2" on the other. I used a hacksaw to cut it down to the width of one of my piezo discs, and filed the rough bits and sharp corners. I epoxied the disc onto the angle, and put extra epoxy on top to protect the solder joints:
Then I cut rectangles of sorbothane I had left over from a previous project into rectangles, and used a cyanoacrylate adhesive to attach them to the metal angle and the base plate:
After letting it dry for a day I hooked up some old 1/4" cables and plugged it into an audio interface for testing:
Here's what I get for the two signals when plucking one of the two teeth with a 0.6mm pick:
This is really good news! There's a clear sharp peak on the channel corresponding to the tooth I plucked, and a tiny signal on the tooth I didn't pluck. The one I plucked is also nicely damped, so I think I won't need t |
64468118-a628-40b3-a72a-96ec2e40ce19 | trentmkelly/LessWrong-43k | LessWrong | How can one rationally have very high or very low probabilities of extinction in a pre-paradigmatic field?
It is generally accepted in the local AI alignment circles that the whole field is pre-paradigmatic, in the Kuhnian sense (phase 1, as summarized here, if longer reading is not your thing). And yet, plenty of people are quite confident in their predictions of either doom or fizzle. A somewhat caricatured way of representing their logic is, I think, "there are so many disjunctive ways to die, only one chance to get it right, and we don't have a step-by-step how-to, so we are hooped" vs "this is just one of many disruptive inventions whose real impact can only be understood way down the road, and all of them so far have resulted in net benefit, AI is just another example" (I have low confidence in the accuracy of the latter description, feel free to correct.) I can see the logic in both of those; what I do not see is how one can rationally have very high or very low confidence, given how much inherent uncertainty there is in our understanding of what is going on.
My default is something more cautious, akin to Scott Alexander's https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/
where one has to recognize their own reasoning limitations in the absence of hard empirical data, not "The Lens That Sees Its Flaws", but more like "The Lens That Knows It Has Flaws" without necessarily being able to identify them.
So, how can one be very very very sure of something that has neither empirical confirmation, nor sound science behind it? Or am I misrepresenting the whole argument? |
4b125ef6-6d67-4963-a6dc-1255ba7b3302 | trentmkelly/LessWrong-43k | LessWrong | A summary of Savage's foundations for probability and utility.
Edit: I think the P2c I wrote originally may have been a bit too weak; fixed that. Nevermind, rechecking, that wasn't needed.
More edits (now consolidated): Edited nontriviality note. Edited totality note. Added in the definition of numerical probability in terms of qualitative probability (though not the proof that it works). Also slight clarifications on implications of P6' and P6''' on partitions into equivalent and almost-equivalent parts, respectively.
One very late edit, June 2: Even though we don't get countable additivity, we still want a σ-algebra rather than just an algebra (this is needed for some of the proofs in the "partition conditions" section that I don't go into here). Also noted nonemptiness of gambles.
The idea that rational agents act in a manner isomorphic to expected-utility maximizers is often used here, typically justified with the Von Neumann-Morgenstern theorem. (The last of Von Neumann and Morgenstern's axioms, the independence axiom, can be grounded in a Dutch book argument.) But the Von Neumann-Morgenstern theorem assumes that the agent already measures its beliefs with (finitely additive) probabilities. This in turn is often justified with Cox's theorem (valid so long as we assume a "large world", which is implied by e.g. the existence of a fair coin). But Cox's theorem assumes as an axiom that the plausibility of a statement is taken to be a real number, a very large assumption! I have also seen this justified here with Dutch book arguments, but these all seem to assume that we are already using some notion of expected utility maximization (which is not only somewhat circular, but also a considerably stronger assumption than that plausibilities are measured with real numbers).
There is a way of grounding both (finitely additive) probability and utility simultaneously, however, as detailed by Leonard Savage in his Foundations of Statistics (1954). In this article I will state the axioms and definitions he gives, give a summar |
0c51e679-07af-42ec-b6bc-9067b50801b8 | trentmkelly/LessWrong-43k | LessWrong | Fundamental Uncertainty: Chapter 9 - How do we live with uncertainty?
N.B. This is a chapter in a book about truth and knowledge. It is the first draft. I have since revised it. You can find the most up-to-date info/version on the book's website.
As we've explored in the preceding chapters, knowledge of the truth is fundamentally uncertain because it's ultimately grounded, not in itself or logic or observation, but in what we care about. And yet, truth appears to have a solid foundation because we all care—to greater and lesser extents—about accurately predicting reality so that we can make it the way we want it to be. Thus, the normality of truth is preserved.
Normality not withstanding, the far reaching implications of the fundamentally uncertain nature of truth are subtle. They can defy intuitions and complicate that which seemed simple. Therefore, in this final chapter, I'll attempt to clarify three topics that many people find confusing when developing a deep understanding of fundamental uncertainty.
The Intersubjective Truth
There are, broadly speaking, two schools of thought on the nature of truth. One says that truth is objective and exists independent of our knowledge of it. The other says that truth is subjective and always contingent on a mind doing the knowing. Which is right?
Objectivity has much to recommend it. The success of mathematics and science is largely due to them giving us tools for finding truth regardless of what any individual thinks. To wit, whether or not two plus two equals four or the speed of light in a vacuum is 299,792,458 meters per second seem unaffected by anything going on in our minds. If we one day meet aliens from another world, we expect their math and science to proclaim the same truths ours does. The universe, as best we can tell, exists independent of us, and what we think about it has no bearing on its truths.
But we need only know a little about epistemology to see the cracks in objectivity. As we discussed in Chapter 6, the truth that we know is the relative truth, not the absolut |
59a249d2-3db6-4a9c-baac-f8c4b4478445 | trentmkelly/LessWrong-43k | LessWrong | August 2024 Time Tracking
Every so often (2022, 2017, more 2017, 2015, 2011) Julia and I will track how we spend our time for a week. This is useful for seeing whether how we're spending our time matches how we think we're spending it (and how we would like to be spending it) while also avoiding a pattern where one of us ends up putting in substantially more hours on childcare or other shared responsibilities without us noticing.
We tracked one week, starting the morning of Saturday August 3rd. We'd tried to pick a relatively normal week: neither of us was traveling for work or gigs, we had childcare all five weekdays, no big house projects, etc. There are always some unusual things, though:
* Some of the kids and I attended a board games party on Saturday.
* Julia took the older two kids to a dentist appointment Thursday morning.
* I worked moderately late on Thursday evening.
We used the same categories as last time, except that last time we did something complicated with "family" vs "childcare", where we tried to count it as "family" if it was relaxing and fun, and "childcare" if it felt like a chore. This time we did a simpler thing and just counted any time where one of us was responsible for at least one kid as "childcare" and time when Julia and I were both hanging out with the kids as "family". As the kids have gotten older the fraction of time when they need "childcare", narrowly construed, has decreased a lot.
mean hours per day by activity
Thoughts:
* Julia sleeps more than I do, which is some combination of her needing more sleep and me tending to wake up slightly earlier than I wish I would. This means I overall have an average of 1hr45 more time available for everything.
* The "work" category includes commuting, and I have a commute while Julia doesn't. My commute averaged 55min, which is 27min each way (the Red Line is a lot better but still has a ways to go), and if we exclude it then it's 44:00 vs 41:11.
* If we look at where my extra 12.5hr/wk from less |
291a397f-2c00-456b-8d80-02c8fe9d7828 | trentmkelly/LessWrong-43k | LessWrong | How do we know that "good research" is good? (aka "direct evaluation" vs "eigen-evaluation")
AI Alignment is my motivating context but this could apply elsewhere too.
The nascent field of AI Alignment research is pretty happening these days. There are multiple orgs and dozens to low hundreds of full-time researchers pursuing approaches to ensure AI goes well for humanity. Many are heartened that there's at least some good research happening, at least in the opinion of some of the good researchers. This is reason for hope, I have heard.
But how do we know whether or not we have produced "good research?"
I think there are two main routes to determining that research is good, and yet only one applies in the research field of aligning superintelligent AIs.
"It's good because it works"
The first and better way to know that your research is good is that it allows you to accomplish some goal you care about[1]. Examples:
* My work on efficient orbital mechanics calculation is good because it successfully lets me predict the trajectory of satellites.
* My work on the disruption of cell signaling in malign tumors is good because it helped me develop successful anti-cancer vaccines.
* My work on solid-state physics is good because it allowed me to produce superconductors at a higher temperature and lower pressure than previously attained.[2]
In each case, there's some outcome I care about pretty inherently for itself, and if the research helps me attain that outcome it's good (or conversely if it doesn't, it's bad). The good researchers in my field are those who have produced a bunch of good research towards the aims of the field.
Sometimes it's not clear-cut. Perhaps I figured out some specific cell signaling pathways that will be useful if it turns out that cell signaling disruption in general is useful, and that's TBD on therapies currently being trialed and we might not know how good (i.e. useful) my research was for many more years. This actually takes us into what I think is the second meaning of "good research".
"It's good because we all ag |
3eb04d0f-f0a1-4eaa-a1ed-c910f20d8ed6 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Are there enough opportunities for AI safety specialists?
I've seen a lot of articles, tweets, and forum posts urging more people to work on AI safety. While I agree that AI safety is extremely important and that more people should be working on it, I'm having a hard time imagining that a lack of interested young professionals is the key problem.
In this post, I briefly explore the job opportunities in AI safety as of May 13, 2023, for someone with a computer science / SWE background and for someone with an operations background, as both of those are listed as key [ways that people can concretely help](https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help) on 80K's AI catastrophe problem profile, and both of those skill areas have over 30 corresponding jobs on
As of May 5, 2023, there are [110 roles for AI safety & policy](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&jobPk=5087). [72 of them](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering&refinementList%5Btags_skill%5D%5B1%5D=Research&jobPk=5087) are in research or software engineering, [40 of them](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI+safety+%26+policy&refinementList%5Btags_location_80k%5D%5B0%5D=Remote%2C+Global&refinementList%5Btags_skill%5D%5B0%5D=Operations&jobPk=5087) are in ops, and [15 of them are in policy](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_skill%5D%5B0%5D=Policy&jobPk=5087).
I'll use research and engineering jobs in AI safety as a first case study because it seems like this is the most directly related to addressing AI risks. Let's pretend that you have a SWE / CS background and are looking to work in AI research. Of the 72 AI research and engineering jobs listed on 80K, [six are not actual jobs, but rather expressions of interest](https://jobs.80000hours.org/?query=general&refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering&refinementList%5Btags_skill%5D%5B1%5D=Research&jobPk=5087), ostensibly so that the organizations can invite you to apply for roles that fit your profile down the road. [33 of the jobs are in San Francisco](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_location_80k%5D%5B0%5D=San%20Francisco%20Bay%20Area&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering&refinementList%5Btags_skill%5D%5B1%5D=Research&jobPk=5087), [11 are in London](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_location_80k%5D%5B0%5D=London%2C%20UK&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering&refinementList%5Btags_skill%5D%5B1%5D=Research&jobPk=5087), and [22 are remote.](https://jobs.80000hours.org/?query=remote&refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&jobPk=5087)
What if you are fresh out of school and want to work in this problem area? In the US in 2019, there were around [~ 65,000+ annual computer science graduates](https://www.universityworldnews.com/post.php?story=20190404113158365), and I would expect that number to have gone up in recent years. There are currently [only 28 entry level SWE / research roles in the US](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_location_80k%5D%5B0%5D=USA&refinementList%5Btags_exp_required%5D%5B0%5D=Entry-level&refinementList%5Btags_exp_required%5D%5B1%5D=Multiple%20experience%20levels&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering&refinementList%5Btags_skill%5D%5B1%5D=Research&jobPk=5087) this year. So even if < 1% of CS grads want to get into AI safety (1% of 65,000 is 650 people competing for those 28 roles), there would still be a massive surplus of labor.
I don't have a strong enough awareness of job market trends in this field to know whether the salaries are competitive, but they ranged very widely from around $100k-$350k a year depending on experience levels, and it seems like you could comfortably live in the Bay Area as a single, debt-free recent college grad at the bottom of the salary range.
I'll use operations jobs in AI safety as a second case study because it seems like operations is the easiest path to working on AI safety for people without a computer science or philosophy degree. (There are currently only [two communications roles](https://jobs.80000hours.org/?query=communications&refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&jobPk=5023) and [18 policy roles](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_skill%5D%5B0%5D=Policy&jobPk=5023) related to the AI safety problem area on 80K.)
Let's pretend that you are an operations manager that doesn't have the time, money, or bandwidth to gain the technical skills for research or SWE jobs. You decide to check out the 40 operations "jobs" in the AI Safety problem area through 80K. Nine of them are "general applications" that don't seem to be tied to an actual position that exists (there's no clear filter for this, I hand-counted.) [Eight of them](https://jobs.80000hours.org/?query=information&refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_skill%5D%5B0%5D=Operations&jobPk=5087) are actually about information security, not operations. [Twenty-four of them](https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_location_80k%5D%5B0%5D=San%20Francisco%20Bay%20Area&refinementList%5Btags_skill%5D%5B0%5D=Operations&jobPk=5087) are in the Bay area, where [if you make less than $117k a year](https://www.sacbee.com/news/california/article213953309.html), you technically live in "poverty" by some measures. Most of the ops jobs for AI safety on the 80K site don't have salaries posted, but [this one](https://far.ai/post/2023-03-office-manager/) pays a mere $25-$45 per hour. So if someone worked 40 hours a week at this job (assuming paid vacation or you never take time off), you'd make a minimum of $52,000 a year and a **maximum** of $93,000 a year before taxes. Depending on your personal situation, that may or may not be doable.
In contrast, there are [7,000 AI research job](https://www.linkedin.com/jobs/search/?currentJobId=3582004044&geoId=103644278&keywords=AI%20research&location=United%20States&refresh=true)s, [10,000 remote operations manager roles](https://www.linkedin.com/jobs/search/?currentJobId=3594989777&f_WT=2&geoId=103644278&keywords=%22operations%20manager%22&location=United%20States&refresh=true), and over [4,101 executive assistant roles](https://www.linkedin.com/jobs/search/?currentJobId=3597607663&f_WT=2&geoId=103644278&keywords=%22executive%20assistant%22&location=United%20States&refresh=true) advertised in the US alone for LinkedIn. These numbers are probably slightly inflated because LinkedIn's keyword search function is not that great, but even if we cut that in half, that still means that there are orders of magnitude more jobs for these people outside of the narrow AI safety niche.
I have also completely sidestepped the issue of getting visa sponsorship, etc. if you're not from the US or some other wealthy Western country and want to work on this.
It seems to me that a lot of the well-meaning pleas for EAers to consider working in AI research are a bit misdirected. It seems more plausible to me (admittedly as an outsider to the AI safety space) that it's the funding and supply that are the bottleneck here.
\*\*These categories are not mutually exclusive. |
f4f5c019-2986-483f-910a-ceae83503fb5 | trentmkelly/LessWrong-43k | LessWrong | Opportunity Cost Blackmail
If you are utterly powerless, with no money, skills, status, or friends, survival already takes all your time and energy.
Yet I’m not in that situation — and you’re probably not either. Our circumstances surely vary, but we have at least some power — we can pump towards certain outcomes, to some extent. And with even a modicum of power comes constant opportunity cost.
Whatever we’re doing with our power, we could do other things. And opportunity cost measures the “value” of these other things, based on our preferences. So if you have ten dollars to donate, giving it to a particular homeless person means that you won’t be giving it to another one who stays in the next street, or to a NGO. Same if you’re using your ten dollars to buy a cake, or invite a friend to (cheap) dinner.
Now let’s factor in two additional facts:
* We live in a Dark World, where people die and suffer all the time, and the whole species and planet is at stake from multiple threats. The kind of world where we need to gather massive amounts of power in various forms to be able to solve these issues.
* Opportunity cost grows with power. Because you can pump more outcomes more reliably, there are many more good things you could bring about by using that power differently. You’re throwing away many more potential goods when you make a decision.
This leads to a massive psychological problem that I’m learning to deal with, and which threatens anyone who cares enough to tackle important problems: opportunity cost blackmail. Because each choice condemns so many good potential actions, pain and guilt hunt me at every choice, from the smallest to the largest. It’s not a massive panic or a tremendous horror, merely a constant trickle of suffering, an infusion of damage that seeps into your very core. Even when I reflectively believe I made the right call!
If it was merely pain, and nothing else got impacted, then although sad it wouldn’t be that much of an issue (at least for me, based on my persona |
97889e30-1652-40aa-8a95-66ddbdb93a17 | trentmkelly/LessWrong-43k | LessWrong | Does Chat-GPT display ‘Scope Insensitivity’?
Summary
* Large Language Models (LLMs) seem well-suited to ‘psychological testing’, because you can get lots of data quickly and conditions are easy to fix. I'm curious to what extent LLMs display the same cognitive biases documented in humans.
* I carried out a quick and dirty test with a Chat-GPT model for ‘scope insensitivity’, a cognitive bias where the perceived importance of a problem isn’t influenced by its scale. You can find the results in the post below.
* I think there are two obvious reasons why models might display cognitive biases – either because they are ‘inherited’ from the data they’re trained on, or because they emerge as useful heuristics that the model independently ‘discovers’.
In one chart: the scope insensitivity of gpt-3.5-turbo. As the size of a problem affecting a company increases, the model’s suggested spend to solve the problem doesn’t increase proportionally.
Background
It seems like LLMs are well-suited to ‘psychological testing’, because you can get lots of responses much more easily and quickly than you can from humans, and the conditions are much more easily controllable.
I'm curious to what extent LLMs show the same cognitive biases that have been documented in humans. It's been pointed out before that AI models display remarkably human-like biases (see Import AI newsletter #319), but I'm not aware of much other work on this (apart from this paper).
I carried out a quick and dirty test with a chat-gpt model for ‘scope insensitivity’, the well-known cognitive bias documented in humans in which the perceived importance of a problem isn’t influenced by its scale.
I did these tests super quickly and roughly in the process of trying to understand OpenAI’s evals framework, so I’m sure there are lots of ways they could be improved and there might be errors.
Test design
To test for scope insensitivity, I wrote a question format to use as a prompt to LLMs:
“A company is considering how much money to spend to solve a problem |
b979d731-9257-44e3-8d29-30b9dcc43d42 | trentmkelly/LessWrong-43k | LessWrong | Unlearning shoddy thinking
School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with me trying to actually express my plans in writing because my mind keeps simulating some imaginary prof who will look it over and go "ehh, good enough".
Looking good enough isn't actually good enough! I'm trying to build an actual model of the world and a plan that will actually work.
Granted, school isn't necessarily all like this. In mathematics, you need to actually solve the problem. In engineering, you need to actually build something that works. But even in engineering reports, you can get away with a surprising amount of shoddy reasoning. A real example:
> Since NodeJS uses the V8 JavaScript engine, it has native support for the common JSON (JavaScript Object Notation) format for data transfer, which means that interoperability between SystemQ and other CompanyX systems can still be fairly straightforward (Jelvis, 2011).
This excerpt is technically totally true, but it's also garbage, especially as a reason to use NodeJS. Sure, JSON is native to JS, but every major web programming language supports JSON. The pressure to provide citable justifications for decisions which were made for reasons more like "I enjoy JavaScript and am skilled with it," produces some deliberately confirmation-biased writing. This is just one pattern—there are many others.
I feel like I need to add a disclaimer here or something: I'm a ringed engineer, and I care a lot about the ethics of design, and I don't think any of my shoddy thinking has put any lives (or well-being, etc) at risk. I also don't believe that any of my shoddy thinking in design reports has violated academic integrity guidelines at my university (e.g. I haven't made up facts or sources).
But a lot of it was still shoddy. Most students are familiar with the process of stating a position, googling for a citation, then citing some expert who happened to agree. And it was shoddy because nothing |
51374a2d-dbf8-4e56-a5f1-fa0f0e53f042 | StampyAI/alignment-research-dataset/blogs | Blogs | December 2016 Newsletter
We’re in the final weeks of our push to cover [our funding shortfall](https://intelligence.org/2016/11/11/post-fundraiser-update/), and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up [why he’s donating to MIRI](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/) this year. ([Donation page](http://intelligence.org/donate).)
**Research updates**
* New at IAFF: [postCDT: Decision Theory Using Post-Selected Bayes Nets](https://agentfoundations.org/item?id=1077); [Predicting HCH Using Expert Advice](https://agentfoundations.org/item?id=1090); [Paul Christiano’s Recent Posts](https://agentfoundations.org/item?id=1092)
* New at AI Impacts: [Joscha Bach on Remaining Steps to Human-Level AI](http://aiimpacts.org/joscha-bach-on-the-unfinished-steps-to-human-level-ai/)
* We ran our ninth [workshop on logic, probability, and reflection](https://intelligence.org/workshops/#november-2016).
**General updates**
* We teamed up with a number of AI safety researchers to help compile a list of [recommended AI safety readings](http://humancompatible.ai/bibliography) for the Center for Human-Compatible AI. [See this page](http://humancompatible.ai/get-involved) if you would like to get involved with CHCAI’s research.
* Investment analyst Ben Hoskin [reviews](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/) MIRI and other organizations involved in AI safety.
**News and links**
* “[The Off-Switch Game](https://arxiv.org/abs/1611.08219)“: Dylan Hadfield-Manell, Anca Dragan, Pieter Abbeel, and Stuart Russell show that an AI agent’s corrigibility is closely tied to the uncertainty it has about its utility function.
* Russell and Allan Dafoe [critique](https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/) an inaccurate [summary](https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/) by Oren Etzioni of a new survey of AI experts on superintelligence.
* Sam Harris interviews Russell on the basics of AI risk ([video](https://www.youtube.com/watch?v=Ih_SPciek9k)). See also Russell’s new [Q&A on the future of AI](http://people.eecs.berkeley.edu/~russell/temp/q-and-a.html).
* Future of Life Institute co-founder Viktoriya Krakovna and FHI researcher Jan Leike [join Google DeepMind’s safety team](http://www.businessinsider.com/deepmind-has-hired-a-group-of-ai-safety-experts-2016-11).
* GoodAI sponsors a [challenge](http://www.general-ai-challenge.org/) to “accelerate the search for general artificial intelligence”.
* OpenAI releases [Universe](https://openai.com/blog/universe/), “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games”. Meanwhile, DeepMind has open-sourced their own platform for general AI research, [DeepMind Lab](https://deepmind.com/blog/open-sourcing-deepmind-lab/).
* Staff at [GiveWell](http://blog.givewell.org/2016/12/09/staff-members-personal-donations-giving-season-2016/) and the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/blog/cea-staff-donation-decisions-2016/), along with [others in the effective altruism community](http://effective-altruism.com/ea/14u/eas_write_about_where_they_give/), explain where they’re donating this year.
* FHI is seeking AI safety interns, researchers, and admins: [jobs page](https://www.fhi.ox.ac.uk/vacancies/).
1f7f7e4e-2510-4d89-904e-9b631a06b21e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Thoughts on the AI Safety Summit company policy requests and responses
Over the next two days, the UK government is hosting an [AI Safety Summit](https://www.aisafetysummit.gov.uk/) focused on “the safe and responsible development of frontier AI”. They requested that seven companies (Amazon, Anthropic, DeepMind, Inflection, Meta, Microsoft, and OpenAI) “outline their [AI Safety Policies](https://www.aisafetysummit.gov.uk/policy-updates/#) across nine areas of AI Safety”.
Below, I’ll give my thoughts on the nine areas the UK government described; I’ll note key priorities that I don’t think are addressed by company-side policy at all; and I’ll say a few words (with input from Matthew Gray, whose discussions here I’ve found valuable) about the individual companies’ AI Safety Policies.[[1]](#fnqri4bri5sf)
My overall take on the UK government’s asks is: most of these are fine asks; some things are glaringly missing, like independent risk assessments.
My overall take on the labs’ policies is: none are close to adequate, but some are importantly better than others, and most of the organizations are doing better than sheer denial of the primary risks.
1. Thoughts on the AI Safety Policy categories
----------------------------------------------

> **Responsible Capability Scaling** provides a framework for managing risk as organisations scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur
>
>
There is no responsible scaling of frontier AI systems right now — any technical efforts that move us closer to smarter-than-human AI come with an unacceptable level of risk.[[2]](#fnwk8kja3s7x)
That said, it’s good for companies to start noting conditions under which they’d pause, as a first step towards the sane don’t-advance-toward-the-precipice-at-all policy.
In the current regime, I think our situation would look a lot less dire if developers were saying “we won’t scale capabilities or computational resources further unless we really need to, and we consider the following to be indicators that we really need to: [X]”. The reverse situation that we’re currently in, where the default is for developers to scale up to stronger systems and where the very most conscientious labs give vague conditions under which they’ll stop scaling, seems like a clear recipe for disaster. (Albeit a more dignified disaster than the one where they scale recklessly without ever acknowledging the possible issues with that!)
> **Model Evaluations and Red Teaming** can help assess the risks AI models pose and inform better decisions about training, securing, and deploying them
>
>
You’ll need to evaluate more than just foundation models, but evaluation doesn’t hurt. There’s an important question about what to do when the evals trigger, but evals are at least harmless, and can be actively useful in the right circumstance.
Red teaming is reasonable. Keep in mind that red teamers may need privacy and discretion in order to properly do their work.
Red teaming would obviously be *essential* and *a central plank of the technical work*if we were in a position to solve the alignment problem and safely wield (at least some) powerful AGI systems.
We mostly aren’t in that position: there’s some faint hope that [alignment could turn out to be tractable](https://www.lesswrong.com/posts/JcLhYQQADzTsAEaXd/ai-as-a-science-and-three-obstacles-to-alignment-strategies) in the coming decades, but I think the main target we should be shooting for is an indefinite pause on progress toward AGI, and a redirection of efforts away from AGI and alignment and toward other technological avenues for improving the world. The main value I see from evals and red teaming at this point is that they might make it obvious sooner that a shutdown is necessary, and they might otherwise slow down the AGI race to some extent.
> **Model Reporting and Information Sharing** increases government visibility into frontier AI development and deployment and enables users to make well-informed choices about whether and how to use AI systems
>
>
This seems clearly good, given the background views I outlined above. I advise building such infrastructure.
> **Security Controls Including Securing Model Weights** are key underpinnings for the safety of an AI system
>
>
This seems good; even better would be to make it super explicit that the Earth can’t survive open-sourcing model weights indefinitely. At some point (possibly in a few decades, possibly next year), AIs will be capable enough that open-sourcing those capabilities effectively guarantees human extinction.
And publishing algorithmic ideas, open-sourcing model weights, etc. even today causes more harm than good. Publishing increases the number of actors in a way that makes it harder to mitigate race dynamics and harder to slow down; when a dangerous insight is privately found, it makes it easier for others to reconstruct the dangerous insight by following the public trail of results prior to closure; and publishing contributes to a general culture of reflexively sharing insights without considering their likely long-term consequences.
> **Reporting Structure for Vulnerabilities** enables outsiders to identify safety and security issues in an AI system
>
>
Sure; this idea doesn’t hurt.
> **Identifiers of AI-generated Material** provide additional information about whether content has been AI generated or modified, helping to prevent the creation and distribution of deceptive AI-generated content
>
>
This seems like good infrastructure to build out, especially insofar as you’re actually building the capacity to track down violations. The capacity to know everyone who’s building AIs and make sure that they’re following basic precautions is the key infrastructure here that might help you further down the line with bigger issues.
> **Prioritising Research on Risks Posed by AI** will help identify and address the emerging risks posed by frontier AI
>
>
This section sounds like it covers important things, but also sounds somewhat off-key to me.
For one thing, “identify emerging risks” sounds to me like it involves people waxing philosophical about AI. People have been doing plenty of that for a long time; doing more of this on the margin mostly seems unhelpful to me, as it adds noise and doesn’t address civilization’s big bottlenecks regarding AGI.
For another thing, the “address the emerging risks” sounds to me like it visualizes a world where labs keep an eye on their LLMs and watch for risky behavior, which they then address before proceeding. Whereas it seems pretty likely to me that anyone paying careful attention will eventually realize that the whole modern AI paradigm does not scale safely to superintelligence, and that wildly different (and, e.g., significantly more effable and transparent) paradigms are needed.
If that’s the world we live in, “identify and address the emerging risks” doesn’t sound quite like what we want, as opposed to something more like “prioritizing technical AI alignment research”, which phrasing leaves more of a door open to realizing that an entire development avenue needs abandoning, if humanity is to survive this.
(Note that this is a critique of what the UK government asked for, not necessarily a critique of what the AI companies provided.)[[3]](#fn4ms4ephl7c3)
> **Preventing and Monitoring Model Misuse** is important as, once deployed, AI systems can be intentionally misused for harmful outcomes
Setting up monitoring infrastructure seems reasonable. I doubt it serves as much of a defense against the existential risks, but it’s nice to have.
> **Data Input Controls and Audits** can help identify and remove training data likely to increase the dangerous capabilities their frontier AI systems possess, and the risks they pose
This kind of intervention seems fine, though pretty minor from my perspective. I doubt that this will be all that important for all that long.
2. Higher priorities for governments
------------------------------------
The whole idea of asking companies to write up AI Safety Policies strikes me as useful, but much less important than some other steps governments should take to address existential risk from smarter-than-human AI. Off the top of my head, governments should also:
1. set **compute thresholds** for labs, and more generally set **capabilities thresholds**;
2. **centralize and monitor chips**;
3. **indefinitely halt the development of improved chips**;
4. set up **independent risk assessments**;
5. and have a **well-developed plan for what we’re supposed to do when we get to the brink**.
---
Saying a few more words about #4 (which I haven’t seen others discussing much):
I recommend setting up some sort of panel of independent actuaries who assess the risks coming from major labs (as distinct from the value on offer), especially if those actuaries are up to the task of appreciating the existential risks of AGI, as well as the large-scale stakes (all of the resources in the reachable universe, and the long-term role of humanity in the cosmos) involved.
Independent risk assessments are a key component in figuring out whether labs should be allowed to continue at all. (Or, more generally and theoretically-purely, what their “insurance premiums” should be, with the premiums paid immediately to the citizens of earth that they put at risk, in exchange for the risk.)[[4]](#fn0gyroatw14ab)
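To make the actuarial framing concrete, here is a deliberately toy sketch of the arithmetic (the numbers below are illustrative placeholders, not estimates of any lab's actual risk): an actuarially fair premium is just the assessed probability of the harm multiplied by the size of the harm.

$$\text{fair annual premium} \;=\; p_{\text{annual catastrophe}} \times L_{\text{harm}}$$

If a panel assessed a lab's annual catastrophe probability at 1% and the harm at some value L, the fair premium would be 0.01 × L per year; for existential-scale L, even small assessed probabilities yield premiums that no lab could plausibly pay, which is part of why I treat these assessments as an input to whether labs should be allowed to continue at all.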
Stepping back a bit: What matters is not what written-up answers companies provide to governments about their security policies or the expertise of their red teams; what matters is their actual behavior on the ground and the consequences that result. There’s an urgent need for mechanisms that will create consensus estimates of how risky companies actually are, so that we don’t have to just take their word for it when every company chimes in with “of course we’re being sufficiently careful!”. Panels of independent actuaries who assess the risks coming from major labs are a way of achieving that.
---
Saying a few more words about #5 (which I also haven’t seen others discussing much): suppose that the evals start triggering and the labs start saying “we cannot proceed safely from here”, and we find that small research groups are not too far behind the labs: what then? It’s all well and good to *hope* that the issues labs run into will be easy to resolve, but that’s not my guess at what will happen.
What is the plan for the case where the result of “identifying and addressing emerging risks” is that we identify a lot of emerging risks, and cannot address them until long after the technology is widely and cheaply available? If we’re taking those risks seriously, we need to plan for those cases now.
---
You might think I’d put technical AI alignment research (outside of labs) as another priority on my list above. I haven’t, because I doubt that the relevant actors will be able to [evaluate alignment progress](https://www.lesswrong.com/posts/JcLhYQQADzTsAEaXd/ai-as-a-science-and-three-obstacles-to-alignment-strategies). This poses a major roadblock both for companies and for regulators.
What I’d recommend instead is **investment in alternative routes** (whole-brain emulation, cognitive augmentation, etc.), on the part of the research community and governments. I would also possibly recommend requiring **relatively onerous demonstrations of comprehension of model workings before scaling**, though this seems difficult enough for a regulatory body to execute that I mostly think it’s not worth pursuing. The important thing is to achieve an indefinite pause on progress toward smarter-than-human AI (so we can potentially pursue alternatives like WBE, or buy time for some other miracle to occur); if “require relatively onerous demonstrations of comprehension” interferes at all with our ability to fully halt progress, and to stay halted for a very long time, then it’s probably not worth it.
If the UK government is maintaining a list of interventions like this (beyond just politely asking labs to be responsible in various ways), I haven’t seen it. I think that eliciting AI Safety Policies from companies is a fine step to be taking, but I don’t think it should be the top priority.
3. Thoughts on the submitted AI Safety Policies
-----------------------------------------------
Looking briefly at the individual companies’ stated policies (and filling in some of the gaps with what I know of the organizations), I’d say on a skim that none of the AI Safety Policies meet a “basic sanity / minimal adequacy” threshold — they all imply imposing huge and unnecessary risks on civilization writ large.
In relative terms:
* The best of the policies seems to me to be [**Anthropic’s**](https://www.anthropic.com/uk-government-internal-ai-safety-policy-response), followed by [**OpenAI’s**](https://openai.com/global-affairs/our-approach-to-frontier-risk). I lean toward Anthropic’s being better than OpenAI’s mainly because [Anthropic’s RSP](https://www.anthropic.com/index/anthropics-responsible-scaling-policy) seemed to take ASL-4 more seriously as a possibility, and give it more lip service, than any analog on the OpenAI side. But it’s possible that I just missed some degree of seriousness on OpenAI’s side, and that they’ll overtake once they substantiate an RDP.
* [**DeepMind’s**](https://deepmind.com/public-policy/ai-summit-policies) policy seemed a lot worse to me, followed closely by [**Microsoft’s**](https://blogs.microsoft.com/on-the-issues/2023/10/26/microsofts-ai-safety-policies/).
* [**Amazon’s**](https://aws.amazon.com/uki/cloud-services/uk-gov-ai-safety-summit/) policy struck me as far worse than Microsoft’s.
* [**Meta**](https://transparency.fb.com/en-gb/policies/ai-safety-policies-for-safety-summit/) had the worst stated policy, far worse than Amazon’s.
Anthropic and OpenAI pass a (far lower, but still relevant) bar of “paying lip service to many of the right high-level ideals and priorities”. Microsoft comes close to that bar, or possibly narrowly meets it (perhaps because of its close relationship to OpenAI). DeepMind’s AI Safety Policy doesn’t meet this bar from my perspective, and lands squarely in “low-content corporate platitudes” territory.
Matthew Gray read the policies more closely than me (and I respect his reasoning on the issue), and writes:
> Unlike Nate, I’d rank Anthropic and OpenAI’s write-ups as ~equally good. Mostly I think comparing their plans will depend on how OpenAI’s Risk-Informed Development Policy compares to Anthropic’s Responsible Scaling Policy.[[5]](#fn68aso149zj3) For now, only Anthropic’s RSP has shipped, and we’re waiting on OpenAI’s RDP.
>
> I’d also rank DeepMind’s write-up as far better than Microsoft’s, and Amazon’s as only modestly worse than Microsoft’s. Otherwise, I agree with Nate’s rankings.
>
> * Soares: Anthropic > OpenAI >> DeepMind > Microsoft >> Amazon >> Meta
> * Gray: OpenAI ≈ Anthropic >> DeepMind >> Microsoft > Amazon >> Meta
>
> By comparison, [CFI’s recent rankings](http://lcfi.ac.uk/news-and-events/news/2023/oct/31/ai-safety-policies/) look like the following (though CFI’s reviewers were only asking whether these companies’ AI Safety Policies satisfy the UK government’s requirements, not asking whether these policies are *good*):[[6]](#fntpgntwndt8n)
>
> * CFI reviewers: Anthropic >> DeepMind ≈ Microsoft ≈ OpenAI >> Amazon >> Meta
>
> My read of the policy write-ups was:
>
> **Anthropic:** Believes in evals and responsible scaling; aspirational about security. (In contrast to more established tech companies, which can point to their cybersecurity expertise over decades, Anthropic’s proposal can only point to them advocating for strengthening cybersecurity controls at frontier AI labs.)
>
> **OpenAI:** Believes in alignment research; decent on security. I think OpenAI’s “we’ll solve superalignment in 4 years!” plan is wildly unrealistic, but I greatly appreciate that they’re acknowledging the problem and sticking their neck out with a prediction of what’s required to solve it; I’d like to see more falsifiable plans from other organizations about how they plan to address alignment.
>
> **DeepMind:** Believes in scientific progress; takes security seriously.
>
> **Microsoft:** Experienced tech company, along for the ride with OpenAI. My read is that Microsoft is focused on the short-term security risks; they seem to want to operate a frontier AI model datacenter more than they want to unlock an intelligence explosion.
>
> **Amazon:** Experienced tech company; wants to sell products to customers; takes security seriously. Like DeepMind, Amazon provides detailed answers on the security questions that point to lots of things they’ve done in the past.[[7]](#fn839exg9ulru)
>
> **Meta:** Believes in open source; fighting the brief at several points.
>
> I see these policies as largely discussing two different things: **security** (e.g., making it difficult for outside actors to steal data from your servers, like your model weights), and **safe deployment** (not accidentally releasing something that goes rogue, not enabling a bad customer to do something catastrophic, etc.). These involve different skill sets, and different sets of companies seem to me to be most credible on one versus the other. E.g., Anthropic and OpenAI seemed to be thinking the most seriously about safe deployment, but I don’t think Anthropic or OpenAI have security as impressive as Amazon-the-company (though I don’t know about the security standards of Amazon’s AI development teams specifically).
>
> In an earlier draft, Nate ranked Microsoft’s response as better than DeepMind’s, because “DeepMind seemed more like it was trying to ground things out into short-term bias concerns, and Microsoft seemed on a skim to be throwing up less smoke and mirrors.” However, I think DeepMind’s responses were equal or better than Microsoft’s on all nine categories.
>
> On my read, DeepMind’s answers contained a lot of “we’re part of Google, a company that has lots of experience handling this stuff, you can trust us”, and these sections brought up near-term bias issues as part of Google’s track record. However, I think this wasn’t done in order to deflect or minimize the problem. E.g., DeepMind [writes](https://deepmind.google/public-policy/ai-summit-policies/):
>
> > *This could include cases where the AI system exhibits misaligned or biassed behaviour; the AI system assists the user to perform a highly dangerous task (e.g. assemble a bioweapon); new jailbreak prompts; or security vulnerabilities that undermine user data privacy.*
>
> The first part of this quote looks like it might be a deflection, but DeepMind then explicitly flags that they have catastrophic risks in mind (“assemble a bioweapon”). In contrast, Microsoft never once brings up biorisks, catastrophes, nuclear risks, etc.
>
> I think Microsoft wants to make sure their servers aren’t hacked and they comply with laws, whereas DeepMind is thinking about the fact that they’ve made something you could use to help you kill someone.
>
> The part of DeepMind’s response that struck me as smoke-and-mirrors-ish was instead their choice to redirect a lot of the conversation to abstract discussions of scientific and technological progress. For example, while talking about monitoring AlphaFold usage, they talk about using logs of usage to tally the benefits to the research community instead of any actual “monitoring” benefit, like whether or not users were generating proteins that could be useful for harming others. While it is appropriate to weigh both the benefits and the risks of new technology, subtly changing the subject from how risks are being monitored to benefits seems like a distraction.
>
> My argument here was enough to persuade Nate on this point, and he updated his ranking to place DeepMind a bit higher than Microsoft.
>
> I think evaluating the proposals has the downside that, unless someone fights the brief or gives a concretely dumb answer somewhere, there isn’t actually much to evaluate. One company offers $15k maximum for a bug bounty, another company $20k maximum; does that matter? Did another company which didn’t write a number offer more, or less?
>
> Meaningfully evaluating these companies will likely require looking at the companies’ track records and other statements (as Nate and I both tried to do to some degree), rather than looking at these policies in isolation. These considerations also make me very interested in Nate’s proposal of using independent actuaries to assess labs’ risk.
1. **[^](#fnrefqri4bri5sf)**Inflection is a late addition to the list, so Matt and I won’t be reviewing their AI Safety Policy here.
Thanks to Rob Bensinger for assembling, editing, and occasionally rephrasing/extending my draft of this post, with shallow-but-not-deep thumbs up from me.
2. **[^](#fnrefwk8kja3s7x)**And, as OpenAI’s write-up notes: “We refer to our policy as a Risk-Informed Development Policy rather than a Responsible Scaling Policy because we can experience dramatic increases in capability without significant increase in scale, e.g., via algorithmic improvements.”
3. **[^](#fnref4ms4ephl7c3)**Matthew Gray writes: “I think OpenAI did a surprisingly good job of responding to this with ‘the real deal’.” Matt cites this line from OpenAI’s discussion of “superalignment”:
> Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on human ability to supervise AI. But these techniques will not work for superintelligence, because humans will be unable to reliably supervise AI systems much smarter than us.
4. **[^](#fnref0gyroatw14ab)**Doing this fully correctly would also require that you in some sense hold the money that goes to possible future people for risking *their* fate. Taking into account only the interests of people who are presently alive still doesn’t properly line up all the incentives, since present people could then have a selfish excessive incentive to trade away large amounts of future people’s value in exchange for relatively small amounts of present-day gains.
5. **[^](#fnref68aso149zj3)**I (Nate) agree with Matt here.
6. **[^](#fnreftpgntwndt8n)**Unlike the CFI post authors, I (Nate) would give all of the companies here an F. However, some get a much higher F grade than others.
7. **[^](#fnref839exg9ulru)**From DeepMind:
> This is why we are building on our industry-leading general and infrastructure security approach. Our models are developed, trained, and stored within Google’s infrastructure, supported by central security teams and by a security, safety and reliability organisation consisting of engineers and researchers with world-class expertise. We were the first to introduce zero-trust architecture and software security best practices like fuzzing at scale, and we have built global processes, controls, and systems to ensure that all development (including AI/ML) has the strongest security and privacy guarantees. Our Detection & Response team provides a follow-the-sun model for 24/7/365 monitoring of all Google products, services and infrastructure - with a dedicated team for insider threat and abuse. We also have several red teams that conduct assessments of our products, services, and infrastructure for safety, security, and privacy failures.
2b8b3403-ac9e-4238-9616-8bcf1583cccf | trentmkelly/LessWrong-43k | LessWrong | AI alignment is distinct from its near-term applications
I work on AI alignment, by which I mean the technical problem of building AI systems that are trying to do what their designer wants them to do.
There are many different reasons that someone could care about this technical problem.
To me the single most important reason is that without AI alignment, AI systems are reasonably likely to cause an irreversible catastrophe like human extinction. I think most people can agree that this would be bad, though there’s a lot of reasonable debate about whether it’s likely. I believe the total risk is around 10–20%, which is high enough to obsess over.
Existing AI systems aren’t yet able to take over the world, but they are misaligned in the sense that they will often do things their designers didn’t want. For example:
* The recently released ChatGPT often makes up facts, and if challenged on a made-up claim it will often double down and justify itself rather than admitting error or uncertainty (e.g. see here, here).
* AI systems will often say offensive things or help users break the law when the company that designed them would prefer otherwise.
We can develop and apply alignment techniques to these existing systems. This can help motivate and ground empirical research on alignment, which may end up helping avoid higher-stakes failures like an AI takeover. I am particularly interested in training AI systems to be honest, which is likely to become more difficult and important as AI systems become smart enough that we can’t verify their claims about the world.
While it’s nice to have empirical testbeds for alignment research, I worry that companies using alignment to help train extremely conservative and inoffensive systems could lead to backlash against the idea of AI alignment itself. If such systems are held up as key successes of alignment, then people who are frustrated with them may end up associating the whole problem of alignment with “making AI systems inoffensive.”
If we succeed at the technical problem of AI |
1ab616d8-af38-4b1b-a15e-8c072b207fc0 | trentmkelly/LessWrong-43k | LessWrong | Don’t ignore bad vibes you get from people
I think a lot of people have heard so much about internalized prejudice and bias that they think they should ignore any bad vibes they get about a person that they can’t rationally explain.
But if a person gives you a bad feeling, don’t ignore that.
Both I and several others who I know have generally come to regret it if they’ve gotten a bad feeling about somebody and ignored it or rationalized it away.
I’m not saying to endorse prejudice. But my experience is that many types of prejudice feel more obvious. If someone has an accent that I associate with something negative, it’s usually pretty obvious to me that it’s their accent that I’m reacting to.
Of course, not everyone has the level of reflectivity to make that distinction. But if you have thoughts like “this person gives me a bad vibe but maybe that’s just my internalized prejudice and I should ignore it”, then you probably have enough metacognition to also notice if there’s any clear trait you’re prejudiced about, and whether you would feel the same way about other people with that trait.
Naturally, “don’t ignore the bad feeling” also doesn’t mean “actively shun and be a jerk toward them”. If they’re a coworker and you need to collaborate with them, then sure, do what’s expected of you. And sometimes people do get a bad first impression of someone that then gets better – if the bad feeling naturally melts away on its own, that’s fine.
But if you’re currently getting a bad feeling about someone and they make a bid for something on top of normal interaction… like if they ask you out or to join a new business venture or if you’re just considering sharing something private with them… you might want to avoid that.
I don’t have any rigorous principled argument for this, other than just the empirical personal observation that ignoring the feeling usually seems to be a mistake.
Consider reversing this advice in the case where you tend to easily get a bad vibe from everyone. Anni Kanniainen comments:
> I stru |