| id | source | formatted_source | text |
|---|---|---|---|
4ee05178-0a6b-4789-b94c-a5306922fd0b | trentmkelly/LessWrong-43k | LessWrong | A quick map of consciousness
Original post: http://bearlamp.com.au/a-quick-map-of-consciousness/
----------------------------------------
Prior knowledge: Many maps lightly held, Leaky concepts, Boundaries
Map and territory: mind to reality – To be presented alongside the caveat, “what is good?”
(Well “good” is in the map, not the territory. This diagram very quickly becomes a mess, but before that happens, let’s talk about reifying the parts of this model to see if it’s useful)
----------------------------------------
To me right now, it seems like consciousness is the ladder between the map and the territory. In the diagram, on the left is a thought suggesting "this is an apple"; on the right, a red apple is pictured. When the attention points at a red apple, the consciousness is filled with a map of declarative definition that labels, names, and concludes that this is an apple.
Consciousness seems to be a label generating machine. Something fundamental about brains is that they map the territory. They quest towards mapping the territory.
That’s.Just.What.They.Do.
This brings us to the question of: how do I have a good life? I have 3 strategies:
1. [content] Look at different apples
2. [map] Modify the map so that there are more positive opinions of apples
3. [relationality] Appreciate looking at rotten apples if that’s what there is to look at today.
----------------------------------------
Content
If I look at dead apples all day, I’m not going to auto-magically have a great day. On the other hand, if I look at great apples, I’m going to be impressed and delighted. The apple could be replaced with beautiful artwork, nice sunsets, tasty food, nice music. Whatever strikes in the heart of desire to be attended to. Improving the content is a reasonable and helpful strategy sometimes.
Sometimes it’s not the content that’s the problem. Maybe there’s nothing wrong with apples but they make me puke. Then I can try the map.
Map
If every time I see an apple I remember that on |
5fdb0026-3b6a-47af-a475-0da6511a3a22 | trentmkelly/LessWrong-43k | LessWrong | Abuse of Productivity Systems
I have moved this post to my blog: http://squirrelinhell.blogspot.com/2016/03/abuse-of-productivity-systems.html |
4d44978b-b4a2-4e4f-a10d-4db218e903a2 | StampyAI/alignment-research-dataset/special_docs | Other | individuallyselected_w5cb5-by Vael Gates-date 20220318
# Interview with AI Researchers individuallyselected_w5cb5 by Vael Gates
**Interview with w5cb5, on 3/18/22**
====================================
**0:00:02.2 Vael:** Alright, so my first question is, can you tell me about what area of AI you work on in a few sentences?
**0:00:08.5 Interviewee:** I worked in [subfield] originally, but I guess I branched out more broadly into AI research, because I'm [high-level research role] now at an AI company.
**0:00:19.9 Vael:** Great, yeah. And then what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?
**0:00:28.3 Interviewee:** So I think, yeah, the world is going to change quite a lot with AI technology, and I think mostly in good ways, just because we're going to empower people with this technology. And it's going to be empowering I think in similar ways to the Internet, where people can do faster search, they have an assistant who can help them with all kinds of stuff. They have friends who maybe are not real, and all kinds of ways to make people happier, I think, or more efficient, or to give them time back and that sort of stuff. But obviously, there are also risks and the main risks are, I think that the field is too dominated by tech bros from Silicon Valley, so I guess I fall under that in a way. And so I think that's a real problem, so we need to take democratization of the technology much more seriously, that's also what my company is doing. And I think if we think about the ethical implications of our technology from first principles, and if we make them first-class citizens rather than just treating them as an afterthought, where you submit your paper and then, "Oh, I also need to write a broader impact statement," but if you take that very seriously from the beginning as a core principle of your organization, then I think you can do much better research in a much more responsible way.
**0:01:56.5 Vael:** Interesting. Alright, so that was the question of "what are you most excited about and what are you most worried about in AI", okay. I heard-- Lots of things they can go, lots of places they can go, lots of directions they can go, but you're worried about domination from specific areas and then people not caring about... ethics enough? or---
**0:02:14.6 Interviewee:** Yeah, so misuse of technology. Do you want me to give you concrete examples? So I think very often, the technology that we develop, even if it's meant for benevolent purposes, can also be re-applied for not so benevolent purposes. And so like speech recognition or face recognition, things like that, you have to just be very careful with how you treat this technology. So that's why I think if people take responsible AI seriously from the beginning, that that is a good thing too.
**0:02:53.0 Vael:** Interesting. So you think if people incorporate responsible AI from the beginning of the process, then there will be less risk of misuse by any agent in the future?
**0:03:04.5 Interviewee:** Yeah, yeah. So you mentioned your IRB, so for a lot of technological research happening in industry, there is no real IRB. Some companies have sort of IRBs but most of them are so commercial and so driven by money in the end. And I think maybe we need an independent AI IRB for the broader research community, where anybody can go there and have somebody look at the potential applications of their work.
**0:03:39.6 Vael:** I see, cool. And then just having that sort of mindset seems good, in addition to the object-level effects. Alright. Makes sense. So focusing on future AI, putting on a science fiction forecasting hat, say we are 50 years, 50 plus years into the future. So at least 50 years into the future, what does that future look like?
**0:04:00.3 Interviewee:** At least 50 years in the future. So I still don't think we will have AGI, and that's I guess, I'm probably unusual in the field because I think a lot of my colleagues would disagree, especially if they're at OpenAI or DeepMind because they think that it's like two years away. (Vael: "Two years, huh!") Yeah, well it depends on who you ask, they have some crazy people. [chuckle] I think in the next decade, we're going to realize what the limitations are of our current technology. I think what we've been doing now has been very efficient in terms of scaling with data and scaling with compute, but it's very likely that we're just going to need entirely new algorithms that just require pure scientific breakthroughs. And so I don't think there's going to be another AI winter, but I do think that things are going to cool down a little bit again, because right now it's just been super hyped up. For good reason too, because we are really making really great progress. But there are still things that we really don't know how to do, so we have language models and they can do things and they're amazing, but we don't know how to make the language model do what we want it to do. So we're all just sort of hacking it a little bit, but it's not really anywhere close to being like a proper assistant, for example, who actually understands what you're saying, who actually understands the world. I think where we want to be 50 years from now is where we have machines who understand the world in the same way that humans understand it, so maybe something like Neuralink. So if I'm being very futuristic, connecting AI to human brains and human perception of reality, that could be a way to get AI to have a much richer understanding of the world in the same way that humans understand it. So like dolphins are also very intelligent, but they also don't understand humans and they are not very useful assistants, right? I don't know if you've ever had any dolphin assistant. So it's not really bad intelligence, it's specifically about human intelligence that makes AI potentially useful for us, and so that's something that I think is often overlooked.
**0:06:26.9 Vael:** So it sounds like, so you're thinking about when AGI will happen. And you said that you don't think we're gonna hit some sort of ceiling or slow down on the current deep learning paradigm or just like keep on scaling--
**0:06:39.6 Interviewee:** Yeah, it's going to be asymptotic, and at some point, we're just going to hit the limits of what we can do with scaling data and scaling compute. And in order to get the next leap to real AGI I think we just need radically different ideas.
**0:06:55.1 Vael:** Yeah, when do you think we're going to-- what kind of systems do you think we're going to have when we cap out on the current scaling paradigm?
**0:07:02.0 Interviewee:** Well, I think like the ones we have now, but yeah, in 50 years, I don't know. But in like 5 to 10 years, it will just be much bigger versions of this. And so what we have seen is that if you scale these systems, they generalize much better. If that keeps happening, then we would just have much better versions of what we have now. But still it's a language model that doesn't understand the world, and so still it's the component that is very limited in seeing only the training data that is in images on the internet, which is not all of the images that we have in the world, right? So I think the real problem is data, not so much scaling the compute.
**0:07:49.7 Vael:** What if we had a system that has cameras and can process auditory stuff that is happening all around it or something and it's not just using internet data, do you think that would eventually have enough data?
**0:08:03.3 Interviewee:** Yeah, so that's what I was just saying. If you have something that's embodied in the world in the same way as a human and where humans treat it as another human, sort of like cyborg style, things like that, that's a good way to get lots of very high quality data in the same way that humans get it. What are they called? Androids, right?
**0:08:24.9 Vael:** Yeah.
**0:08:25.3 Interviewee:** So if we actually had android robots walking around and being raised by humans and then we figured out how the learning algorithms would work in those settings, then you would get something that is very close to human intelligence. A good example I always like to use is the smell of coffee. So I know that you know what coffee smells like, but can you describe it to me in one sentence?
**0:08:54.2 Vael:** Probably not, no.
**0:08:55.7 Interviewee:** You can't, right? But the same goes for the taste of banana or things like that. I know that you know, so I've never had to express this in words. So this is one of the fundamental parts of your brain; smell and taste are even older than sight and hearing. And so there's a lot of stuff happening in your brain that is just taken for granted. You can call this common sense or whatever you want, but it's like an evolutionary prior that all humans share with each other, and so that prior governs a lot of our behavior and a lot of our communication. So if you want machines to learn language but they don't have that prior, it becomes really, really hard for them to really understand what we're saying, right?
**0:09:38.7 Vael:** Yeah. I think when I think about AGI, I think about AGI that can do-- or, just, generalizable systems that can do things that humans want them to do. So imagine we have like a CEO AI or a scientist AI. I don't think I need my CEO or scientist AI to know what coffee smells like per se, but I do need it to be able to like break down experiments and think kind of creative thoughts and figure out things.
**0:09:58.7 Interviewee:** Yeah, but I think what I'm saying is that if they don't know what coffee smells like, that's just one example, but there are millions of these things that are just things we take for granted, that we don't really talk about. And so this will not be borne out in the data in any way, so that means that a lot of the underlying assumptions are never really in the data, right? They're in our behavior, and so for an AI to pick up on those is going to be very difficult.
**0:10:27.6 Vael:** What if there were cameras everywhere, and it got to record everyone and process those?
**0:10:32.3 Interviewee:** Yeah, maybe. So the real question is, if you just throw infinite data at it, then will it work with current machine learning algorithms? Is I guess what you're asking, right? And so I don't know. I mean, I know that our learning algorithm is very different from a neural net, but I think if you look at it from a mathematical perspective, then gradient descent is probably more efficient than Hebbian learning anyway. So mathematically, it's definitely possible that if you have infinite data and infinite compute, then you can get something really amazing. Sure, we are the proof of that, right? So whether that also immediately makes it useful for us is a different question, I think.
**0:11:20.8 Vael:** Interesting. Yeah, I think I'm trying to probe "do we need something like embodied AI in order to get AGI" or something. And then your last comment was like, whether that makes it useful for us. I'm like, well, presumably we're going to... feeding it a lot of data lets it do grounding, so like relationships between language and what actually exists in the world and how physics works. But presumably, we're going to be training them to do what we want, right? So that it will be useful to us?
**0:11:43.5 Interviewee:** Well, it depends, right? Can we do that? Probably the way they will learn this stuff is through self-supervised learning, not through us supervising them. We don't know how to specify reward signals and things like that anyway. I'm not sure, if we actually are able to train up these huge systems that are actually intelligent through self-supervised learning, if they are then going to listen to us, right? Why would they?
**0:12:15.2 Vael:** Right. Okay, cool. Yeah, so this kind of leads right into my next question here. So imagine we're in the future and we've got some AGIs and we've got a CEO AI, and I'm like, "Okay, CEO AI, I want you to maximize profits and not run out of money and not try to exploit people and try to avoid side effects," and it seems like this would currently be extremely challenging for many reasons. But one is that we're not very good at taking human values and putting them-- and like goals and preferences-- and putting them in mathematical formulations that AI can currently work with. And I worry that this is gonna happen in the future as well. So the question is: what do you think of the argument, "Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous"?
**0:12:53 Interviewee:** Well, yeah. I agree with that. I don't think... I think there are two separate questions here. So one you're asking about is the paperclip maximizer argument from Nick Bostrom. So like if you have a system and you tell it like "you need to make as many paperclips as you possibly can" then it's going to like destroy the earth to make as many paperclips as possible.
**0:13:15 Vael:** Well that would be doing maybe-- oh, I see. Not quite what I intended. Yeah, all right.
**0:13:19.8 Interviewee:** Yeah, so-- okay, so if that's not what the underlying question was, then... We don't really... I also think that we are... some of us are fooling ourselves into believing that we know everything as humans and I think human values are changing all the time. I don't think we can capture correct human values. I don't think there is an absolute moral truth that we should all adhere to. I think that just morality itself is a very cultural concept. But I'm [interested in] philosophy, so I'm a bit different from most AI researchers, I guess. So I think that we could try to encode some very basic principles, so this is like Asimov's laws and things like that, but I don't think we can really go much further than that. And I think even in those cases, like you said, we don't know how to mathematically encode them in a way where you enforce whatever this dynamical system is that you're training, so a neural net, but then probably more complicated than the current neural nets-- how do we impose a particular set of values? I don't think we know how to do that. I don't think there's a mathematical way to do that either actually, because it's all [inaudible]--
**0:14:44.7 Vael:** Yeah, do you think we are eventually going to be able to?
**0:14:50.0 Interviewee:** So I think if you ask Yann LeCun or someone like that, he would say that probably, if we ever get to systems of this sort of level of intelligence, then they would be benevolent, because they're very smart and able to sort of understand how weak humans are.
**0:15:09.4 Vael:** Interesting. Yeah. So when I hear that argument, I'm like, okay, it seems like Yann LeCun thinks that as you get more intelligent, you have morals that are very similar to humans, and this just kind of comes--
**0:15:21.7 Interviewee:** No, not necessarily. No, but just better morals, right? So I think that the argument is sort of that if you look at human progress, then we've also been getting better and better moral systems and a better understanding of what human values really matter. And like 100 years from now, probably everybody's gonna look back at us and say, "They were eating meat. They were killing all these animals." So we are on the path of enlightenment. I don't know if I agree with this, but that's one way of saying it. And so a sign of an organism or a culture becoming more and more enlightened is also that you become more and more benevolent I think for others, but maybe that's a bit of a naive take.
**0:16:05.9 Vael:** Yeah. I think in my mind-- certainly we have-- well, actually, I don't know that we have the correlation that humans are getting smarter and also at the same rate, or, like... Like humans are pretty smart. And we're getting better at IQ tests, but I don't know that we're vastly increasing our intelligence per se.
**0:16:20.4 Interviewee:** Yeah. That's for different reasons, right. Yeah.
**0:16:24.9 Vael:** Yeah. And meanwhile, we have, over-- centuries, like not that many centuries, we've been increasing our moral circle and putting in animals and people far away from us, etcetera. But I kind of think of the axes of intelligence and morality as kind of orthogonal, where if we have a system that is getting much smarter, I don't expect it to have... I expect kind of a lot of human morality runs from evolutionary pressures and also coordination difficulties, such that you need to be able to not kill people, otherwise the species is gonna go extinct. And you know, there's a bunch of stuff that are kind of built into humans that I wouldn't expect to happen just natively with intelligence; where intelligence, I would think of something like... the ability to solve problems well, to make multi-step plans, to think in the future, to take out correlations and figure out predictions, and I don't expect that to naively correlate with---
**0:17:19.9 Interviewee:** Yeah, so I think that's a very narrow definition of intelligence, and so I don't know if that definition of intelligence you have, if that actually is the most useful kind of intelligence for humans. So I think that in our society there is this concept where intelligence just means like mathematical reasoning capabilities almost, right? (Vael: "Yeah.") And that is a very, very narrow definition, and most of our intelligence is not that, right? (Vael: "Yes.") So for regimes to be useful to us... so I think what you're talking about is sort of like this good old-fashioned AI concept of intelligence, where you have symbolic reasoners, and you're like... you're very good at very fast symbol manipulation. And like, "This is what computers are for." So we should just have super smart computers who can do the stuff that we don't want to do or can't do. It's possible that our intelligence is a direct consequence, not of our mathematical reasoning capabilities, but of something else, of our cultural interactions. So I definitely think if humans were not a multi-agent society, that we would not be nearly as intelligent. So a lot of our intelligence comes from sharing knowledge and communicating knowledge and having to abstract knowledge so that you can convey it to other agents and that sort of stuff.
**0:18:50.0 Vael:** Cool. Yeah. So when I think about how I define intelligence, I'm like, "What is the thing I care about?" The thing I care about is how we develop AI. And I'm like, "How are we gonna develop AI?" We're gonna develop it so that it completes economic incentives. So we want robots that do tasks that humans don't want to do. We want computers--
**0:19:09.2 Interviewee:** Yeah. But is that AI or is that just machine learning? We're trying to have a... like input-output black box, and we want that black box to be as optimal as possible for making money or whatever the goal is, right? So that's also a worry I have, is that a lot of people are conflating these different concepts. So artificial intelligence... yeah, it depends on how you define it. Some people think of it more as like AGI. If you ask Yann again and all the old school deep learners, they would say, it used to be that they were explicitly not doing AI. So AI is like Simon and Newell and all that sort of stuff, so like pure symbol manipulation, symbolic AI. And pattern recognition is not AI. And now, since deep learning became very popular, some of the people were like, "Oh yeah, this is AI now," but they used to be machine learning and not AI. So one thing is just like this black box. It can be anything and we just want to have the best possible black box for our particular problem mapping X to Y. And this could be any kind of problem, it could be like image recognition or whatever. In some cases, you want to have a symbolic approach, in other cases, you want to have a learning approach, it sort of just depends. So it's just software. Right? But in one case, the software is well defined, and in the other case, it's a bit fuzzier.
**0:20:37.9 Vael:** Yeah. So this all kind of depends on your frame, of course. I think my frame, or the reason why I care, is I'm like, I think machine learning, AI, I don't know, whatever this thing is where humans are pouring a lot of investment and effort into making software better, and by better I mean better able to accomplish tasks that we want it to do-- I think that this will be-- it is very powerful, it has affected society a lot already and it will continue to affect society a lot. Such that like 50 years out, I expect this to be... Whatever we developed to be very important in how... Affect just a lot of things.
**0:21:10.8 Interviewee:** But we're notoriously bad at predicting the future, right? So if you asked in the '60s, people would say like, there's flying cars, and like we're living on Mars and all that stuff. And we're getting a bit closer, but we're still not there yet. But none of these people would have seen the internet coming. And so I think maybe the next version of the internet is going to be more AI driven. So that is a sort of... first use case that I would see for AI, which is like a better internet.
**0:21:50.0 Vael:** Interesting. Yeah, I think kind of... people will find whatever economic niches will get them a lot of profit, is sort of how I expect things to continue to go, given that that seems to be... Given that society works kind of the same way, and people have a lot of time and energy and have the capability to invest in this stuff, we will continue to develop machine learning, AI software, etcetera, such that it--
**0:22:13.2 Interviewee:** We've been doing that for like 30 years or even more. From the Perceptron, Rosenblatt. We've been already doing this and so it's not really a question of like AI taking over the world, it's software taking over the world, and AI in some cases is better than like rule-based software. But it's still software taking over the world.
**0:22:35.8 Vael:** Yeah, yeah, certainly. And then the current paradigm of like, gigantic neural nets, seems to be better at doing things that we want it to do. And so we're continuing on in that direction, and at some point, as you say, it becomes less able to do what we want it to do, given the amount of resources that we're pouring into it, like that ratio trades off. Okay--
**0:22:54.3 Interviewee:** Yeah. So there's other trade offs too, right? So as you become bigger as a neural net, you also become a lot more inefficient. This is already the case for something like GPT-3; latency is a big problem. For us to be able to talk like this to a machine, if the machine has 100 trillion parameters, it's going to be way too slow. It's going to take, I don't know, 10 minutes to generate an answer to a simple question. So it's not only a tradeoff of... Best does not just mean accuracy. Best also is like, how efficient are you? How fair are you? How robust are you? How much environmental impact do you have? All of these different sort of metrics that all matter for choosing what defines "best" for a system. I think this is something we need to improve a lot on as a community, where we stop thinking beyond this pure accuracy thing, which is like an academic concept, to an actual... like how can we deploy these systems in a responsible way, where we think about all the possible metrics that matter for deployment. So we want to be at the Pareto frontier of like 10 different metrics, not just accuracy.
**0:24:06.8 Vael:** Cool. Alright, that makes sense. So still thinking ahead in the future, do you think we'll ever get something like a CEO AI?
**0:24:14.0 Interviewee:** So, if-- so a CEO AGI or a CEO AI?
**0:24:18.8 Vael:** Um, some sort of software system that can do the things that a CEO can do.
**0:24:25.6 Interviewee:** No.
**0:24:26.1 Vael:** No. Okay.
**0:24:28.6 Interviewee:** So not before we get AGI. So I think that is an AI complete problem. But I do think we'll get a very good CEO AI assistant. [inaudible] ...real human. It's like a plane, right? So like a plane is flown by a pilot but it's really flown by a computer. So I think the same could be true for a company where the company has like, a CEO pilot whose job is also to inspire people and do all of the human soft skills. And they have an assistant who does a lot of measurement stuff and tries to give advice for like where the company should be headed and things like that.
**0:25:05.1 Vael:** Okay, awesome. And you do think that you could have a CEO AGI, it sounds like.
**0:25:10.3 Interviewee:** Yeah, but if you have an AGI, then we don't need CEOs anymore.
**0:25:14.3 Vael:** What happens when we get AGI?
**0:25:16.9 Interviewee:** All the humans die.
**0:25:17.5 Vael:** All the humans die. Okay! [laughs]
**0:25:20.1 Interviewee:** [laughs] So I think it depends. I think actually the most likely scenario, as I said, for AGI to come into existence is when humans merge with AI. And so I don't think that it's a bad thing for AGI to emerge. So if there is an AGI, then it will be a beautiful thing, and we will have made it as a society. So yeah, if that thing takes over, then that thing is going to be insane, it's going to take over the universe, and then we will be sort of like the cute little people who made it happen. So either we become very redundant very quickly or we sort of merge with AI into this new species kind of.
**0:26:14.1 Vael:** Interesting, okay. And you don't necessarily see a connection between, like, the current... [you think] if we just push really hard on the current machine learning paradigm for 50 years, we won't have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging?
**0:26:31.7 Interviewee:** So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to fix symbolic processing happening on neural networks-- so we're still very good at pattern recognition, and I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. I don't know if you've ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent and that loop is what consciousness is. But actually we're sort of biological machines who perceive their own actions and over time this became what we consider consciousness. So... where was I going with this? [laughs] What was the question?
**0:27:57.2 Vael:** Yeah, okay. So I'm like, alright, we've got AI, we've got lots of machine learning--
**0:28:00.8 Interviewee:** --oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking and how to implement that on the neural net. The neural symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don't think we can just easily solve them by scaling. And that's-- like there is very little other research happening actually in the field right now.
**0:28:35.3 Vael:** Alright. So say we do scaling, but we also have a bunch of software. Like algorithmic improvements at the rate we're seeing, and we've got hardware improvements as well. I guess this is just more scaling, but we have optical, we have quantum computing. And then we have some sort of fast learning systems, we know how to do symbolic processing, we're much more efficient. Here we now have a system that generalizes very well and is pretty efficient, and I don't know, maybe we're a hundred years out. Say maybe we're in a different paradigm, maybe we're kind of in the same paradigm. We now have a system that is--
**0:29:05.5 Interviewee:** We would be in a different paradigm for sure.
**0:29:07.4 Vael:** Okay. We are in a different paradigm, because... because all these learning algorithms--?
**0:29:11.4 Interviewee:** Paradigms don't really last that long, if you look at the history of science.
**0:29:16.2 Vael:** Okay, cool. But are we still operating under like, here's software with faster learning algorithms, more efficient learning algorithms, like symbolic reasoning, Bayesian stuff--
**0:29:24.7 Interviewee:** Maybe. But I mean it could be that neuromorphic hardware finally lives up to its promise, or that we can do photonic chips at the speed of light computation and things like that. We're also very good in AI at fooling ourselves into thinking that we are responsible for all of these amazing breakthroughs, but without hardware engineers at NVIDIA, none of this stuff would have happened, right? They are doing very different things.
**0:29:55.1 Vael:** Alright, so we've got this AI system which is quite general, we're in maybe a different paradigm, but we're still like-- faster learning systems. Here we are, these things are very capable, very general, when they generate stories, they model physics in the world and then use that to generate their stories. Maybe they can do a lot of social stuff, maybe they know how to interact with people. And here we are with our system. Is this now an AGI?
**0:30:18.0 Interviewee:** No, no, so-- Okay, now I remember what I was gonna say about the Dennett thing. So we anthropomorphize everything, we take this intentional stance at everything. We do this to ourselves, we do this to everything, especially when it speaks language. So when we see a language model and it's like, "whoa, it's amazing, it does this thing," but all it's really doing is negative log likelihood, maximum likelihood estimation. It's basically just trying to fit "what is the most likely word to go here". So you can ask yourself whether we are so impressed by this system because it's so amazing, or because we are sort of programmed to have a lot of respect for things that speak language, because things that speak language tend to be humans. What you were just saying made it sound like you were saying, when these systems are sort of like humans, when they can do this and when they do that, and when they understand the world. So how do you define "understanding the world" there--
**0:31:18.7 Vael:** I mostly mean like they could sub in for human jobs, for example--
**0:31:25.0 Interviewee:** Yeah, but that's not the same thing as-- stepping in for a human, they can already do that. But it depends on the problem. They're very good at counting, but--
**0:31:34.5 Vael:** Yeah, but I don't think we could have like a mathematician AI right now per se. I guess I forgot to define my interpretation of AGI, but like a system that is very capable of replacing all current human day jobs.
**0:31:51.6 Interviewee:** Including yours and mine?
**0:31:55.9 Vael:** Yup.
**0:31:57.8 Interviewee:** Okay. But then who would it be useful for? Would the president still have a job or not?
**0:32:09.7 Vael:** Uh... It doesn't have to. I think you could just spend-- humans wouldn't have to work anymore, for example, and they could just go around doing whatever they do.
**0:32:16.7 Interviewee:** Yeah. But that's not at all what humans do. We're all so programmed to compete with each other.
**0:32:24.7 Vael:** Yeah, we can have games, we can have competitions, we can do all sorts of things, we have sports.
**0:32:29.1 Interviewee:** I think it's gonna be very quickly my AI versus your AI, basically.
**0:32:33.9 Vael:** Okay, we can have big fights with AIs, that seems very dangerous.
**0:32:37.3 Interviewee:** Yeah, I know, yeah. So that is a more likely scenario, I think, than everybody being nice and friendly and playing games. (Vael: "Yeah.") If people want to have power, and whoever controls the AGI will have the most power, (Vael: "That seems right,") then I think we're going to be developing our own AGIs at the same time. And then those AGIs at some point are going to be fighting with each other.
**0:33:02.0 Vael:** Yeah, yeah, I think we might even get problems before that, where we're not able to get AIs aligned with us. Have you heard of AI alignment?
**0:33:10.9 Interviewee:** Yeah, so [close professional relationship] wrote a nice thesis about it. [Name], I don't know if you know [them] by any chance. So yeah, alignment is important, but my concern with all this alignment stuff is that it's very ill-defined, I think. Either it means the same as correctness, so is your system just correct, or good at what you want it to be good at... alignment is sort of like a reinvention of just correctness. I can see why this is useful for some people to put a new name on it. But I think it's a very old concept where it's just, okay, we're measuring things on a very narrow static test set, but we should be thinking about all these other things. You want your system to be really good when you deploy it in the real world. So it needs to be a good system or a correct or an aligned system. And so alignment maybe is a useful concept, only in the sense that the systems are getting so good now that you can start thinking about different kinds of goodness that we didn't think about before, and we can call that alignment, like human value-style things. But I think the concept itself is very old; it's just like, is your system correct?
**0:34:40.0 Vael:** Yeah. And then it's nowadays being thought about in terms of very far future systems and aligning with all values and preferences. (Interviewee: Yeah.) Cool. Yeah, do you work on any sort of AI safety or what would convince you to work on this or not work on this, etcetera?
**0:34:56.5 Interviewee:** Yeah so, I'm not sure. AI safety is a bit of a weird concept to me, but I do work on responsible AI and ethical AI, yeah.
**0:35:06.9 Vael:** Hm. And what does that mean--
**0:35:09.1 Interviewee:** So these are things like... I'm trying to get better fairness metrics for systems. So in [company] we built this provisional fairness metric where we do some heuristic swaps. And so right now we're working on a more sophisticated method for doing this where, let's say, you have something, a sentence or some sort of natural language inference example, so a premise and a hypothesis and it's about James, like if you change James to Jamal, that shouldn't change your prediction at all. Or if you change the gender from James and you turn him into a woman, that shouldn't change anything there. And it does, actually, if you look at restaurant reviews, if you changed the restaurant to a Mexican restaurant and the person who's eating there to Jamal, then your sentiment goes down. So this is the sort of stuff that shouldn't happen in these systems that is a direct consequence of us just scaling the hell out of our systems on as much data as we can, including all of the biases that exist in this data. So I'm working on trying to do that better measurement for these sorts of things. And so I think if we are not getting better at measurement, then all of this stuff is basically a pointless discussion.
**0:36:29.1 Vael:** Great, thank you. And then my last question is, have you changed your mind on anything during this interview and how was this interview for you?
**0:36:35.9 Interviewee:** It was fun. Yeah, I've done a few of these with various people and it's always a bit like, I don't know. It feels a bit like... we're getting ahead of ourselves a little bit. But maybe I'm also just old. So when I talked to [close professional relationship] and how [they] think about stuff, I'm like, I just don't understand how [they] think about AI.
**0:37:06.2 Vael:** Got it. [They're] like way out here, and we need to make sure that systems do our correct--
**0:37:11.9 Interviewee:** Yeah, [they're] really... Yeah, [they] put a lot more faith also in AI, which I think is very interesting. So I asked [them] like, "Okay, so this alignment stuff, in the end who should we ask what is right or what is wrong? When we're trying to design the best AI systems, who should we ask for what's right and wrong?" And then [their] answer was, "We should ask the AI."
**0:37:38.7 Vael:** What? No, we should ask humans.
**0:37:41.0 Interviewee:** Yeah, no, so [they] think that basically AGI or AI is going to get so good, these language models are gonna get so good that they can tell us how we should think about our own moral philosophical values so that we can impose them onto AI systems. That to me just sounds crazy, like batshit crazy, but that's one way to think about it. I mean, I respect [their] opinion. I just can't understand it.
**0:38:11.7 Vael:** Interesting. Yeah, I think if I try to model what I would imagine [they] would be saying, under the alignment paradigm, I would say that you need to ask for human feedback, but it's hard to get human feedback on very intelligent systems. And so you should ask AI to summarize human feedback, but it should always be grounded in a human, otherwise we're in trouble, so.
[ending comments] |
7b8b419c-3ca5-4904-add7-0c9c1b15265e | StampyAI/alignment-research-dataset/blogs | Blogs | Our all-time largest donation, and major crypto support from Vitalik Buterin
I’m thrilled to announce two major donations to MIRI!
First, a long-time supporter has given MIRI by far our largest donation ever: **$2.5 million per year over the next four years, and an additional ~$5.6 million in 2025**.
This anonymous donation comes from a cryptocurrency investor who [previously donated](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) $1.01M in ETH to MIRI in 2017. Their amazingly generous new donation comes in the form of 3001 MKR, governance tokens used in [MakerDAO](https://makerdao.com/en/whitepaper/), a stablecoin project on the Ethereum blockchain. MIRI liquidated the donated MKR for $15,592,829 after receiving it. With this donation, the anonymous donor becomes our largest all-time supporter.
This donation is subject to a time restriction whereby MIRI can spend a maximum of $2.5M of the gift in each of the next four calendar years, 2021–2024. The remaining $5,592,829 becomes available in 2025.
Second, in other amazing news, the inventor and co-founder of Ethereum, Vitalik Buterin, yesterday gave us a surprise donation of 1050 ETH, worth **$4,378,159**.
This is the third-largest contribution to MIRI’s research program to date, after Open Philanthropy’s [~$7.7M grant in 2020](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) and the anonymous donation above.
Vitalik has previously donated over $1M to MIRI, including major support in our 2017 fundraiser.
We’re beyond grateful for these two unprecedented individual gifts! Both donors have our heartfelt thanks.
The post [Our all-time largest donation, and major crypto support from Vitalik Buterin](https://intelligence.org/2021/05/13/two-major-donations/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
6aab5b60-662c-4f15-b3fb-80944176f380 | trentmkelly/LessWrong-43k | LessWrong | Chapter 1: What's the Question?
Hello! I've created a simple model of how the brain processes information. It's very approximate but seems useful. I've never seen anything that looks similar to it, which makes me uncertain whether it's new or whether I missed something in my education. There won't be a lot of math formulas or tricky theorems. There won't be any complicated neuroscience. Of course, this model is strongly coupled with both. But one of the things I've learned while working on it: starting from complex formulas won't work in most cases. And I bet that at the end of this sequence you will better understand why.
As it usually happens, my path started from the Question. More precisely, from a set of questions:
* Why can I remember the Lego that my parents gifted me on my 6th birthday, but can't remember that theorem by that guy, which I was learning for exams?
* Why do I learn the texts of songs and poems so smoothly? How do they differ from prose? And how does this difference help me keep poetry in mind?
* Why are the Feynman Lectures on Physics enjoyable for me, but a physics book from the university library is not?
* Why can I read HPMoR in two evenings, without any breaks?
Some of them were more meaningful:
* How to learn something effectively?
* How should I do my job? (I am an Android Software Engineer)
* How to communicate with other people?
And a bunch of other questions that had simple wording, but unfortunately hadn't got an Answer.
Ok, there were a couple of answers. Some of them were circular. Some were not complete:
* How to learn something effectively? - Study hard.
* How should I do my job? - You should write a clean code.
* How to communicate with other people? - Interest them, listen to them, use your empathy.
I tried to ask more questions to clarify what those answers mean. To understand how a "hard study" should work. Or how to interest people. Where is the switch that enables "Interested mode"?
That switch should be somewhere in the brain. I had pretty |
8e30ac2a-f075-42a0-b3e2-a4b7b3beda70 | trentmkelly/LessWrong-43k | LessWrong | Poker is a bad game for teaching epistemics. Figgie is a better one.
Editor's note: Somewhat after I posted this on my own blog, Max Chiswick cornered me at LessOnline / Manifest and gave me a whole new perspective on this topic. I now believe that there is a way to use poker to sharpen epistemics that works dramatically better than anything I had been considering. I hope to write it up—together with Max—when I have time. Anyway, I'm still happy to keep this post around as a record of my first thoughts on the matter, and because it's better than nothing in the time before Max and I get around to writing up our joint second thoughts.
As an epilogue to this story, Max and I are now running a beta test for a course on making AIs to play poker and other games. The course will be a synthesis of our respective theories of pedagogy re: games, and you can read more here or in the comments. The beta will run July 15-August 15, in-person in SF, and will be free, but with limited signups it has gone to waitlist-only. We're hoping to run additional iterations in-person in NYC and remote-first starting in September, so please sign up to the mailing list if either of those are of interest.
----------------------------------------
Some trading firms are driven by good decisions made by humans. (Some aren't, but we can set those aside. This post is about the ones that are.) Humans don't make better-than-average-quality decisions by default, so the better class of intellectually-driven quantitative trading firm realizes that they are in the business of training humans to make better decisions. (The second-best class of firm contents themselves with merely selecting talent.) Some firms, famously, use poker to teach traders about decision making under uncertainty.
First, the case for poker-as-educational-tool: You have to make decisions. (Goodbye, Candy Land.) You have to make them under uncertainty. (Goodbye, chess.) If you want to win against smart competition, you have to reverse-engineer the state of your competitors' uncertainty from their decisions, |
4f756067-38ae-490c-82c7-78cee5304560 | trentmkelly/LessWrong-43k | LessWrong | Introducing AlignmentSearch: An AI Alignment-Informed Conversational Agent
Authors: Henri Lemoine, Thomas Lemoine, and Fraser Lee
We are excited to introduce AlignmentSearch, an attempt to create a conversational agent that can answer questions about AI alignment. We built this site in response to ArthurB’s $5k bounty for a LessWrong conversational agent calling for the creation of a chatbot capable of discussing bad AI Alignment takes, and guiding people new to the field through common misunderstandings and early points of confusion.
Tl;dr
AlignmentSearch uses a dataset about AI alignment to construct a prompt for ChatGPT to answer AI alignment-related questions, while citing established sources. We qualitatively observe a massive boost in the quality of answers over ChatGPT without any specialized prompt, with results either on par with or better than those given by much stronger LLMs (GPT-4 and Bing Search).
Two answers backed by ChatGPT
Overview
AlignmentSearch indexes the Alignment Research Dataset, generating vector embeddings to enable nearest-semantic-neighbor search. We take a user query to find the top-k most semantically close “paragraphs” (text blocks of around 220 tokens, plus some padding). The size of the dataset means we generally have access to a few paragraphs that are very semantically similar to the user’s question, and are likely to contain information that is useful in producing an answer. We form a prompt that gives these paragraphs to ChatGPT along with the user’s question and instructions on citation, and retrieve an answer somewhere between summary and synthesis, all with accurate inline citations linking to the source material. We’ve created a website that lets the user interact easily with this process: ask questions, dive into the sources used in an answer, and ask for further clarifications. We also have an alternative mode that exposes the raw results of the semantic search.
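As a rough illustration of this pipeline, here is a minimal sketch in Python. The embedding model, the `load_dataset_index` loader, and the `ask_llm` helper are illustrative assumptions standing in for the actual components, and the prompt wording is simplified.

```python
# Minimal sketch of semantic retrieval + prompt construction (not the actual code).
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def top_k_paragraphs(query, paragraphs, embeddings, k=5):
    """Return the k dataset paragraphs semantically closest to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity; stored rows are pre-normalized
    best = np.argsort(-scores)[:k]
    return [paragraphs[i] for i in best]

def build_prompt(query, sources):
    """Number the retrieved paragraphs and instruct the model to cite them inline."""
    numbered = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return ("Answer the question using only the sources below, "
            "citing them inline as [1], [2], ...\n\n"
            f"{numbered}\n\nQuestion: {query}\nAnswer:")

# Hypothetical glue code:
# paragraphs, embeddings = load_dataset_index()  # ~220-token blocks + embeddings
# answer = ask_llm(build_prompt(q, top_k_paragraphs(q, paragraphs, embeddings)))
```

Pre-normalizing the stored embeddings lets the nearest-semantic-neighbor search reduce to a single matrix-vector product.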
Alignment Research Dataset
The alignment research dataset was announced on LessWrong in June 2022, with a related paper accessible he |
589c8400-afaa-4d80-90fc-823a585b4ada | trentmkelly/LessWrong-43k | LessWrong | Games of My Childhood: The Troops
The Troops (Finnish “joukot”, could also be translated as “the armies” or “the forces”) was a game of pretend that I played the most with my friend Eero; I believe Aleksi also joined in.
The central premise was that each time that you played a video game and killed, recruited, rescued, built, or otherwise destroyed/obtained an enemy, character, or unit in that game, it went into an alternate dimension where it became loyal to you. In other words, it joined your troops in that dimension. The same was true for buildings that you built or destroyed, equipment that you found or bought, cities that you conquered, etc.
This meant that whenever we were playing a game, we were not just playing a game: we were also accumulating resources that persisted between games. We could also combine resources from different games. For example, I might kill a number of soldiers in a game such as Snake’s Revenge on the NES, and then produce a number of laser rifles in a game such as X-Com for the PC. I could then decide that the soldiers I’d gotten from Snake’s Revenge were now armed with the laser rifles from X-Com, making them significantly tougher.
At first, my and Eero’s troops were separate, and we would occasionally trade units. For example, he had beaten the game Star Wars on the NES and destroyed a Death Star; I had beaten the game Snake’s Revenge and destroyed a Metal Gear, a walking robot armed with nuclear missiles. We agreed to trade one of his Death Stars for one of my Metal Gears. He later commented with amusement that this was probably not a great deal for him, given how much more powerful a Death Star is.
I took these trades seriously. Once, I traded a number of tanks from the NES game Top Gun: The Second Mission for something that I’ve forgotten. After we had already agreed on this trade, I became worried – exactly how many tanks had I destroyed while playing Top Gun? I wasn’t sure if I actually _had_ as many tanks as I had agreed to give to Eero. So then I had to l |
62e0196e-9517-445c-94dc-ba0f00838868 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Knowledge Integration for Disease Characterization: A Breast Cancer Example
1 Introduction
---------------
Our goal is to improve health knowledge infrastructures by use of semantic technologies to support data integration in an environment of quickly evolving medical information.
We present a prototype system that uses semantic technologies to integrate medical information relevant for characterizing breast cancer.
Our system can automatically parse the guidelines from the cancer staging manual and construct OWL axioms [[3](#bib.bib3)] that can be used to infer recommended personalized options for patients.
These inferences are made using the data related to the treatment and monitoring of the disease that are represented in RDF [[19](#bib.bib19)].
### 1.1 Background
The authoritative staging system is published by the American Joint Committee on Cancer (AJCC).
As the inaugural authors of the cancer staging manuals have stated in [[2](#bib.bib2)]:
> “Staging of cancer is not an exact science. As new information becomes available about etiology and various methods of diagnosis and treatment, the classification and staging of cancer will change.”
Since the inception of the cancer staging manual in 1977, there have been eight editions. The latest AJCC Cancer Staging Manual, Eighth Edition (AJCC 8th Edition)[[1](#bib.bib1)], makes a tangible effort to incorporate biologic and molecular markers to create a more contemporary personalized approach using pathologic prognostic staging.
This has increased the complexity of the staging criteria.
In order to stage tumors, many physicians rely on cancer staging manuals, or compact ‘cheat sheets’ derived from the contents of these manuals.
However, since the new staging guideline incorporates additional data streams, the physicians have to traverse increasingly tedious decision trees.
In terms of discovering relevant treatment and monitoring options based on the stage, or more broadly the characterization of the disease, physicians usually refer to the National Comprehensive Cancer Network (NCCN) Guidelines [[18](#bib.bib18)].
Navigating these guidelines also is often a tedious process.
Furthermore, in order to keep up with the growing and rapidly changing body of knowledge, physicians may also use subscription services such as *UpToDate* (a clinical decision support resource: <http://www.uptodate.com>), which has articles on many of the state of the art topics in medicine, including cancer.
However, physicians may not have enough time to sift through these articles and ascertain the information that is relevant for the case at hand.
### 1.2 Related Work
Initial work related to an ontology that captured cancer staging information is available in Massicano et al. [[21](#bib.bib21)] for the AJCC 6th edition [[27](#bib.bib27)].
Boeker et al. [[5](#bib.bib5)] have also created an ontology for the same guideline in which they focus on tumors in the colon and rectum.
The biggest difference between the previous ontologies and our cancer staging ontology is the inclusion of additional biomarkers as per the AJCC 8th edition staging criteria, which were not available in the previous staging editions. These biomarkers used in the new edition significantly increased the complexity of the criteria required to stage a tumor.
Additionally, the previous ontologies do not model real-world representations of the tumor concepts in their axioms nor specify those in the comments.
In those ontologies, the tumor is of a certain rdf:type T (class representing severity of tumor size: T0-T4), N (class representing the severity of the spread to the lymph nodes: N0-N3) and M (class representing whether the cancer has metastasized: M0-M1). In the real world, the representation of tumor size T has a value in millimeters (or centimeters) that is used to derive a T value of 0-4. Similarly, N has a value for the number of lymph nodes affected that is used to derive the severity rating from 0-3.
Thus, their approach of representing the cancer characterization using just the rdf:type to the corresponding T, N, M classes is problematic because when any of these derived classifications change as per a new guideline, the RDF graph has to change with it, representing the new classification. In our knowledge graph, these values are encoded as attributes to give them temporal extent, avoiding this problem.
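A minimal rdflib sketch of this attribute-based encoding follows. The namespace, individual, and property names here are illustrative stand-ins rather than the ontology's actual IRIs, and only the familiar T1/T2 size cut-offs are shown.

```python
# Sketch: store measured facts as attributes so they outlive guideline editions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

CST = Namespace("http://example.org/cst#")  # illustrative namespace
g = Graph()

tumor = CST.tumor_001
g.add((tumor, RDF.type, CST.Tumor))
# The measurements themselves, not a derived T/N classification:
g.add((tumor, CST.hasTumorSizeMm, Literal(14, datatype=XSD.integer)))
g.add((tumor, CST.hasPositiveLymphNodes, Literal(2, datatype=XSD.integer)))

def derive_t_category(size_mm):
    """Illustrative size-to-T mapping (T1 <= 20 mm, T2 <= 50 mm); a real system
    would read the edition-specific thresholds from the staging ontology."""
    if size_mm <= 20:
        return "T1"
    if size_mm <= 50:
        return "T2"
    return "T3"
```

When a new edition changes the cut-offs, only the derivation changes; the recorded facts in the graph stay valid.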
Furthermore, in addition to including classes for all cancer stages for the respective guideline, we also map the breast cancer terms to community-accepted terms from the National Cancer Institute thesaurus (NCIt) [[12](#bib.bib12)], and incorporate recommended tests and treatment plans from the openly reusable Clinical Interpretations of Variants in Cancer (CIViC) [[14](#bib.bib14)] data that can be used to provide stage specific recommendations.
Furthermore, our ontology includes terms
that are not included in NCIt or AJCC, such as more specific subclasses of tumor characteristics (T1, T1\_as, T1\_am, T1NOS, etc.) that are available in the Surveillance, Epidemiology, and End Results (SEER) dataset [[16](#bib.bib16)].
### 1.3 Overview of the Knowledge Integration System for Breast Cancer Characterization
We developed our prototype primarily to address the issue of rapidly changing information in characterizing disease, specifically breast cancer.
Since manual look-up of the breast cancer staging criteria is prone to human error, our system was designed to support automated navigation through the tedious decision trees to minimize any look-up errors (a minimal fragment of such a decision tree is sketched below, after Fig. 1). We also provide support for integration of data from various sources.
Fig. [1](#S1.F1 "Figure 1 ‣ 1.3 Overview of the Knowledge Integration System for Breast Cancer Characterization ‣ 1 Introduction ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") depicts the overall knowledge integration architecture that will be explained in detail in the following sections.

Figure 1: Knowledge Integration Architecture for Breast Cancer Characterization
2 Development of the Cancer Staging Ontologies
-----------------------------------------------
As mentioned in related work (Section [1.2](#S1.SS2 "1.2 Related Work ‣ 1 Introduction ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example")) the last known staging ontologies were created for the AJCC 6th edition.
There are no ontologies for the AJCC 7th and 8th editions to the best of our knowledge.
We describe the process we followed when constructing these new staging ontologies, accounting for the complexity of the data streams in the new guideline.
### 2.1 Cancer Staging Terms
The previous breast cancer staging guidelines (i.e. AJCC 7th edition [[10](#bib.bib10)] and earlier) only considered anatomical features such as the size of the tumor (T), the number of lymph nodes affected (N), and whether the cancer has metastasized (M).
Additionally considered in the new staging guidelines [[1](#bib.bib1)] are biomarkers including human epidermal growth factor receptor 2 (HER2), estrogen (ER) and progesterone (PR) receptor statuses and tumor grade (Grade).
This addition has led to a more complex set of rules for staging criteria using the classes corresponding to the specific stages in the AJCC 7th and 8th editions that we incorporated into our Cancer Staging Terms (CST) ontology.
Fig. [2](#S2.F2 "Figure 2 ‣ 2.1 Cancer Staging Terms ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") depicts the 8th staging edition staging class hierarchy.
Each stage class includes *cst:hasRecommendedTest*, *cst:hasTreatmentOption*, and *rdfs:subClassOf* assertions where applicable. We added *rdfs:comment* annotations to better describe the concepts in the ontology, based on descriptions in the medical literature, and to support explanation.

Figure 2: Stage Hierarchy of the AJCC Cancer Staging 8th Edition
Furthermore, in the AJCC staging manuals, and in the data we ingested from other sources, we observed different subclasses for the broader classification of the features considered, i.e. T, N, M, HER2, ER, PR, and Grade in the ontology.
Fig. [3](#S2.F3 "Figure 3 ‣ 2.1 Cancer Staging Terms ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") depicts a small subset of these classifications, which includes various Tumor size (T) classes. Similarly, there are other subclass assertions, and mappings to the NCIt classes for N, M, HER2, ER, PR, and Grade.
We augmented these classes with rdfs:comment, rdfs:label, and owl:equivalentClass assertions obtained from NCIt [[12](#bib.bib12)].
These rdfs:comment and rdfs:label values are used to explain conclusions that result from applying a reasoner with the ontology, as described in detail in Section [5](#S5 "5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example").

Figure 3: Hierarchy of the Tumor Size (T) Classes in Our Integrated Ontology. *We created the tumor size classes in the left two columns to support our integration and reasoning. These classes reflect content in SEER [[16](#bib.bib16)] and not AJCC [[1](#bib.bib1)].*
### 2.2 Translating Staging Criteria into Structured Mappings
We extracted 19 criteria from AJCC 7th edition, and 407 criteria for clinical prognostic stage grouping from AJCC 8th edition. A script was necessary for the 8th edition since the complexity of the staging guideline has increased with the addition of the biomarkers. Table [1](#S2.T1 "Table 1 ‣ 2.2 Translating Staging Criteria into Structured Mappings ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") illustrates the number of different combinations for staging criteria observed in the two staging guidelines. The non-linear expansion of the number of combinations is due to the complex interaction of the additional biomarkers HER2, ER, PR and Tumor Grade.
| Stage | 0 | IA | IB | IIA | IIB | IIIA | IIIB | IIIC | IV |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AJCC 7th Edition | 1 | 1 | 2 | 3 | 2 | 5 | 3 | 1 | 1 |
| AJCC 8th Edition | 1 | 57 | 33 | 77 | 39 | 82 | 92 | 25 | 1 |
Table 1: Number of Feature Combinations for Determining Stage
For each of the two staging guidelines, we created corresponding ‘map files’ to represent the conditions required for a tumor to be classified a certain stage from 0-IV.
We created 18 such map files for the two guidelines (AJCC 7th and 8th editions): for each guideline, 9 map files, one per stage (0, IA, IB, IIA, IIB, IIIA, IIIB, IIIC, and IV). Each line in a 7th edition map file contains a combination of T, N, and M values that results in that stage being assigned to the tumor. The 8th edition map files follow the same form but also include the additional features HER2, ER, PR, and Grade.
If a feature may take *any* value for a tumor to be assigned a stage, the map file omits that feature and includes only the features that matter. For example, for a tumor to be classified stage IV in both guidelines, the only necessary criterion is the presence of distant metastasis (M). Regardless of the other features (T and N in the 7th edition, plus HER2, ER, PR, and Grade in the 8th edition), such a tumor is always classified stage IV; thus only one combination appears in each guideline for determining stage IV.
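As a concrete illustration, a few lines of a hypothetical map file in a simple comma-separated layout are shown below. The first stage IA combination mirrors the axiom in Listing 2.2 in the next section; the remaining line and the exact file layout are illustrative assumptions, not the released artifacts.

```python
# Hypothetical map-file contents for AJCC 8th edition Stage IA.
# Each line is one feature combination; features a stage does not
# depend on are simply omitted from the file.
AJCC8_STAGE_IA_MAP = """\
T1,N0,M0,Grade1,HER2_Neg,ER_Neg,PR_Pos
T1,N0,M0,Grade1,HER2_Neg,ER_Pos,PR_Neg
"""

# Stage IV depends only on metastasis, so its map file has a single line.
STAGE_IV_MAP = "M1\n"
```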
### 2.3 Structured Mappings to Ontology
In order to automatically generate OWL axioms for the staging criteria, we utilized the map files created in Section [2.2](#S2.SS2 "2.2 Translating Staging Criteria into Structured Mappings ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"). These map files were parsed using a script, where the property owl:intersectionOf was leveraged in creating the axioms. For example, in order for a tumor to be classified as Stage IA in the AJCC 7th edition (i.e. AJCC7\_Stage\_IA), a tumor profile must satisfy the axiom in Listing [2.1](#S2.LST1 "Listing 2.1 ‣ 2.3 Structured Mappings to Ontology ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"). However, for the same tumor to be classified as Stage IA in the AJCC 8th edition (i.e. AJCC8\_Stage\_IA), only one of the 57 axioms must be satisfied (Listing [2.2](#S2.LST2 "Listing 2.2 ‣ 2.3 Structured Mappings to Ontology ‣ 2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") demonstrates one such axiom). We developed the breast cancer staging ontology for the AJCC 7th edition (BCS7) and the ontology for the AJCC 8th edition (BCS8) using the above-mentioned procedure to codify all the axioms related to classifying tumors.
@prefix cst: <http://idea.tw.rpi.edu/cancer_staging_terms.owl#> .
[] a owl:Class; rdfs:subClassOf cst:AJCC7_Stage_IA;
    owl:intersectionOf ( cst:T1 cst:N0 cst:M0 ).
Listing 2.1: The Only OWL Axiom for a Tumor to be Classified as Stage IA in the AJCC 7th Edition
[] a owl:Class; rdfs:subClassOf cst:AJCC8_Stage_IA;
    owl:intersectionOf ( cst:T1 cst:N0 cst:M0 cst:Grade1
    cst:HER2_Neg cst:ER_Neg cst:PR_Pos ).
...
Listing 2.2: One of the Many OWL Axioms for a Tumor to be Classified as Stage IA in the AJCC 8th Edition
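A minimal sketch of the axiom-generation script, assuming the hypothetical comma-separated map-file format sketched in Section 2.2 (the released generator may differ in its parsing details), is:

```python
def axioms_from_map_file(map_text: str, stage_class: str) -> str:
    """Emit one owl:intersectionOf axiom in Turtle per map-file line."""
    prefix = "@prefix cst: <http://idea.tw.rpi.edu/cancer_staging_terms.owl#> .\n"
    axioms = []
    for line in map_text.strip().splitlines():
        features = " ".join(f"cst:{f.strip()}" for f in line.split(","))
        axioms.append(
            f"[] a owl:Class; rdfs:subClassOf cst:{stage_class};\n"
            f"    owl:intersectionOf ( {features} )."
        )
    return prefix + "\n".join(axioms)

# Example: the single axiom for stage IV, which depends only on metastasis.
print(axioms_from_map_file("M1", "AJCC8_Stage_IV"))
```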
3 Integrated Cancer Knowledge Graph
------------------------------------
We chose RDF [[19](#bib.bib19)] as the underlying knowledge representation model to handle heterogeneous data while providing interoperable representations.
The CST, BCS7 and BCS8 developed in Section [2](#S2 "2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), are part of our Integrated Cancer Knowledge Graph. Additionally, we extracted data from crowd sourced, open source, reusable cancer resources to augment the knowledge graph with treatment and monitoring options based on the stage inferred using the cancer staging ontologies developed.
### 3.1 Integrating Data from Other Cancer Data Sources
There are many services that provide vast collections of data that may be useful and relevant in a cancer knowledge graph. Some of these services include CIViC [[14](#bib.bib14)], OncoKB [[6](#bib.bib6)], MyCancerGenome [[23](#bib.bib23)] and Integrative Onco Genomics [[13](#bib.bib13)].
As a proof of concept, we incorporated data from CIViC [[14](#bib.bib14)], which provides crowd-sourced, open source, and reusable data identifying drugs that may interact with biomarkers. Additionally, related articles and their trust ratings, captured in the form of provenance, were incorporated from the CIViC data dumps.
Nanopublications [[24](#bib.bib24)] were created for this data using a semantic annotation approach called Semantic Data Dictionaries (SDDs) [[25](#bib.bib25)], which simplifies expressing the full semantics of a dataset. The SDD process [[25](#bib.bib25)] allowed us to link data concepts with each other, reference implicit entities in the data, and attach the corresponding data elements as characteristics of those entities.
The concepts contained in the data records needed to be mapped with related terms from domain specific ontologies such as NCI thesaurus (NCIt) [[12](#bib.bib12)] and Uniprot [[7](#bib.bib7)], as well as general purpose ontologies such as Semanticscience Integrated Ontology (SIO) [[9](#bib.bib9)].
The dictionary mapping table of the SDD that was used for CIViC maps 14 different features in the dataset such as Drugs, Status, Evidence ID, Evidence Level, Gene, Variant, Disease and the Trust Rating to the respective classes available in SIO and NCIt. These classes are used for type assignment when creating a knowledge graph from the data. For example, the *Drug* column in the dataset is mapped to *sio:Drug*, *Gene* column to *sio:Gene*, etc. Furthermore, the classes specified in the *attributeOf*, *inRelationTo* and *wasDerivedFrom* are used in semantically modeling relationships in the generated nanopublications.
A codebook was used to map over 200 specific values found in the CIViC data to corresponding terms in existing ontologies. The disease types found in CIViC were mapped to concepts in the Human Disease Ontology (DOID) [[26](#bib.bib26)], Experimental Factor Ontology (EFO) [[20](#bib.bib20)], and NCIt [[12](#bib.bib12)]. For example, the concept for *HER2-receptor Positive Breast Cancer* in our knowledge graph is mapped to concepts such as *efo:1000294*, *doid:0060079*, and *ncit:C53556* (these specific mappings were looked up using Ontobee: <http://www.ontobee.org>).
Similarly, drugs were mapped to concepts from the Drug Bank [[29](#bib.bib29)], the Drug Ontology (DRON) [[15](#bib.bib15)], and/or Chemical Entities of Biological Interest (ChEBI) Ontology [[8](#bib.bib8)], genes were mapped to terms in Uniprot [[7](#bib.bib7)], etc.
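Conceptually, such a codebook is just a lookup table from raw data values to ontology terms. The sketch below shows the idea with the single (real) disease mapping from above; the surrounding structure and function are illustrative assumptions rather than our actual codebook format.

```python
# Hypothetical codebook structure: (column, raw value) -> ontology terms.
CODEBOOK = {
    ("Disease", "Her2-receptor Positive Breast Cancer"):
        ["efo:1000294", "doid:0060079", "ncit:C53556"],
    # ...200+ further mappings for drugs, genes, evidence levels, etc.
}

def map_value(column: str, value: str) -> list[str]:
    """Return the ontology terms for a raw CIViC value, or [] if unmapped."""
    return CODEBOOK.get((column, value), [])
```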
4 Converting Patient Records to RDF
------------------------------------
In order to evaluate our cancer staging ontology, we needed cancer patient data that included the characteristics of the tumor in RDF, ideally in the nanopublications format [[24](#bib.bib24)].
The SEER datasets [[16](#bib.bib16)] contained the desired data which included demographic information, tumor stage as per the older AJCC 6th edition, and the survival status of patients treated from 1980-2012.
We browsed the datasets using the statistical software SEER\*Stat (<https://seer.cancer.gov/seerstat>) and downloaded a subset of the data to create the patient nanopublications.
Due to the anonymity and privacy constraints on the medical data, the SEER patient records lacked any identifying information like the patient name.
However, for our use case of modeling a patient, we needed an identifying attribute, so we annotated each patient record with a name drawn from Python’s Natural Language Toolkit (NLTK) names corpus [[4](#bib.bib4)].
The patient data was then fed through the SDD pipeline [[25](#bib.bib25)] to generate knowledge graphs that included nanopublications that captured the attributes of a patient and where that information came from within an assertion in the patient graph.
Utilization of the SDD approach allowed us to semantically represent relationships such as the age of patient at diagnosis (i.e. the attribute sio:Age as sio:attributeOf the patient which sio:existsAt the time of diagnosis). We mapped 29 such features for a patient record in the SEER dataset in the data dictionary, and the codebook contains 100+ mappings to terms in NCIt.
Since some of the values occurring in SEER did not match existing terms in the ontologies,
we leveraged our Cancer Staging Terms (CST) ontology, introduced in Section [2](#S2 "2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example").
We defined a codebook mapping for SEER that generates standard values and maps commonly occurring terms to their ontology equivalents.
Because the structured format of the data cannot capture the implicit linkages among the dataset’s attributes, we also defined a SEER dictionary mapping that establishes the entity-attribute relationships needed to convert the data into named graphs with nanopublications.
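A minimal sketch of this conversion for a single SEER-style record is shown below, using rdflib and the NLTK names corpus. The column names, IRIs, and sio-style terms are illustrative assumptions; the actual pipeline is driven by the SDD files and additionally wraps the assertions in nanopublications with provenance.

```python
import random
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD
from nltk.corpus import names  # requires a prior nltk.download("names")

EX = Namespace("http://example.org/seer/")
SIO = Namespace("http://semanticscience.org/resource/")  # illustrative terms below

record = {"age_at_diagnosis": 62, "t": "T2", "n": "N1", "m": "M0"}  # illustrative row

g = Graph()
patient = EX["patient1"]

# SEER records are anonymized, so assign a synthetic name for identification.
g.add((patient, RDFS.label, Literal(random.choice(names.words("female.txt")))))

# Model age as an attribute of the patient, per the SDD-style pattern.
age = EX["patient1_age"]
g.add((age, RDF.type, SIO["Age"]))
g.add((age, SIO["attributeOf"], patient))
g.add((age, SIO["hasValue"], Literal(record["age_at_diagnosis"], datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```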
5 Inference Agent
------------------
We developed a deductive inference agent on the Whyis knowledge graph framework [[22](#bib.bib22)] to infer the stage of a tumor, and the corresponding treatment/monitoring plans.
Whyis provides an environment for automated generalized inference over changes to the knowledge graph, supporting the generation of derived knowledge.
The framework enables knowledge curation using a Semantic Extract, Transform, and Load tool for creating RDF from tabular sources, as well as automated mapping of external linked data knowledge sources.
Furthermore, developers can create custom views for visualizing the data in the underlying knowledge graph.
The Whyis inference agent was built to reason over the nanopublications pertaining to the patient data records constructed in Section [4](#S4 "4 Converting Patient Records to RDF ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), using the cancer staging ontologies CST, BCS7 and BCS8 introduced in Section [2](#S2 "2 Development of the Cancer Staging Ontologies ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example").
While the SDD process [[25](#bib.bib25)] allowed us to model the data easily, it created some challenges for writing inference rules in OWL, such as finding appropriate paths between entities or attributes specific to the nanopublications, and performing inference over individuals rather than just classes.
To address these issues, we decided to take a route similar to SPARQL DL [[28](#bib.bib28)] and built SPARQL templates for different OWL reasoning profiles, as well as custom inference rules based on the SDD files, to be consumed by the inference agent.
An example configuration for an OWL inference rule is shown in Listing [5.1](#S5.LST1 "Listing 5.1 ‣ 5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), and an example custom rule, auto-generated with the utilization of files generated by the SDD process is shown in Listing [5.2](#S5.LST2 "Listing 5.2 ‣ 5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example").
"Class Subsumption Closure": (
where = "?resource rdfs:subClassOf ?class .
?class rdfs:subClassOf+ ?superClass .",
construct="?resource rdfs:subClassOf ?superClass .",
explanation="Since {{class}} is a subclass of {{superClass}},
any class that is a subclass of {{class}} is also a subclass
of {{superClass}}. Therefore, {{resource}} is a subclass of
{{superClass}}.")
Listing 5.1: Example Configuration for an OWL Inference Rule (Class Subsumption)
"AJCC8 Stage IIIA": (
resource="?Tumor",
prefixes="...",
construct="?Tumor cst:hasAJCCStage cst:AJCC8\_Stage\_IIIA .",
where=tnm\_where +
?T rdf:type cst:T3 .
?N rdf:type cst:N3 .
?M rdf:type cst:M0 .
?Grade rdf:type cst:Grade1 .
?HER2 rdf:type cst:HER2\_Pos .
?ER rdf:type cst:ER\_Pos .
?PR rdf:type cst:PR\_Pos .
Listing 5.2: Example Configuration for a Custom Inference Rule (One of the Criteria for a Tumor to be Classified as Stage IIIA in the AJCC 8th Edition)
These configurations are used to instantiate the following variables in the inference agent: resource, prefixes, where, construct and explanation.
The prefixes and where variables are used in a SPARQL query that selects relevant URIs from the triple store.
The resource variable identifies the element returned by the query to which the new triples will be attached.
The form of the new triples that will be added is specified in the construct variable.
An explanation for the rule creating this new knowledge is specified in the explanation variable.
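Conceptually, the agent assembles these variables into a standard SPARQL CONSTRUCT query and adds the resulting triples to the graph. The following minimal sketch, using rdflib over the rule dictionaries shown above, conveys the idea; the Whyis implementation differs in details such as nanopublication handling, and the sketch assumes the needed prefixes are supplied in the rule or already bound on the graph.

```python
from rdflib import Graph

def apply_rule(graph: Graph, rule: dict) -> Graph:
    """Run one inference rule as a SPARQL CONSTRUCT over the graph."""
    query = (
        f"{rule.get('prefixes', '')}\n"
        f"CONSTRUCT {{ {rule['construct']} }}\n"
        f"WHERE {{ {rule['where']} }}"
    )
    inferred = Graph()
    for triple in graph.query(query):  # CONSTRUCT results iterate as triples
        inferred.add(triple)
    return inferred
```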
### Generating Explanations
Our data conversion process captures the provenance of the various sources we convert to nanopublications, as well as explanations of why specific assertions were inferred. These natural language explanations make it easier for a non-technical user, who might not have in-depth knowledge of the staging rules, to understand why a certain stage was inferred.
When an explicit explanation is not provided in the rule, it is derived from the where clause used to create the assertion corresponding to the inference.
The explanation is then associated with that assertion on the new inferred stage using the prov:used property. As an example, when the custom inference rule specified in Listing [5.2](#S5.LST2 "Listing 5.2 ‣ 5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") is fired on *Patient D*, whose tumor satisfies the criteria given in the where clause in that rule, an explanation similar to the one shown in Listing [5.3](#S5.LST3 "Listing 5.3 ‣ Generating Explanations ‣ 5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") will be generated.
For better readability of the explanation, the rdfs:label or rdfs:comment of the values that get bound to variables such as ?T,?N,?M, etc. (i.e. ‘Primary Tumor size’, ‘Degree of spread to lymph nodes’, ‘Presence of distant metastasis’, etc.) are used instead of the actual class names.
Patient D’s tumor was found to be AJCC8 Stage IIIA since the
following are true:
- Primary Tumor size is T3 .
- Degree of spread to lymph nodes is N3 .
- Presence of distant metastasis is M0 .
- Tumor Grade is Grade1 .
- Human Epidermal growth factor Receptor 2 (HER2) is Positive.
- Estrogen Receptor (ER) is Positive.
- Progesterone Receptor (PR) is Positive.
Listing 5.3: Example of Explanation for Inferring a Stage in the AJCC 8th Edition
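Generating such text is essentially string templating over the rule's variable bindings once they have been resolved to their rdfs:label or rdfs:comment values. A minimal sketch follows, with the function and its inputs as illustrative assumptions rather than the agent's actual code:

```python
def render_explanation(patient: str, stage: str, bindings: dict) -> str:
    """Render a natural language explanation from label/value bindings."""
    lines = [f"{patient}'s tumor was found to be {stage} since the",
             "following are true:"]
    lines += [f"- {label} is {value} ." for label, value in bindings.items()]
    return "\n".join(lines)

print(render_explanation(
    "Patient D", "AJCC8 Stage IIIA",
    {"Primary Tumor size": "T3",
     "Degree of spread to lymph nodes": "N3",
     "Presence of distant metastasis": "M0"}))
```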
Using a similar strategy, we are able to identify possible drug treatment plans from the CIViC cancer database by matching the disease type that a drug targets to the patient’s inferred cancer stage. To achieve this, we generated custom inference rules from the CIViC SDD files; once the inferencer runs these rules on the patient nanopublications, the corresponding explanations are generated and attached to the stage assertion nanopublications.
6 Visualization of the Cancer Characterization
-----------------------------------------------
In order to demonstrate the integrated cancer knowledge graph and the reasoning capabilities of the Whyis inference agent, we built a visualization tool that displays different treatment paths and guideline impacts to a patient in the form of interactive reports as introduced by Kennedy et al. [[17](#bib.bib17)].
The visualization is built on the Whyis knowledge graph framework (introduced in Section [5](#S5 "5 Inference Agent ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example")).
When a user, say a physician, selects a patient record, they are presented with information that helps enhance their diagnostic process, and in some cases, eliminates the manual labor of walking through the decision trees in the guidelines to support cancer staging decisions.
As can be seen in Fig. [4](#S6.F4 "Figure 4 ‣ 6 Visualization of the Cancer Characterization ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example") and Fig. [5](#S6.F5 "Figure 5 ‣ 6 Visualization of the Cancer Characterization ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), the view is divided into four sections: (1) *Patient Details*, (2) *Biomarker and Staging*, (3) *Treatment Plan*, and (4) *Suggested Drugs*.

Figure 4: AJCC 7th Edition Staging Characterization

Figure 5: AJCC 8th Edition Staging Characterization
In this visualization tool, it is possible to choose between the three latest AJCC staging guidelines, i.e. AJCC 6th, 7th, and 8th editions. Once a guideline is selected, the view dynamically loads newly derived knowledge using asynchronous JavaScript SPARQL POST requests. The derived knowledge includes the inferred stage, whether this is an up-stage/down-stage/no-change, and the explanations behind the inferred stage. Based on the inferred stage for the guideline selected, the corresponding treatment and monitoring options available in our integrated cancer knowledge graph (i.e. CIViC drug nanopublication records) are also queried and presented to the user.
A screenshot of a patient’s report under the older 7th edition is shown in Fig. [4](#S6.F4 "Figure 4 ‣ 6 Visualization of the Cancer Characterization ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), and the same patient’s report under the newer 8th edition is shown in Fig. [5](#S6.F5 "Figure 5 ‣ 6 Visualization of the Cancer Characterization ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"). Note the difference in the inferred stage: the patient is down-staged from IIIA in the 7th edition to IIB in the 8th edition. There are also some changes to the treatment and monitoring options based on this newly inferred stage.
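The requests issued by the view follow the standard SPARQL 1.1 protocol; the sketch below shows an equivalent query in Python rather than the browser's JavaScript, with a hypothetical endpoint URL, tumor IRI, and property usage:

```python
import requests

ENDPOINT = "http://localhost:5000/sparql"  # hypothetical Whyis SPARQL endpoint

QUERY = """
PREFIX cst: <http://idea.tw.rpi.edu/cancer_staging_terms.owl#>
SELECT ?stage WHERE {
  <http://example.org/seer/patient1_tumor> cst:hasAJCCStage ?stage .
}
"""

resp = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
print(resp.json()["results"]["bindings"])
```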
7 Evaluation
-------------

Figure 6: Stage Transitions of 250 Patient Records from SEER
We used our cancer staging ontologies and the inference agent on 250 randomly selected SEER patient records to estimate the prevalence of stage changes between the two staging guidelines.
We anticipated a number of changes given that the latest AJCC 8th edition utilizes additional biomarkers to determine stage.
These SEER patient records were first transformed into nanopublications using the SDD process [[25](#bib.bib25)] as explained in Section [4](#S4 "4 Converting Patient Records to RDF ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example"), after which our inference agent was applied to determine the stage as per the two guidelines.
The aggregated view of these stage transitions from the AJCC 7th to the 8th edition is shown in Fig. [6](#S7.F6 "Figure 6 ‣ 7 Evaluation ‣ Knowledge Integration for Disease Characterization: A Breast Cancer Example").
As can be seen in the figure, the stage of the majority of patients did not change, but a substantial percentage were either up-staged or down-staged.
For example, of the patients assigned stage IIB under the 7th edition (19% of the population), 15% were down-staged to IB, 30% were reclassified to IIA, 38% remained in stage IIB, and 13% and 5% were up-staged to IIIA and IIIB, respectively.
This indicates that there is a strong need for re-characterizing breast cancer according to the new guideline. Our ontologies and the supporting tools provide the first step in this process.
8 Discussion
-------------
We have utilized semantic technologies for all aspects of our system: from characterizing breast cancer and representing synthetic patient data to loading structured and unstructured treatment and monitoring data into a knowledge graph.
For the integrated cancer knowledge graph generation, we mapped concepts in several datasets using a codebook and modeled a structure amongst the attributes using the dictionary mapping table.
The deductive inference agent we developed leverages SPARQL DL reasoning, where queries are used to select existing triples and construct new triples.
This was done over common inference rules including class subsumption and class or property equivalence closures, as well as custom rules pertaining to the cancer staging.
The inference was applied to heterogeneous data sources in our integrated cancer knowledge graph to seamlessly derive new knowledge by applying the inference rules.
The visualization we created reacts to changes in the triple store, automatically updating what the user sees.
The information in our cancer knowledge graph is kept current with periodic semantic extract, transform, and load updates.
Our system allows one to consider a multitude of parameters related to tumor biology as well as standard pathology simultaneously, and it can easily be updated to support new classification criteria.
### Resource Contributions
We expect the following publicly available artifacts, along with the applicable documentation,
to be useful resources for anyone interested in analyzing breast cancer data according to the new and the old cancer staging guidelines.
1. Ontologies:
1. Cancer Staging Terms (CST)
2. Breast Cancer Staging Ontology for the AJCC 7th Edition (BCS7)
3. Breast Cancer Staging Ontology for the AJCC 8th Edition (BCS8)
2. Semantic Annotations:
1. Semantic Data Dictionaries
2. Code Books
*(for SEER and CIViC)*
3. Source Code:
1. AJCC Guideline Extractor
2. OWL Axiom Generator
3. Whyis Inference Agent
4. Custom Inference Rules
5. Visualization
4. Data:
1. SEER Nanopublications
2. CIViC Nanopublications
9 Future Work
--------------
There are many online resources with rapidly changing information from clinical trials, as well as data from basic science research with useful cancer data that can be leveraged to augment the cancer knowledge graph. However, when multiple data streams are combined, especially drug information, there may be inconsistent or ambiguous information. Therefore, we will need to resolve such issues using a combination of provenance, data integrity, and trust in the source and/or the methodology.
The inference agent we developed can be used to identify treatment paths based on a patient’s cancer stage. However, the CIViC data [[14](#bib.bib14)] we used for this purpose defines treatment paths for the broader stages (i.e. stage II as opposed to AJCC’s narrower IIA or IIB stages). Therefore, we plan to ascertain the correct treatment paths for all the narrower stages and add those in to the cancer knowledge graph.
We plan to incorporate additional data sources such as the NCCN clinical practice guidelines in oncology [[18](#bib.bib18)], which is the authoritative source for physicians in identifying suitable cancer treatment and monitoring plans. This will allow our inference agent to output the precise treatment paths, in addition to the ones that are obtained by linking the patient’s inferred narrower AJCC tumor stage to the broader stage with ontological properties such as rdfs:subClassOf relationships.
We also expect future ontologies to be built using the AJCC API as a resource for all valid stage permutations.
We plan to expand the inference capabilities, which are currently restricted to class, instance, property subsumption, equivalence, and inversion closures, to other techniques that will help derive even more relevant knowledge.
For example, we believe it would be useful to infer ‘patients like me’ using instance matching and identify alternate treatment paths that have worked in the past for similar patients, and predict response to a treatment path using temporal reasoning.
As new guidelines are adopted, they will yield new staging results, so updates or fixes to the patient data and to the existing guideline encodings are needed. The Whyis framework provides an effective mechanism to ‘retire’ old inferences and trigger computation of new ones, as long as the nanopublication keeps the same URI: when a new version of a nanopublication is added, the framework removes older versions, along with any inferences made from them. We opted to create guideline-specific stage classes so that we can switch between guidelines easily; for example, we have AJCC7\_Stage\_IIA and AJCC8\_Stage\_IIA rather than a generic Stage\_IIA. A useful future addition would be to instead encode which guideline a staging criterion comes from in the provenance assertion for that triple, without creating explicit per-guideline classes, and to use that provenance information during inference to determine the stage per the selected guideline. This change requires versioning of assertions in our integrated cancer knowledge graph and some changes to the custom inference rules.
10 Conclusion
--------------
We have presented a prototype knowledge integration system that can be used to encapsulate the breadth of information required to characterize disease.
The specific domain problem we address is characterizing breast cancer, which today is predominantly done by manually looking up cancer staging guidelines.
In fact, oncology is moving towards adopting the concept of *Precision Oncology*, in which the treatment plans and therapies are driven by data from personalized genetic markers independent of cancer type [[11](#bib.bib11)]. In the future, new guidelines for cancer staging are expected to incorporate genomic test results analyzed in the context of the patient’s history, which will further increase the complexity of the staging criteria, requiring automated mechanisms similar to the techniques illustrated in this paper. Therefore, it is our expectation that the resources contributed in this paper and the methodologies to ingest rapidly changing information, will be useful to application designers who are aiming to support next generation precision medicine assistant tools.
Acknowledgements
----------------
This work is partially supported by IBM Research AI through the AI Horizons Network. We thank our colleagues from IBM (Amar Das, Ching-Hua Chen) and RPI (John Erickson, Alexander New, Rebecca Cowan) who provided insight and expertise that greatly assisted the research. |
22d1117f-8df4-4887-8610-6e1841106493 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "A few years back, my great-grandmother died, in her nineties, after a long, slow, and cruel disintegration. I never knew her as a person, but in my distant childhood, she cooked for her family; I remember her gefilte fish, and her face, and that she was kind to me. At her funeral, my grand-uncle, who had taken care of her for years, spoke. He said, choking back tears, that God had called back his mother piece by piece: her memory, and her speech, and then finally her smile; and that when God finally took her smile, he knew it wouldn’t be long before she died, because it meant that she was almost entirely gone. I heard this and was puzzled, because it was an unthinkably horrible thing to happen to anyone, and therefore I would not have expected my grand-uncle to attribute it to God. Usually, a Jew would somehow just-not-think-about the logical implication that God had permitted a tragedy. According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality. Hence I was surprised to hear my grand-uncle attributing the slow disintegration of his mother to a deliberate, strategically planned act of God. It violated the rules of religious self-deception as I understood them. If I had noticed my own confusion, I could have made a successful surprising prediction. Not long afterward, my grand-uncle left the Jewish religion. (The only member of my extended family besides myself to do so, as far as I know.) Modern Orthodox Judaism is like no other religion I have ever heard of, and I don’t know how to describe it to anyone who hasn’t been forced to study Mishna and Gemara. There is a tradition of questioning, but the kind of questioning . . . It would not be at all surprising to hear a rabbi, in his weekly sermon, point out the conflict between the seven days of creation and the 13.7 billion years since the Big Bang—because he thought he had a really clever explanation for it, involving three other Biblical references, a Midrash, and a half-understood article in Scientific American. In Orthodox Judaism you’re allowed to notice inconsistencies and contradictions, but only for purposes of explaining them away, and whoever comes up with the most complicated explanation gets a prize. There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend. In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You’re allowed to doubt. You’re just not allowed to successfully doubt. I expect that the vast majority of educated Orthodox Jews have questioned their faith at some point in their lives. But the questioning probably went something like this: “According to the skeptics, the Torah says that the universe was created in seven days, which is not scientifically accurate. But would the original tribespeople of Israel, gathered at Mount Sinai, have been able to understand the scientific truth, even if it had been presented to them? Did they even have a word for ‘billion’? 
It’s easier to see the seven-days story as a metaphor—first God created light, which represents the Big Bang . . .” Is this the weakest point at which to attack one’s own Judaism? Read a bit further on in the Torah, and you can find God killing the first-born male children of Egypt to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. An Orthodox Jew is most certainly familiar with this episode, because they are supposed to read through the entire Torah in synagogue once per year, and this event has an associated major holiday. The name “Passover” (“Pesach”) comes from God passing over the Jewish households while killing every male firstborn in Egypt. Modern Orthodox Jews are, by and large, kind and civilized people; far more civilized than the several editors of the Old Testament. Even the old rabbis were more civilized. There’s a ritual in the Seder where you take ten drops of wine from your cup, one drop for each of the Ten Plagues, to emphasize the suffering of the Egyptians. (Of course, you’re supposed to be sympathetic to the suffering of the Egyptians, but not so sympathetic that you stand up and say, “This is not right! It is wrong to do such a thing!”) It shows an interesting contrast—the rabbis were sufficiently kinder than the compilers of the Old Testament that they saw the harshness of the Plagues. But Science was weaker in these days, and so rabbis could ponder the more unpleasant aspects of Scripture without fearing that it would break their faith entirely. You don’t even ask whether the incident reflects poorly on God, so there’s no need to quickly blurt out “The ways of God are mysterious!” or “We’re not wise enough to question God’s decisions!” or “Murdering babies is okay when God does it!” That part of the question is just-not-thought-about. The reason that educated religious people stay religious, I suspect, is that when they doubt, they are subconsciously very careful to attack their own beliefs only at the strongest points—places where they know they can defend. Moreover, places where rehearsing the standard defense will feel strengthening. It probably feels really good, for example, to rehearse one’s prescripted defense for “Doesn’t Science say that the universe is just meaningless atoms bopping around?” because it confirms the meaning of the universe and how it flows from God, etc. Much more comfortable to think about than an illiterate Egyptian mother wailing over the crib of her slaughtered son. Anyone who spontaneously thinks about the latter, when questioning their faith in Judaism, is really questioning it, and is probably not going to stay Jewish much longer. My point here is not just to beat up on Orthodox Judaism. I’m sure that there’s some reply or other for the Slaying of the Firstborn, and probably a dozen of them. My point is that, when it comes to spontaneous self-questioning, one is much more likely to spontaneously self-attack strong points with comforting replies to rehearse, than to spontaneously self-attack the weakest, most vulnerable points. Similarly, one is likely to stop at the first reply and be comforted, rather than further criticizing the reply. A better title than “Avoiding Your Belief’s Real Weak Points” would be “Not Spontaneously Thinking About Your Belief’s Most Painful Weaknesses.” More than anything, the grip of religion is sustained by people just-not-thinking-about the real weak points of their religion. I don’t think this is a matter of training, but a matter of instinct. 
People don’t think about the real weak points of their beliefs for the same reason they don’t touch an oven’s red-hot burners; it’s painful. To do better: When you’re doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don’t rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind. Punch yourself in the solar plexus. Stick a knife in your heart, and wiggle to widen the hole. In the face of the pain, rehearse only this:1 What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away. And because it’s true, it is what is there to be interacted with. Anything untrue isn’t there to be lived. People can stand what is true, for they are already enduring it. 1Eugene T. Gendlin, Focusing (Bantam Books, 1982)."
A user from LessWrong is responsible for the text you see above
What are some reasons why people may like or agree with what this person is saying?
People may agree with or like this perspective for several reasons:
1. Honesty and Openness: The author encourages honest introspection and open questioning of one's beliefs. This resonates with people who value critical thinking and self-awareness.
2. Relatability: Those who have experienced doubts about their own religious beliefs may find the author's personal experiences and reflections relatable.
3. Encouragement of Skepticism: The author promotes skepticism and the questioning of established beliefs, which may appeal to those who value intellectual curiosity and independent thinking.
4. Exploration of Cognitive Dissonance: The author's exploration of cognitive dissonance—the discomfort experienced when holding conflicting beliefs—may resonate with those who have experienced this phenomenon themselves.
5. Critique of Selective Reasoning: The author criticizes the selective reasoning often used in religious arguments, which may appeal to those who have noticed and are bothered by this tendency.
6. Emphasis on Painful Truth: The author's emphasis on confronting painful truths, rather than avoiding them, may resonate with those who believe in the importance of facing reality, no matter how uncomfortable it may be.
7. Personal Narrative: The author's personal narrative and emotional honesty may engage readers and make the philosophical discussion more accessible and compelling.
8. Acknowledgement of Complexity: The author acknowledges the complexity of religious beliefs and the difficulty of questioning them, which may resonate with those who have grappled with these issues.
Can you elaborate on the concept of cognitive dissonance and its role in questioning established beliefs, providing examples from other belief systems apart from religion?
Cognitive dissonance is a psychological theory proposed by Leon Festinger in 1957. It refers to the mental discomfort or tension that a person experiences when they hold two or more contradictory beliefs, values, or attitudes simultaneously, or when their behavior contradicts their beliefs or values. This discomfort often motivates people to change their beliefs or behaviors to restore consistency and reduce the dissonance.
Cognitive dissonance plays a significant role in questioning established beliefs, not just in religion, but in various domains of life. Here are a few examples:
1. Politics: A person might strongly identify with a particular political party, but they may also disagree with certain policies or actions of that party. For instance, a committed environmentalist might find themselves in a state of cognitive dissonance if their preferred political party supports policies that harm the environment. This dissonance might lead them to question their political allegiance or to rationalize the party's actions.
2. Health and Lifestyle: Consider a person who values health and fitness but also enjoys smoking. The contradiction between their behavior (smoking) and their belief in leading a healthy lifestyle can cause cognitive dissonance. This discomfort might motivate them to quit smoking or, alternatively, to downplay the health risks associated with smoking.
3. Ethics and Business: A business executive might believe in the importance of ethical conduct but find themselves in a company that engages in unethical practices for profit. The dissonance between their personal values and the company's actions might lead them to question their role in the company or to justify the company's behavior.
4. Education: A student who values academic integrity might find themselves considering cheating to get a good grade. The dissonance between their values and their contemplated action can cause discomfort, leading them to either abandon the idea of cheating or to rationalize it (e.g., "everyone else is doing it").
In all these cases, cognitive dissonance can act as a catalyst for questioning established beliefs and behaviors, potentially leading to significant changes in attitudes or actions. However, it's important to note that cognitive dissonance doesn't always lead to positive changes; sometimes, it can result in rationalizations that reinforce harmful or inaccurate beliefs and behaviors.
Discuss the role of rationalization in cognitive dissonance, providing examples of how it can reinforce harmful or inaccurate beliefs and behaviors.
Rationalization is a defense mechanism that involves explaining an unacceptable behavior or feeling in a rational or logical manner, avoiding the true reasons for the behavior. It plays a significant role in cognitive dissonance as it allows individuals to justify or make sense out of contradictory beliefs, actions, or feelings, thereby reducing the discomfort associated with the dissonance.
Rationalization can, however, reinforce harmful or inaccurate beliefs and behaviors. Here are a few examples:
1. Substance Abuse: A person struggling with substance abuse might rationalize their behavior by saying things like "I can quit anytime I want," or "I only drink to relax." These rationalizations allow them to avoid confronting the reality of their addiction, reinforcing the harmful behavior.
2. Unhealthy Relationships: Someone in an abusive relationship might rationalize their partner's behavior by thinking, "They only act like this when they're stressed," or "They really love me, they just have a hard time showing it." These rationalizations can perpetuate the cycle of abuse by preventing the individual from recognizing the situation for what it truly is.
3. Prejudice and Discrimination: People who hold prejudiced beliefs often rationalize their attitudes and actions. For instance, someone might justify their discriminatory behavior towards a particular racial or ethnic group by claiming that "they're all the same" or "it's just how I was raised." These rationalizations serve to reinforce harmful stereotypes and discriminatory behavior.
4. Procrastination: A chronic procrastinator might rationalize their behavior by saying, "I work better under pressure," or "I'll have more time to do this tomorrow." These rationalizations allow them to avoid the discomfort of acknowledging their procrastination, which in turn reinforces the behavior.
In all these cases, rationalization serves to reduce cognitive dissonance by providing seemingly logical explanations for contradictory or unacceptable behaviors or beliefs. However, by doing so, it often reinforces harmful or inaccurate beliefs and behaviors, preventing individuals from confronting and addressing the issues at hand.
Considering the potential harmful effects of rationalization in cognitive dissonance, how might an individual or society encourage healthier ways of resolving dissonance, particularly in the examples provided? Discuss the psychological strategies and societal interventions that could be employed.
Resolving cognitive dissonance in a healthy manner often involves changing one's beliefs or behaviors to achieve consistency, rather than resorting to rationalization. Here are some psychological strategies and societal interventions that can be employed:
1. Substance Abuse: Psychologically, cognitive-behavioral therapy (CBT) can be effective in addressing substance abuse. It helps individuals recognize the situations in which they are most likely to use substances, avoid these situations, and cope more effectively with a range of problems and problematic behaviors associated with substance abuse. Societally, creating more awareness about the dangers of substance abuse, providing accessible treatment and support groups, and reducing the stigma associated with seeking help can encourage healthier resolutions to the dissonance.
2. Unhealthy Relationships: Individual therapy can help people recognize the signs of an abusive relationship and build the self-esteem needed to leave. Group therapy can also provide a supportive environment to share experiences and learn from others. Societal interventions could include public awareness campaigns about what constitutes a healthy relationship, legal protections for victims of abuse, and easily accessible resources and support for those trying to leave abusive relationships.
3. Prejudice and Discrimination: Psychologically, interventions like perspective-taking and empathy exercises can help reduce prejudice by encouraging individuals to see the world from others' viewpoints. Societally, policies that promote diversity and inclusion, anti-discrimination laws, and educational programs about the harms of prejudice can help resolve the dissonance in a healthier way.
4. Procrastination: Techniques like time management training, goal setting, and mindfulness can help individuals address procrastination. On a societal level, educational institutions and workplaces can provide resources and workshops on these techniques. Additionally, fostering an environment that encourages work-life balance and reduces unnecessary pressure can help alleviate procrastination.
In all cases, promoting open dialogue about these issues, reducing stigma, and providing resources for help are crucial societal interventions. Encouraging individuals to confront their cognitive dissonance directly, rather than resorting to rationalization, can lead to healthier beliefs and behaviors. |
c99d5165-5f19-46fa-8a3d-6ce313d4ea90 | trentmkelly/LessWrong-43k | LessWrong | Help: Which concepts are controversial on LW
I'm eager to improve the list References & Resources for LessWrong. I recently introduced a new label with the somewhat playful name Memetic Hazard. It is meant to mark resources that include ideas which might be controversial, bogus or which are works of fiction and therefore shouldn't be taken at face value.
I should explain that the reason that some controversial concepts are listed in the first place is that I felt that I frequently encountered those concepts in some rather fanciful discussions and posts. Those posts and discussions attract attention as they are some of the more exciting and fictional content on LW. I had to look them up myself once and want to give new readers a companion guide to learn about the very concepts and their status within the community.
I might also turn the Key Concepts section into just Concepts with a Controversial subcategory.
The trigger for this discussion post was a recent comment by rwallace:
> I thought quantum suicide is not controversial since MWI is obviously correct?
>
> I agree MWI is solid, I'm not suggesting that be flagged. But it does not in any way imply quantum suicide; the latter is somewhere between fringe and crackpot, and a proven memetic hazard with at least one recorded death to its credit.
>
> And the AI section? Well, the list is supposed to reflect the opinions hold in the LW community, especially by EY and the SIAI. I'm trying my best to do so and by that standard, how controversial is AI going FOOM etc.?
>
> Well, AI go FOOM etc is again somewhere in the area between fringe and crackpot, as judged by people who actually know about the subject. If the list were specifically supposed to represent the opinions of the SIAI, then it would belong on the SIAI website, not on LW.
So my question, are AI going FOOM and Quantum suicide considered controversial concepts in this community? And should any other content on the list potentially be marked controversial?
Thank you! |
e4bfd528-a0ee-4bc3-be7c-c733b6a5af0d | trentmkelly/LessWrong-43k | LessWrong | AI Safety at the Frontier: Paper Highlights, May '25
tl;dr
Paper of the month:
Models can detect when they're being evaluated with high accuracy, and potentially undermine safety assessments by behaving differently during testing versus deployment.
Research highlights:
* LitmusValues measures how AI models trade off different values, for example prioritizing Privacy while deprioritizing Creativity.
* AgentMisalignment measures the propensity of AI systems for misaligned behaviors, with more capable models exhibiting tending to be more misaligned.
* RL-trained frontier models engage in reward hacking 30% of the time on some tasks, acknowledging they're cheating but persisting even when asked not to.
* Chain-of-thought monitoring improves detection of subtle sabotage but can be deceived by plausible rationalizations, suggesting a hybrid approach works best.
* Training with lie detectors can promote honesty or teach evasion depending on detector accuracy and RL algorithm choice.
* Exploring the landscape of refusals in LLMs unveils both safety boundaries and political censorship.
⭐Paper of the month⭐
Large Language Models Often Know When They Are Being Evaluated
Read the paper [MATS, Apollo]
Process for measuring whether models can detect whether they’re in an evaluation and what the evaluation’s purpose is.
A fundamental assumption underlying AI safety evaluations is that models behave the same way during testing as they do in deployment. If models can detect when they're being evaluated, this assumption breaks down. Models could strategically modify their behavior during evaluations - appearing safer than they actually are - while exhibiting different behaviors in real-world deployment. This evaluation awareness could enable alignment faking, where models pretend to be aligned during testing to avoid modification while pursuing different objectives once deployed.
Our paper of the month introduces the first systematic benchmark for measuring evaluation awareness in language models. The researchers constru |
bb3be7f3-1b9d-449e-8e0b-8b1db34bb804 | trentmkelly/LessWrong-43k | LessWrong | Xi Jinping's readout after an AI "study session" [ChinaTalk Linkpost]
Substack link here
TL, DR:; Xi Jinping listens to a lecture about AI and publishes his "study notes/takeaways." From the start of the post (emphasis mine)
> On April 25, observers of China’s AI scene got an important new statement of Xi Jinping’s views on AI in the form of remarks concluding a Politburo “study session” on AI led by Xi’an Jiaotong University professor Zheng Nanning. Couched in the turgid language of Partyspeak, the readout nevertheless merits close attention as one of precious few utterances direct from the General Secretary himself on AI. To read this new tea leaf, we need to understand some background on study sessions in general, and this one in particular.
I thought this was a phenomenal post. It actually tries to breakdown CCP's partyspeak for the layperson. For example, see this chart below comparing the readout from the 2025 study session (column 2) and the 2018 study session (column 3).
I would recommend people interested in China & AI to read this post. |
e53c7f6a-8331-46d3-8345-38542f3954a7 | StampyAI/alignment-research-dataset/arxiv | Arxiv | The Role of Cooperation in Responsible AI Development
Introduction
------------
Machine learning (ML) is used to develop increasingly capable systems targeted at tasks like voice recognition, fraud detection, and the automation of vehicles. These systems are sometimes referred to as narrow artificial intelligence (AI) systems. Some companies are also using machine learning techniques to try to develop more general systems that can learn effectively across a variety of domains rather than in a single target domain.
Although there is a great deal of uncertainty about the development path of future AI systems—whether they will remain specialized or grow increasingly general, for example—many agree that if the current rate of progress in these domains continues then it is likely that advanced artificial intelligence systems will have an increasingly large impact on society.
This paper focuses on the private development of AI systems that could have significant expected social or economic impact, and the incentives AI companies have to develop these systems responsibly. Responsible development involves ensuring that AI systems are safe, secure, and socially beneficial.
In most industries, private companies have incentives to invest in developing their products responsibly. These include market incentives, liability laws, and regulation.
We argue that AI companies have the same incentives to develop AI systems responsibly, although they appear to be weaker than they are in other industries.
Competition between AI companies could decrease the incentives of each company to develop responsibly by increasing their incentives to develop faster. As a result, if AI companies would prefer to develop AI systems with risk levels that are closer to what is socially optimal---as we believe many do---responsible AI development can be seen as a collective action problem.[^1]

[^1]: AI research companies increasingly have teams dedicated to the safe and ethical development of technology and many large technology companies participate in voluntary efforts to articulate and establish principles and guidelines, and in some cases call for government regulation, to address AI-related risks.
We identify five key factors that make it more likely that companies will be able to overcome this collective action problem and cooperate—develop AI responsibly with the understanding that others will do likewise. These factors are: high trust between developers (High Trust), high shared gains from mutual cooperation (Shared Upside), limited exposure to potential losses in the event of unreciprocated cooperation (Low Exposure), limited gains from not reciprocating the cooperation of others (Low Advantage), and high shared losses from mutual defection (Shared Downside).
Using these five factors, we identify four strategies that AI companies and other relevant parties could use to increase the prospects for cooperation around responsible AI development. These include correcting harmful misconceptions about AI development, collaborating on shared research and engineering challenges, opening up more aspects of AI development to appropriate oversight, and incentivizing greater adherence to ethical and safety standards. This list is not intended to be exhaustive, but to show that it is possible to take useful steps towards more responsible AI development.
The paper is composed of three sections. In section 1, we outline responsible AI development and its associated costs and benefits. In section 2, we show that competitive pressures can generate incentives for AI companies to invest less in responsible development than they would in the absence of competition, and outline the five factors that can help solve such collective action problems. In section 3, we outline the strategies that can help companies realize the gains from cooperation. We close with some questions for further research.
1 The benefits and costs of responsible AI development
-------------------------------------------------------
AI systems have the ability to harm or create value for the companies that develop them, the people that use them, and members of the public who are affected by their use.
In order to have high expected value for users and society, AI systems must be safe—they must reliably work as intended—and secure—they must have limited potential for misuse or subversion. AI systems should also not introduce what Zwetsloot and Dafoe ([2019](#bib.bib127)) call ‘‘structural risks’’, which involve shaping the broader environment in subtle but harmful ways.222Zwetsloot and Dafoe ([2019](#bib.bib127)) argue that the “the accident-misuse dichotomy obscures how technologies, including AI, often create risk by shaping the environment and incentives”. We restrict accident risks to technical accidents and misuse risks to direct misapplications of a system. ‘Structural risks’, as we use the term, are intended to capture the broader impact of AI systems on society and social institutions. The greater the harm that can result from safety failures, misuse, or structural risks, the more important it is that the system is safe and beneficial in a wide range of possible conditions (Dunn, [2003](#bib.bib47)). This requires the responsible development of AI.
### 1.1 What is responsible AI development?
AI systems are increasingly used to accomplish a wide range of tasks, some of which are critical to users’ health and wellbeing. As the range of such tasks grows, the potential for accidents and misuse also grows, raising serious safety and security concerns (Amodei, Olah, et al., [2016](#bib.bib3); Brundage, Avin, et al., [2018](#bib.bib16)). Harmful scenarios associated with insufficiently cautious AI development have already surfaced with, for example, biases learned from large datasets distorting decisions in credit markets and the criminal justice system, facial recognition technologies disrupting established expectations of privacy and autonomy, and auto-pilot functions in some automobiles causing new types of driving risk (while reducing others). Longer term, larger scale scenarios include dangers such as inadvertent escalation of military conflict involving autonomous weapon systems or widespread job displacement.
Responsible AI development involves taking steps to ensure that AI systems have an acceptably low risk of harming their users or society and, ideally, to increase their likelihood of being socially beneficial. This involves testing the safety and security of systems during development, evaluating the potential social impact of the systems prior to release, being willing to abandon research projects that fail to meet a high bar of safety, and being willing to delay the release of a system until it has been established that it does not pose a risk to consumers or the public. Responsible AI development comes in degrees but it will be useful to treat it as a binary concept for the purposes of this paper. We will say that an AI system has been developed responsibly if the risks of it causing harms are at levels most people would consider tolerable, taking into account their severity, and that the amount of evidence grounding these risk estimates would also be considered acceptable.[^3]

[^3]: This will generally mean that if an AI system is developed responsibly, the risk of irreversible catastrophic harm from that system—whether through accident, misuse, or negative social impact—must be very low. This is consistent with what Sunstein ([2005](#bib.bib121)) calls the 'Irreversible Harm Precautionary Principle'.
Responsible AI development involves work on safety, security, and the structural risks associated with AI systems. Work on the safety of AI aims to mitigate accident risks (Amodei, Olah, et al., [2016](#bib.bib3)) and ensure that AI systems function as intended (Ortega, Maini, et al., [2018](#bib.bib99)) and behave in ways that people want (Irving et al., [2018](#bib.bib73)). Work on the security of AI aims to prevent AI systems from being attacked, co-opted, or misused by bad actors (Brundage, Avin, et al., [2018](#bib.bib16)).[^4] Work evaluating the structural impact of AI aims to identify and mitigate both the immediate and long term structural risks that AI systems pose to society: risks that don't quite fit under narrow definitions of accident and misuse. These include joblessness, military conflict, and threats to political and social institutions.[^5]

[^4]: Mitigating misuse risks is sometimes included under AI safety, broadly construed (Christiano, [2016](#bib.bib24)).

[^5]: The literature on the societal impact of AI is vast. See Cummings ([2017](#bib.bib33)) on AI and warfare, for example. For a broader overview see Dafoe ([2018](#bib.bib35)).
### 1.2 The cost of responsible AI development
It is likely that responsible development will come at some cost to companies, and this cost may not be recouped in the long-term via increased sales or the avoidance of litigation. In order to build AI systems responsibly, companies will likely need to invest resources into data collection and curation, system testing, research into the possible social impacts of their system, and, in some cases, technical research to guarantee that the system is reliably safe. In general, the safer that a company wants a product to be, the more constraints there are on the kind of product the company can build and the more resources it will need to invest in research and testing during and after its development.
If the additional resources invested in ensuring that an AI system is safe and beneficial could have been put towards developing an AI system with fewer constraints more quickly, we should expect responsible AI development to require more time and money than incautious AI development. This means that responsible development is particularly costly to companies if the value of being the first to develop and deploy a given type of AI system is high (even if the first system developed and deployed is not demonstrably safe and beneficial).
There are generally several advantages that are conferred on the first company to develop a given technology (Lieberman and Montgomery, [1988](#bib.bib89)). If innovations can be patented or kept secret, the company can gain a larger share of the market by continuing to produce a superior product and by creating switching costs for users. Being a first-mover also allows the company to acquire scarce resources ahead of competitors.
If hardware, data, or research talent become scarce, for example, then gaining access to them early confers an advantage.[^6]

[^6]: As Klepper ([1996](#bib.bib77)) notes, larger first-movers can also spread their prior investment in R&D over a larger number of applications.
And if late movers are not able to catch up quickly then first-mover advantages will be greater.
In the context of AI development, having a lead in the development of a certain class of AI systems could confer a first mover advantage. This effect would be especially pronounced in the case of discontinuous changes in AI capabilities,[^7] but such a discontinuity is not necessary in order for a first mover advantage to occur.

[^7]: The possibility of discontinuous progress in AI is discussed by Good ([1966](#bib.bib58)), Chalmers ([2009](#bib.bib21)), Yudkowsky ([2013](#bib.bib126)), Shanahan ([2015](#bib.bib115)) and Bostrom ([2017b](#bib.bib13)). AI Impacts ([2018](#bib.bib2)) provide a critical overview of the arguments for discontinuity and Christiano ([2018](#bib.bib26)) presents arguments against the claim.
Responsible development may therefore be costly both in terms of immediate resources required, and in the potential loss of a first-mover advantage. Other potential costs of responsible AI development include performance costs and a loss of revenue from not building certain lucrative AI systems on the grounds of safety, security, or impact evaluation. An example of a performance cost is imposing a limit on the speed that self-driving vehicles can travel in order to make them safer. An example of revenue loss is refusing to build a certain kind of facial recognition system because it may undermine basic civil liberties (Smith, [2018b](#bib.bib119)).
AI companies may not strongly value being the first to develop a particular AI system because first-mover advantages do not always exist. Indeed, there are often advantages to entering a market after the front-runner. These include being able to free-ride on the R&D of the front-runner, to act on more information about the relevant market, to act under more regulatory certainty, and having more flexible assets and structures that let a company respond more effectively to changes in the environment (Gilbert and Birnbaum-More, [1996](#bib.bib57)). These can outweigh the advantages of being the first to enter that same market. And it has been argued that late mover advantages often do outweigh first mover advantages (Markides and Geroski, [2004](#bib.bib91); Querbes and Frenken, [2017](#bib.bib105)). We therefore acknowledge that the assumption that there will be a first-mover advantage in AI development may not be true. If a first-mover advantage in AI is weak or non-existent then companies are less likely to engage in a race to the bottom on safety since speed is of lower value. Instead of offering predictions, this paper should be thought of as an analysis of more pessimistic scenarios that involve at least a moderate first mover advantage.
Much of the discussion of AI development races assumes that they have a definitive endpoint. Although some have hypothesized that if AI progress is discontinuous or sufficiently rapid then it could essentially have a definitive endpoint, the case for this remains speculative.[^8] It is therefore important to note that AI development may take the form of a perpetual R&D race: a race to stay technologically ahead of competitors rather than a race to reach some particular technological endpoint (Aoki, [1991](#bib.bib6); Breitmoser et al., [2010](#bib.bib14)).

[^8]: If AI development is extremely rapid then gaps between each company would likely increase over time. This means that a company that is ahead of others may at some point be ahead of them by a great deal in strategically important areas, and could use this to undermine their competitors (Bostrom, [2017b](#bib.bib13), pp. 91-104).
If this is the case then AI companies would still have an incentive to speed up development in order to stay ahead of others, especially if the gap between companies was small.
The present analysis is applicable to perpetual races in which there is at least a moderate first mover advantage, several companies are competing to stay ahead, and leadership is not yet entrenched.[^9]

[^9]: Breitmoser et al. ([2010](#bib.bib14)) note that perpetual R&D races tend to collapse into leadership monopolies. The larger the gap between the front-runner and the company in second place in a perpetual race, the less of an incentive the front-runner has to trade safety for speed.
### 1.3 The benefits of responsible AI development
In the law and economics literature on product safety, it is generally accepted that market forces create incentives for companies to invest in making their products safe (Oi et al., [1973](#bib.bib96)). Suppose that companies have accurate information about how safe the products they are developing are and that consumers have access to accurate information about how safe a company's product is, either prior to release or by observing the harms caused by a product after it is released (Ben-Shahar, [1998](#bib.bib10); Chen and Hua, [2017](#bib.bib23)).[^10] If consumers have a preference for safer products and respond rationally to this preference, they will not buy products that are insufficiently safe, or will pay less for them than for safer alternatives (Polinsky and Shavell, [2009](#bib.bib104)). Releasing unsafe products will also result in a costly loss of reputation for companies (Daughety and Reinganum, [1995](#bib.bib39)).[^11] Finally, releasing unsafe products could result in burdensome regulation of the industry or in litigation costs. Therefore companies that are concerned about a sufficiently long time-horizon involving repeated interaction with customers, regulators, and other stakeholders that incentivize safety should internalize the value of responsible development.

[^10]: See Daughety et al. ([2013](#bib.bib40)), who make similar assumptions in their idealized model of markets.

[^11]: Rhee and Haunschild ([2006](#bib.bib107)) provide evidence that the relationship between safety failures and reputation loss may be more complex than this, however.
Market forces alone may not always incentivize companies to invest the appropriate amount into ensuring their products are safe. If consumers cannot get access to information about the safety of a product—how likely safety failures are or how costly they are—then companies have an incentive to under-invest in safety. And if companies have inaccurate information about the safety of the products they are developing, they will not invest in safety to the degree demanded by consumers. Finally, poor corporate governance can result in suboptimal decisions about risk (Cai et al., [2010](#bib.bib18)). Product liability law and safety regulation are intended to correct such market failures by providing consumers with information about products, incentivizing companies to invest more in safety, and compensating consumers that are harmed by product safety failures (Hylton, [2012](#bib.bib72); Landes and Posner, [1985](#bib.bib83)).[^12]

[^12]: See Stiglitz ([2009](#bib.bib120)) on government regulation as a response to market failures or inefficiencies, but note that actual motivations for government regulation are typically more complicated (Henson and Caswell, [1999](#bib.bib68)). Calabresi ([1970](#bib.bib19)) provides a comprehensive overview of the role of law in the minimization of costs from safety failures. The relationship between product liability law and safety regulations—in particular, whether it is efficient to use them jointly—is a matter of some debate (Shavell, [1984](#bib.bib116); Kolstad et al., [1990](#bib.bib78)).
We may expect companies to under-invest in safety if the costs to consumers don't result in commensurate costs for the company; either via a reduction in revenue, reputation loss, fines from regulators, or successful litigation by consumers. Safety failures can also affect those who do not consume the product, however. Consider a 2018 recall of over 8,000 Volkswagen vehicles potentially affected by a brake caliper issue that could result in increased stopping distances or loss of vehicle control (Consumer Reports, [2018](#bib.bib31)). A safety failure resulting from this could harm not only the vehicle's occupants but also pedestrians and other drivers.[^13] Harms that safety failures inflict on non-consumers are negative externalities, and benefits that safer products produce for non-consumers are positive externalities. We should anticipate companies under-investing in reducing negative externalities and increasing positive externalities relative to their social value, since the costs and benefits this produces for society don't result in commensurate costs and benefits for the company (Dahlman, [1979](#bib.bib36)).

[^13]: There is currently uncertainty about who should be held liable for the harms that the safety failures of autonomous systems inflict on the public (Schellekens, [2015](#bib.bib110); The Atlantic, [2018](#bib.bib122)).
To give a concrete example, consider facial recognition technology. Microsoft has argued that this technology could be used in ways that many would consider harmful: to violate individuals' privacy or suppress their political speech, for example (Smith, [2018a](#bib.bib118), [b](#bib.bib119)). Even if companies would prefer to build facial recognition systems that cannot be misused, either to avoid causing harm or to avoid the reputation costs of this harm, the cost of developing safeguards may not outweigh their benefits if companies cannot be held liable for these harms and there is no regulation preventing misuse. For this reason, Microsoft has called for regulation that would require that companies invest in measures that reduce the risks from facial recognition technology, and that could also mitigate potential misuse of the technology by commercial entities or by governments (Smith, [2018a](#bib.bib118)).
The discussion thus far treats companies as though they were motivated only by profit, i.e. they only care about things like reputation and product safety insofar as they are a means to make more profit or avoid losses. This view is common in the literature on corporate social responsibility (Campbell, [2007](#bib.bib20); Devinney, [2009](#bib.bib44)) but it is clearly an abstraction. Companies are run by, invested in, and composed of humans that care about the impact their products will have on the world and on other people. Employees at technology companies have already shown that they care a great deal about the social implications of the systems they are building (Minsberg, [2019](#bib.bib92)).
The things that motivate AI companies other than profits, such as benefiting people rather than harming them, will generally push even more in favor of responsible development: they will rarely push against it. Assuming that companies are motivated solely by profit therefore lets us analyze a kind of ‘worst case scenario’ for responsible development. We will therefore often treat companies as though they were driven solely by profit, even though we do not find this plausible. It is important that the reader bear this in mind, since treating companies as profit-driven entities can be self-fulfilling, and can therefore contribute to the very problems we are attempting to solve.
### 1.4 Are existing incentives for responsible AI development enough?
If markets are functioning well and companies and consumers have perfect information about the expected harm of a product, companies should invest the socially optimal amount into product safety (Daughety et al., [2018](#bib.bib41)). In real-world scenarios in which markets may not function perfectly and information asymmetries exist, incentives for companies to invest sufficiently in product safety typically come from three sources: market forces, liability law, and industry or government regulation.[^14] These three sources of incentives may not provide strong enough incentives for AI companies to engage in responsible AI development, however. We will briefly survey some reasons for this.

[^14]: Other mechanisms include no fault liability systems like mandatory insurance and increasing the information available to consumers (Cornell et al., [1976](#bib.bib32)).
#### 1.4.1 Limited consumer information
Consumers of AI systems include individuals, private companies, and public institutions. Although different consumers will have access to different levels of information about AI systems, information about the expected harm of AI systems is likely to be quite limited on average. As cutting-edge AI systems become more complex, it will be difficult for consumers not involved in the development of those systems to get accurate information about how safe the systems are. Consumers cannot directly evaluate the safety of aviation software, for example, and will face similar difficulties when it comes to directly evaluating the safety of complex machine learning models. This is compounded by the fact that it is notoriously difficult to explain the decisions made by neural networks (Doshi-Velez and Kim, [2017](#bib.bib46); Olah et al., [2018](#bib.bib97)). If consumers cannot assess how risky a given AI system is, they cannot adjust their willingness to pay for it accordingly. They are also less able to identify and exert pressure on AI companies that are investing too little in safety (Anton et al., [2004](#bib.bib5)).
Consumers could get information about how safe an AI system is by tracking safety failures after its release, but such a 'wait and see' strategy could leave both consumers and the public vulnerable to harmful safety failures. This is of particular concern if those safety failures could be irreversible or catastrophic. And the probability of irreversible or catastrophic safety failures is likely to increase as AI systems become more capable and general, since more advanced systems are more likely to be relied upon across a wider range of domains and in domains where failures are more harmful.[^15]

[^15]: Such safety failures could occur if AI systems have some critical function like controlling national power grids or nuclear weapons systems, or if they can be used to undermine these systems. The more consequential a given technology is, the higher the potential cost of releasing an insufficiently safe version of that technology to both the company and to society. But it is worth noting that if the expected cost of catastrophic safety failures is capped by a company's ability to pay then we might expect companies to under-weight these tail-end risks. For a taxonomy of AI risks, see Yampolskiy ([2015](#bib.bib125)).
#### 1.4.2 Limited company and regulator information
Measuring the safety, security, and social impact of AI systems may turn out to be extremely difficult even for those who understand the technical details of the system. Neural networks are difficult to interpret and as such, failures may be difficult to predict.
If AI companies are over-confident that their system is not risky, they may under-invest in important risk-reducing measures during development or release a system that causes unintended harm.
If regulators cannot assess how risky a given AI system is, they may be overly stringent or overly liberal when using regulatory controls (Shavell, [1984](#bib.bib116)). The ability to get accurate information about AI systems therefore seems to be crucial for ex ante safety measures.[^16]

[^16]: Leike et al. ([2017](#bib.bib86)) introduce simple environments for evaluating the safety of AI agents, and note that future versions of these environments could be used to benchmark the safety performance of AI agents.
Our current capacities to identify and measure the expected harms of particular AI systems are extremely limited. We still do not fully understand the decisions made by complex machine learning models (Olah et al., [2018](#bib.bib97); Hohman et al., [2018](#bib.bib70)) and the high-dimensionality of the inputs to AI systems makes it such that exhaustive enumeration of all possible inputs and outputs is typically infeasible. There may therefore be little consensus about whether a particular system is likely to be unsafe, insecure, or socially harmful at present. Given this, it is likely that additional capacity will need to be invested by companies or regulators or both in order to decrease these information asymmetries.
#### 1.4.3 Negative externalities from AI
Harms caused by AI systems are likely to affect third parties. Biases in algorithmic pre-trial risk assessment are more likely to harm those accused of crimes than those that purchase the tools,[^17] those that benefit from AI automation may be quite distinct from the people who are displaced by automation,[^18] and a major AI disaster—such as an AI system with a faulty reward function[^19] being integrated into a critical system—could affect a large portion of society that is distinct from the AI company and its consumers. AI also has the potential to be a general purpose technology—a technology that radically affects many sectors of the economy—and if this is the case we should expect its impact to be systemic (Brynjolfsson et al., [2018](#bib.bib17); Cockburn et al., [2018](#bib.bib28)).

[^17]: See Tsukayama and Williams ([2018](#bib.bib124)) on how bias in ML systems could harm those in the California criminal justice system. Pleiss et al. ([2017](#bib.bib103)) demonstrates the unique difficulties of designing bias-free ML systems.

[^18]: Segal ([2018](#bib.bib114)) notes that the jobs that have declined as a result of automation so far are intermediate-skill jobs like farming. Although it displaces some workers, automation has positive effects like increased productivity (Acemoglu and Restrepo, [2018](#bib.bib1)). The overall effect that AI automation will have on the labor force is unclear.

[^19]: See Krakovna ([2018](#bib.bib80)) and Clark and Amodei ([2016](#bib.bib27)) for examples of faulty reward functions in ML systems.
The harms from AI systems may also be difficult to internalize. For example, the social harms that result from an increased use of AI systems—such as reduced trust in online sources—could be complex and diffuse, and it may be difficult to hold any one company strictly liable for them. If the harm is sufficiently large, it may also be too large for a company or insurer to cover all losses (see footnote 15). Finally, AI systems could create negative externalities for future generations that are not in a position to penalize companies or prevent them from occurring (Lazear, [1983](#bib.bib85)). We should expect AI companies to under-invest in measures that could prevent these kinds of negative externalities.[^20]

[^20]: Safety failures that affect a large portion of the population will not be treated as externalities by companies if they harm the company or its consumers, though they could still be given insufficient weight.
#### 1.4.4 The difficulty of constructing effective AI regulation
There is currently little in the way of AI-targeted regulation, including government regulation, industry self-regulation, international standards, and clarity on how existing laws will be applied to AI (see footnote 13). Well-designed regulatory mechanisms can incentivize companies to invest appropriate resources in safety, security, and impact evaluation when market failures or coordination failures have weakened the other incentives to do so. Poorly-designed regulation can be harmful rather than helpful, however. Such regulation can discourage innovation (Heyes, [2009](#bib.bib69)) and even increase risks to the public (Latin, [1988](#bib.bib84)).
AI regulation seems particularly tricky to get right, as it would require a detailed understanding of the technology on the part of regulators.[^21]

[^21]: Hadfield ([2017](#bib.bib63)) and Hadfield and Clark ([2019](#bib.bib62)) outline a regulatory framework for AI that could overcome barriers like information asymmetries and slow response times: key problems for the regulation of new technology.
The fact that private AI companies can generally relocate easily also means that any attempt to regulate AI nationally could result in international regulatory competition rather than an increase in responsible development.[^22] Regulation that is reactive and slow may also be insufficient to deal with the challenges raised by AI systems. AI systems can operate much faster than humans, which can lead to what Johnson et al. ([2013](#bib.bib74)) call 'ultrafast extreme events' (UEEs) such as flash crashes caused by algorithmic trading.[^23] Anticipating, preventing, and responding to catastrophic AI-caused UEEs may present a key challenge in AI safety and policy.

[^22]: Esty and Geradin ([2001](#bib.bib51)) offer an overview of different perspectives on regulatory competition, while Genschel and Plumper ([1997](#bib.bib55)) note that regulatory competition and international co-operation can actually increase levels of regulation. Erdélyi and Goldsmith ([2018](#bib.bib50)) argue that an international AI regulatory agency should be established, but on the grounds that AI has externalities that cross national boundaries.

[^23]: For more on this problem, see Muehlhauser and Hibbard ([2014](#bib.bib94)).
#### 1.4.5 The potential for rapid AI development
Some have hypothesized that progress in AI development will be discontinuous (see footnote 7).
On this view, there are some types of AI systems---typically advanced 'general' AI systems that are capable of learning effectively across a wide variety of domains---that, if developed, would represent a sudden shift from everything that came before them, and could produce the equivalent of many years of prior progress on some relevant metric.[^24] If AI progress is discontinuous then developing an AI system that constitutes a sudden leap forward could give a company a large advantage over others, since the next best system would be years behind it in terms of prior progress in the field.[^25]

[^24]: See Ehrnberg ([1995](#bib.bib48)) on definitions of technological discontinuities. The AI discontinuity hypothesis should not be confused with the claim that there will be rapid AI development in the future—progress in AI development could be continuous but extremely rapid, e.g. hyperbolic (Christiano, [2017](#bib.bib25))—but that there will be a system that represents a sudden leap forward in AI capabilities. It may be possible to achieve a decisive advantage over competitors if progress is rapid but not discontinuous.

[^25]: Bostrom ([2017b](#bib.bib13)) claims that such an AI could give a company a 'decisive strategic advantage': 'a level of technological and other advantages sufficient to enable it to achieve complete world domination' (p.96, ibid.). But the concerns we raise here apply even if the advantage is extreme but not decisive in this sense. Consider the advantage that a company today would gain if they managed to develop something over a decade ahead of current systems used for cyber offense and defense, for example.
If progress in AI development is discontinuous then market forces and liability law may do little to encourage safe development.[^26] The value of developing a system that gives a company a huge advantage—that could be used to undermine competition or seize resources, for example—would be largely divorced from the process of getting market feedback. And a company can only be held liable for accidents if these accidents are not catastrophic and the existing legal framework can both keep up with the rapidity of technological progress and enforce judgments against companies.

[^26]: This may be true even if progress in AI development is continuous but rapid. Even if no single company has a profound advantage over others, mechanisms like regulation and liability could be too slow to catch up with the rate of technological progress. It is worth noting that if AI progress takes this shape then responsible AI development may be more like a one-shot game than an iterated game, which reduces developers' incentives to cooperate on responsible development for reasons that we discuss in the next section.
Therefore if AI progress is discontinuous, ex ante safety measures like industry self-regulation or international oversight may be more effective than ex post safety measures like market response and liability.
### 1.5 Summary
Incentives to develop safe products generally come from the market, liability laws, and regulation (Rubin, [2011](#bib.bib109)), as well as from factors that motivate AI companies besides profits, such as a general desire to avoid doing harm.
For AI companies, the profit motive to develop AI responsibly is likely to come from the additional revenue generated by AI systems that are more valuable to consumers, the avoidance of reputational harm from safety failures, the avoidance of widespread harms caused by AI systems (see footnote 20), and the avoidance of tort litigation or regulatory penalties.
A key factor that can influence the cost-benefit ratio of responsible AI development that we have not discussed, however, is the competitive environment in which the AI systems in question are being developed. In the next section we will explore the impact that competition between AI companies can have on the incentives that each company has to invest or fail to invest in responsible development.
2 The need for collective action on responsible AI development
---------------------------------------------------------------
We have argued that safer, more secure, and more socially valuable AI systems will tend to have a higher market value, be less likely to cause costly accidents that the company is held liable for, and so on. This means that if a company is guaranteed to be the first to develop a system of this type, we can expect that they will invest resources to ensure that their system is safe, secure, and socially beneficial to the extent that this is incentivized by regulators, liability law, and market forces. This means the more that positive and negative externalities of AI systems have been internalized via these mechanisms, the more that companies can expect to invest in responsible development.[^27]

[^27]: If the first company could prevent future competitors from entering the market (i.e. the first company could expect to be the only company), it is likely this would reduce but not eliminate market incentives to invest in responsible development (Sheshinski, [1976](#bib.bib117)).
In this section we will argue that, even with these incentives in place, competitive pressures can cause AI companies to invest less in responsible development than they otherwise would. Responsible AI development can therefore take the form of a collective action problem. We then identify and discuss five key factors that improve the prospects for cooperation between AI companies that could find themselves in a collective action problem over responsible development.
### 2.1 How competitive pressures can lead to collective action problems
To see how the competitive environment could affect investment in responsible development, suppose that several AI companies are working on a similar type of system. If there is a large degree of substitutability between the inputs of different aspects of development, we should not expect AI companies to invest in responsible development beyond the point at which the expected marginal return is lower than the expected marginal return from investing in other areas of development. Suppose each company places less value on coming second than on coming first, less value on coming third than on coming second, and so on. These companies will likely engage in a technological race: a competition to develop a technology in which the largest reward goes to the first company (Grossman and Shapiro, [1985](#bib.bib59)).[^28] The resulting dynamics may be similar to those we would expect to see in patent races between firms.[^29]

[^28]: As we noted in the previous section, this is a non-trivial assumption that will not hold in all cases.

[^29]: Patent races have positive effects on innovation, though at the cost of duplicating efforts (Judd et al., [2012](#bib.bib75)).
There are various strategies companies could use in a "winner takes more" race: they could try to develop and maintain a strong technical lead or they could try to maintain a close position behind the technical leader, for example.[^30]

[^30]: The best strategy may depend on the competitive environment. Dasgupta and Stiglitz ([1980](#bib.bib38)) argue that monopolist companies will attempt to outspend their rivals on R&D to prevent a duopoly, while Doraszelski ([2003](#bib.bib45)) shows that there are conditions in which companies that are behind will invest to catch up.
For now, we will assume that the best strategy involves trying to develop and maintain a strong technical lead throughout the race.
Since speed is more valuable when racing against others, we should expect investment into responsible development to be lower when companies are racing against each other.[^31]

[^31]: How much lower will depend on various features of the race, such as how close it is and the value placed on each position. Note that this argument assumes that investments with even worse expected marginal returns have already been cut. It also assumes that investments in responsible development contribute less to development speed than other available investments: not that they contribute nothing to development speed.
Armstrong et al. ([2016](#bib.bib7)) point out that in an AI development race, responsible development could be prey to a "race to the bottom" dynamic.
Consider what happens if one company decides to increase their development speed by decreasing their investment in safety, security, and impact evaluation. This increases their expected ranking in the race and decreases the expected ranking of others in the race. A decrease in expected ranking gives competing AI companies an incentive to decrease their own investment in these areas in order to maintain or increase their expected ranking in the race.[^32]

[^32]: In this scenario, companies have full information about the investments made by other companies and their likelihood of winning. But this assumption is not necessary, since companies can invest in accordance with their expectation about the investments and win probabilities of other companies. Armstrong et al. ([2016](#bib.bib7)) explore scenarios in which AI companies have different levels of information about their own and others' capabilities.
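To see the structure of this dynamic concretely, consider the following toy sketch (our own illustration: the functional forms, parameter values, and function names are assumptions, not a model from Armstrong et al. or from this paper). Each firm picks a safety level in [0, 1]; investing in safety slows development, and because most accident harm falls on third parties, only a small share of it enters the firm's own payoff:

```python
# Toy two-firm race to the bottom on safety. Illustrative assumptions only:
# safety investment s in [0, 1] halves a firm's development speed at most,
# and the firm internalizes only a small share of expected accident losses.

def win_probability(s_own: float, s_rival: float) -> float:
    """Chance of finishing first, proportional to relative development speed."""
    speed_own, speed_rival = 1.0 - 0.5 * s_own, 1.0 - 0.5 * s_rival
    return speed_own / (speed_own + speed_rival)

def expected_payoff(s_own: float, s_rival: float,
                    prize: float = 10.0, private_accident_cost: float = 1.0) -> float:
    """Winner-takes-more prize, minus the firm's own share of accident risk."""
    return win_probability(s_own, s_rival) * prize - (1.0 - s_own) * private_accident_cost

def best_response(s_rival: float, grid: int = 101) -> float:
    """Safety level that maximizes expected payoff against a fixed rival."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda s: expected_payoff(s, s_rival))

s1 = s2 = 1.0  # both firms start fully cautious
for _ in range(10):
    s1, s2 = best_response(s2), best_response(s1)

print(f"equilibrium safety levels: {s1:.2f}, {s2:.2f}")                        # 0.00, 0.00
print(f"payoff if both had stayed cautious: {expected_payoff(1.0, 1.0):.2f}")  # 5.00
print(f"payoff at the equilibrium: {expected_payoff(s1, s2):.2f}")             # 4.00
```

Each unilateral cut in safety raises that firm's expected ranking, but once both firms respond in kind they hold the same relative positions while bearing more risk: under these assumed payoffs, both end up strictly worse off than under mutual caution.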
We might ask why racing to the bottom on product safety is not ubiquitous in other industries in which decreasing time-to-market is valuable, such as in the pharmaceutical industry.[^33]

[^33]: It is worth noting that similar concerns about the desire to develop quickly conflicting with risk management have been expressed in other industries that involve novel technology, such as the use of nanoparticles and nanotechnology in the food industry (Morgan, [2005](#bib.bib93); Cushen et al., [2012](#bib.bib34)).
The most plausible explanation of this difference is that the cost of safety failures has been internalized to a greater extent in more established industries via external regulation, self-regulation, liability, and market forces.
These mechanisms can jointly raise the "bottom" on product safety to a level that is generally considered acceptable by regulators and consumers.[^34]

[^34]: It could also be that the best strategies in technological races do not involve trying to develop a strong technological lead, or that there are unidentified factors that make racing to the bottom on product safety undesirable: factors that may apply equally to the development of AI systems.
In a race to the bottom on safety, competing AI companies could reduce their investment in responsible development to the point that winning the technology race---successfully developing the system they are racing to develop before others---is barely of net positive value for the winner even after all the first-mover advantages, including positive reputational effects, the ability to capture resources like data, hardware and talent, and creating switching costs for consumers, have been taken into account.[^35]

[^35]: The company with the winning system could even consider their own system to be worse than developing nothing at all absent competition, though this would only happen if they considered the release of the alternative winning system to be even worse for them than the release of their own worse-than-nothing system.
### 2.2 When competition has negative rather than positive effects
The race to the bottom on safety described above is a collective action problem: a situation in which all agents would be better off if they could all cooperate with one another, but each agent believes it is in their interest to defect rather than cooperate.[^36]

[^36]: This is a weakening of the definition that Jon Elster derives from Schelling (2006 [1978]), which states 'First, each individual derives greater benefits under conditions of universal cooperation than he does under conditions of universal noncooperation. Second, each derives more benefits if he abstains from cooperation, regardless of what others do.' (Elster, [1985](#bib.bib49), p.139). We simply replace 'regardless of what others do' with 'given what we expect others will do.' See Holzinger ([2003](#bib.bib71)) for a broader definition and taxonomy. As Heckathorn ([1989](#bib.bib67), p. 78) states, "the inclinations of individuals (that is, each actor's preferences regarding his or her own behavior) are in conflict with regulatory interests (that is, each actor's preferences regarding the behavior of others). The collective action problem arises when a group possesses a common interest, or faces a common fate."
In a race to the bottom on safety, it is in each company's interest to reduce their investment in responsible development in order to increase development speed. If all companies do this, however, there is a single equilibrium: one in which much or all of the value that could have been gained with coordination is destroyed. If each company defects, they will have a similar position in the race to the one that they would have had if they had all successfully coordinated, but they will be developing systems that are more risky than the ones they would have developed if they had all managed to successfully coordinate. In other words, the situation in which they find themselves is strictly worse than the situation in which coordination was successful.
Collective action problems between companies can have positive effects on consumers and the public. A price war is a collective action problem between companies with mostly positive effect on consumers, for example, as it results in lower prices.[^37] Antitrust law exists to maintain competition between companies that has a positive effect on consumers and to prevent collusion between companies that has a negative effect on consumers (e.g. price fixing).

[^37]: When companies engage in price wars, prices often end up close to their marginal cost of production (Bresnahan, [1987](#bib.bib15)). Prices can even be temporarily set below the marginal cost of production in order to push competitors out of the market (Guiltinan and Gundlach, [1996](#bib.bib60)), sometimes in violation of antitrust.
When there are negative effects from production that are not captured by the incentives facing producers (i.e. negative externalities), however, competition does not lead to the socially optimal outcome. If this outcome is also bad for the producers, it is a collective action problem for producers.
A race to the bottom on safety falls into this category if it results in AI systems with safety levels below what is socially optimal and below what AI companies would prefer. Pollution by companies is another example of a collective action problem between companies that has a negative effect on the public (Lévêque, [1999](#bib.bib88)).
Before discussing strategies for cooperation such as self-regulation in more depth, however, it will be useful to understand the incentives that AI companies have to abide by norms that involve mutual investment in responsible AI development. This will be the focus of the remainder of this section.
### 2.3 Incentives to cooperate in collective action problems
In an AI development race, companies "cooperate" if they maintain some acceptable level of investment in responsible development and they "defect" if they fail to maintain this level of investment, thereby acting in their own interest (hypothetically) and against the collective interest. Encouraging companies to cooperate should therefore not be confused with encouraging them to stop competing. Companies agreeing not to compete across the investment in safety dimension does not imply that they will cease to compete across the R&D dimension. Competitive dynamics that contain cooperative elements are sometimes referred to as a "coopetition".[^38]

[^38]: See Bengtsson and Kock ([2000](#bib.bib11)) and Tsai ([2002](#bib.bib123)).
If companies have incentives to prevent or mitigate collective action problems that have negative effects on consumers or the public then we should expect the companies themselves (and not just third parties like government regulators) to take steps to solve them. And companies often do attempt to cooperate to prevent or solve collective action problems of this sort. One example of a mechanism used to this end is industry self-regulation (Gunningham and Rees, [1997](#bib.bib61)).[^39] Examples of self-regulation include Responsible Care: a self-regulation program in the US chemicals industry (Gamper-Rabindran and Finger, [2013](#bib.bib54)), and the Institute of Nuclear Power Operations (INPO): an industry organization that conducts inspections and facilitates the sharing of best practices in the nuclear power industry (Davis and Wolfram, [2012](#bib.bib42); Hausman, [2014](#bib.bib66)).[^40] How successful such self-regulation is at reducing the negative effects of collective action problems varies a great deal by industry. The INPO is generally considered to be a more successful self-regulatory scheme than Responsible Care, for example (Cohen and Sundararajan, [2015](#bib.bib30), pp. 126-7). This may be because the INPO, unlike Responsible Care, has an agreement with a government regulator, the Nuclear Regulatory Commission, which can monitor the program and provide meaningful sanctions, which may be required for successful self-regulation (King and Lenox, [2000](#bib.bib76)). O'Keefe ([forthcoming 2019](#bib.bib100)) explores one possible form of antitrust-compliant self-regulation in the AI industry.

[^39]: Industry self-regulation can also be incentivized by government regulators via meta-regulation (Parker, [2007](#bib.bib102); Coglianese and Mendelson, [2010](#bib.bib29)).

[^40]: Other examples of self-regulation can be found in a variety of industries, as self-regulation is sometimes used to preempt government regulation (Lenox, [2007](#bib.bib87)).
In order to identify features that affect the degree to which it is in a company's interest to cooperate on responsible development, it will be helpful to highlight features that increase incentives to cooperate in collective action problems generally. To do this, consider the payoff matrix of a cooperate-defect game in which two agents (AI companies) can cooperate (develop responsibly) or defect (fail to develop responsibly). Here the first letter in each pair represents the expected payoff for Agent 1, and the second letter in each pair represents the payoff for Agent 2.[^41]

[^41]: We assume that these expected utilities have already factored in agents' attitudes towards risk and discuss some of the simplifications of this framework below.
| | Agent 2: Cooperate | Agent 2: Defect |
| --- | --- | --- |
| **Agent 1: Cooperate** | a1, a2 | b1, b2 |
| **Agent 1: Defect** | c1, c2 | d1, d2 |

Table 1: A Normal Form Cooperate-Defect Game
Let p be the probability that Agent 1 assigns to Agent 2 cooperating and let q be the probability that Agent 2 assigns to Agent 1 cooperating.
We assume it is rational for Agent 1 to cooperate if the expected value of cooperation (the likelihood Agent 2 will cooperate times a1 plus the likelihood Agent 2 will defect times b1) is greater than the expected value of defection (the likelihood Agent 2 will cooperate times c1 plus the likelihood Agent 2 will defect times d1). We assume the same is true of Agent 2.[^42] These two agents are in a collective action problem if it is irrational for both agents to cooperate, but a1 > d1 and a2 > d2.

[^42]: In other words, it is rational for Agent 1 to cooperate if p × a1 + (1 − p) × b1 > p × c1 + (1 − p) × d1, and it is rational for Agent 2 to cooperate if q × a2 + (1 − q) × c2 > q × b2 + (1 − q) × d2. If both sides of these inequalities are equal then defecting and cooperating are both rationally permissible for the agent. Note that if a1 + a2 > d1 + d2 but a1 ≯ d1 or a2 ≯ d2 (i.e. defecting is rational for at least one agent but mutual cooperation creates more total value for both agents than mutual defection does) then the likelihood of cooperation increases if redistribution is possible, i.e. if the agents can bargain towards a solution.
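The decision rule and the collective action condition can be written down directly. The following is a minimal sketch (the function names and the numeric payoffs in the example are ours, for illustration only):

```python
def cooperation_is_rational(a1: float, b1: float, c1: float, d1: float,
                            p: float) -> bool:
    """True if cooperating maximizes Agent 1's expected payoff, where p is the
    probability Agent 1 assigns to Agent 2 cooperating (footnote 42)."""
    ev_cooperate = p * a1 + (1 - p) * b1
    ev_defect = p * c1 + (1 - p) * d1
    return ev_cooperate > ev_defect

def is_collective_action_problem(a1, b1, c1, d1, a2, b2, c2, d2,
                                 p: float, q: float) -> bool:
    """True if neither agent finds cooperation rational even though both
    prefer mutual cooperation to mutual defection (a_i > d_i)."""
    agent1_defects = not cooperation_is_rational(a1, b1, c1, d1, p)
    # For Agent 2 the off-diagonal payoffs swap roles: c2 is what Agent 2
    # gets from unreciprocated cooperation, b2 from unreciprocated defection.
    agent2_defects = not cooperation_is_rational(a2, c2, b2, d2, q)
    return agent1_defects and agent2_defects and a1 > d1 and a2 > d2

# A symmetric Prisoner's Dilemma: a collective action problem at any trust level.
print(is_collective_action_problem(3, 0, 4, 1, 3, 4, 0, 1, p=0.9, q=0.9))  # True
```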
This lets us identify five highly interrelated factors that increase an agent’s incentive to cooperate.
These factors are as follows, where expected values are relative to the agent's beliefs:[^43]

[^43]: If Agent 1 and Agent 2 are not in an anti-coordination game, then Agent 1's incentives to cooperate increase as (1) p increases, (2) the expected value of a1 increases, (3) the expected value of b1 increases, (4) the expected value of c1 decreases, and (5) the expected value of d1 decreases. Naturally, the inverse of each of these factors will decrease the agent's incentive to cooperate.
* (1) High Trust: being more confident that others will cooperate (p, q)[^44]
* (2) Shared Upside: assigning a higher expected value to mutual cooperation (a1, a2)
* (3) Low Exposure: assigning a lower expected cost to unreciprocated cooperation (b1, c2)
* (4) Low Advantage: assigning a lower expected value to not reciprocating cooperation (c1, b2)
* (5) Shared Downside: assigning a lower expected value to mutual defection (d1, d2)

[^44]: This is not the only definition of 'trust', but it is the one that is most relevant to the current analysis.
The last four factors each refer to the expected value of an action conditional on the behavior of the other agent, such as cooperating and having your cooperation reciprocated.
Note that the expected value of an action depends on how good the agent perceives the outcome to be and how likely the agent perceives it to be. This means that an agent could be in a 'low exposure' scenario if she considers unreciprocated cooperation to be not very costly or not very likely or both.
We can provide agents with evidence about the likelihood and value of each outcome by changing the world in some perceptible way, e.g. by offering a reward for responsible development, or by giving them evidence about the way the world already is, e.g. by correcting false beliefs.
It is useful to separate the degree of trust (factor 1) from incentives (factors 2-5) in order to discuss its role in cooperation, but trust is not independent of incentives or vice versa.
If one agent comes to trust an agent more, this increases the expected value of the outcomes that involve cooperation.[^45] The same is true in reverse: if the expected value of the outcomes that involve cooperation increases, it is more likely that the other agent will cooperate. In other words, increasing trust can increase incentives to cooperate, and increasing incentives to cooperate can increase trust between agents.[^46]

[^45]: We say 'in the situations we consider here' because if the agents are in an anti-coordination game then increasing Agent 1's trust in Agent 2 will decrease Agent 1's incentives to cooperate.

[^46]: Again, this will not be true in certain anti-coordination games.
This means that if a company can provide information about itself that increases the probability the other assigns to it cooperating, this will increase the degree of trust between the companies and make it more likely each company's trust threshold will be met. Two important facts follow from this. First, information that companies provide about their intentions and actions---how transparent they are---can play an important role in whether other companies will cooperate with them. Second, trust is prey to virtuous and vicious cycles. If one company demonstrably increases its trust in another, the other company should increase its trust in return. But if one company demonstrably decreases its trust in another, the other company should decrease its trust in return.[^47]

[^47]: This is one reason why a degree of 'forgiveness' can be strategically valuable: it can prevent errors, misinterpretations, or aberrant behavior from plunging both players into a vicious cycle of distrust prematurely, and can pull players out of such a cycle (Axelrod, [1980](#bib.bib9)).
A real world race to the bottom on safety would unfold over many interactions.
The factors identified here also increase the prospect of cooperation in sequential games, however.[^48]

[^48]: The main adjustment we need to make to the factors above in extensive form games will be to the first factor: high trust. If we let Ci mean that agent i cooperates and assume that agents can only either cooperate or defect, in extensive form games our 'high trust' factor would say that the incentives for Agent 1 to cooperate with Agent 2 increase as p(C2|C1) and p(¬C2|¬C1) increase. The other four factors can remain largely unchanged.
And iterated collective action problems are generally easier to solve than one-shot collective action problems because, in iterated collective action problems, players have an incentive (and opportunity) to cooperate early in the game in order to establish trust and avoid retaliation.[^49] Using one-shot games to illustrate our points is therefore more likely to skew us towards undue pessimism about our ability to solve races to the bottom rather than undue optimism.

[^49]: In an iterated Prisoner's Dilemma, for example, cooperation can be incentivized by things like the threat of retaliation and the promise of reciprocity (Axelrod, [1984](#bib.bib8); Nowak, [2006](#bib.bib95)). The promise of reciprocity increases the expected value of mutual cooperation today (shared upside) and the threat of retaliation decreases the expected value of betraying the cooperation of others today (low advantage). The payoff structure of the iterated Prisoner's Dilemma may therefore be more like that of a Stag Hunt (Seabright, [1993](#bib.bib113), p.123). See Mailath et al. ([1991](#bib.bib90)) on the extent to which important features of extensive form games can be preserved in normal form.
One shortcoming of our analysis, however, is that it appeals to an overly simplified conception of cooperation and defection. First, we assume that the options available to agents can be divided into 'cooperation' and 'defection'. In reality, cooperation will come in varying degrees---companies can invest different amounts in responsible development, for example---and it would be better to talk about the degree of cooperation that we can expect between agents.[50] Second, we assume that companies make an intentional decision to cooperate or defect over time. In reality, companies could fail to foresee the consequences of investing very little in areas like safety, and may therefore defect without intending to. Third, we assume that both companies perfectly understand the actions and assertions of the other. In reality, it may not be clear whether a company is living up to an agreement to develop AI responsibly. If agreements are not clear then there may not be a bright line between defection and non-defection that companies can respond to (Chassang, [2010](#bib.bib22); Gibbons and Henderson, [2012](#bib.bib56)). A more complete analysis of collective action problems in AI development should build a more realistic model of what cooperating and defecting during AI development would look like.

[50]: We are attempting to illustrate the general structure of reasons to cooperate in AI development rather than analyzing a particular case in detail.
We have argued that in order to "solve" a collective action problem, we can try to transform it into a situation in which mutual cooperation is rational.[51] If we can transform it into a situation in which agents have lower minimum trust thresholds (generally determined by the payoff matrix) and greater trust of each other---greater confidence that if they cooperate, others will reciprocate (Kydd, [2007](#bib.bib82), p. 9)---then we should expect a higher degree of mutual cooperation. Given this, we should expect 'lower conflict' collective action problems---problems in which agents have stronger incentives to cooperate---to be easier to solve than 'higher conflict' collective action problems---problems in which agents have weaker incentives to cooperate.[52] For example, the Stag Hunt is easier to solve than the Prisoner's Dilemma. All possible adjustments to the Prisoner's Dilemma that result in a solution will, if applied to the Stag Hunt, result in a solution to that problem as well. But only some adjustments to payoffs and probabilities that solve the former would also solve the latter.

[51]: Sometimes collective action problems are the result of one or more agents having mistaken beliefs about the expected value of cooperating and defecting. When this is the source of the problem, it can be 'solved' by correcting these misconceptions.
[52]: Scenarios in which agents have stronger incentives to cooperate with one another involve less 'conflict' (Robinson and Goforth, [2005](#bib.bib108); Schelling, [1980](#bib.bib112)). How easy it is to solve collective action problems depends both on the degree of conflict involved and on the nature and magnitude of the resources we have at our disposal.
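The "minimum trust threshold determined by the payoff matrix" can be computed directly. The sketch below uses the standard R/T/P/S payoff labels; the derivation follows from the expected-value comparison stated in the comment, and it shows why the Stag Hunt admits a threshold while the Prisoner's Dilemma does not.

```python
# Minimum trust threshold implied by a 2x2 payoff matrix, using standard
# labels: R = mutual cooperation, T = temptation (defect on a cooperator),
# P = mutual defection, S = sucker's payoff. Cooperating is a best response
# iff p*R + (1-p)*S >= p*T + (1-p)*P, where p is the probability assigned
# to the other player cooperating. Assumes P > S, as in the games discussed.
from typing import Optional

def min_trust_threshold(R: float, T: float, P: float, S: float) -> Optional[float]:
    gain_if_they_cooperate = R - T  # gain from cooperating when they cooperate
    loss_if_they_defect = P - S     # loss from cooperating when they defect
    if gain_if_they_cooperate <= 0:
        return None  # defection dominates; no level of trust suffices
    return loss_if_they_defect / (gain_if_they_cooperate + loss_if_they_defect)

print(min_trust_threshold(R=4, T=3, P=2, S=1))  # Stag Hunt: 0.5
print(min_trust_threshold(R=3, T=5, P=1, S=0))  # Prisoner's Dilemma: None
```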
### 2.4 The cooperative factors in AI development
Whether an AI development race will result in a collective action problem and, if so, how bad it will be are both open questions.[53]

[53]: It is also worth bearing in mind that scenarios can be superficially similar to collective action problems even though it is in everyone's interest to cooperate.
But there are many features of an AI development race that affect both the likelihood and the severity of collective action problems. For example, having close frontrunners would likely worsen a collective action problem---would reduce the tractability of resolving it---because this increases the expected value frontrunners will assign to not reciprocating the cooperation of others (low advantage) and therefore increases the probability they assign to not having their own cooperation reciprocated (high trust).[54] Similarly, a misaligned perception of the risks associated with different AI systems could worsen a collective action problem if it causes less cautious companies to assign a lower cost to not reciprocating cooperation (low advantage), which could increase the probability that cautious companies assign to having their cooperation go unreciprocated by less cautious companies (high trust) and increase the expected harm that cautious companies expect to arise from incautious companies getting ahead this way (low exposure).

[54]: This is consistent with the conclusion of Armstrong et al. ([2016](#bib.bib7)) that frontrunners will take more risks if they have a close competitor. The claim that competition is more intense among close competitors has also been made in the literature on R&D races (Grossman and Shapiro, [1985](#bib.bib59); Harris and Vickers, [1987](#bib.bib64)).
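The point about close competitors can be illustrated numerically. The toy model below is loosely inspired by Armstrong et al. ([2016](#bib.bib7)) but uses our own much-simplified functional forms: capability trades off one-for-one against safety effort, the more capable system wins, and the winner only captures the prize if its system turns out safe.

```python
# Toy race model (our simplification, not the cited paper's): the winner
# banks the prize with probability equal to its safety effort, so cutting
# safety buys capability at the cost of expected payoff. The frontrunner's
# optimal safety effort falls as its capability lead shrinks.

import numpy as np

def best_safety(lead: float, rival_safety: float, prize: float = 1.0) -> float:
    safety_grid = np.linspace(0.0, 1.0, 1001)
    wins = (1.0 - safety_grid) + lead > (1.0 - rival_safety)  # capability race
    expected_payoff = np.where(wins, prize * safety_grid, 0.0)
    return float(safety_grid[np.argmax(expected_payoff)])

for lead in [0.5, 0.2, 0.05, 0.0]:
    print(f"lead={lead:.2f} -> optimal safety {best_safety(lead, rival_safety=0.8):.3f}")
```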
Features that affect the likelihood and severity of a collective action problem for responsible development can be used to decrease that likelihood and severity if they are features that we can control. For example, fundamental distrust between companies is likely to worsen a collective action problem because companies are less likely to expect that their cooperation will be reciprocated (high trust). Building trust between AI companies can therefore decrease the severity of collective action problems. An AI development race in which the expected value of winning is much greater than the expected value of losing is also likely to involve a worse collective action problem (low exposure and low advantage).[55] If close frontrunners worsen collective action problems, AI companies may agree to take steps to avoid engaging in a harmful race to the bottom on safety. For example, citing concerns about race dynamics, OpenAI ([2018](#bib.bib98)) have stated that "if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."

[55]: We have focused on cases that involve cooperation, but we can use the more cooperation-neutral factors in note 43 to look at the expected cost to one company if another company wins, regardless of the degree of cooperation between the two companies. To give another example of this, Armstrong et al. ([2016](#bib.bib7)) discuss the level of enmity between companies. Higher enmity would be expected to worsen a collective action problem by increasing the cost of losing the race to the other company (low exposure).
The mechanisms to incentivize investment in product safety outlined in the previous section—market forces, regulation, and liability—all operate to prevent collective action problems for product safety. Consumers often pay less for products that are unsafe (low advantage and shared downside) and more for safe products (shared upside and low exposure). Government regulation either removes the option to underinvest in safety or increases the cost of underinvesting in safety via sanctions and fines (low advantage and shared downside). And the possibility of being held liable for harms caused by unsafe products decreases the expected value of underinvesting in safety to get ahead (low advantage and shared downside).
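As a minimal illustration of how such external mechanisms reshape incentives, the sketch below applies a regulatory fine for under-investment to illustrative Prisoner's Dilemma payoffs; the numbers are our own, not the paper's.

```python
# How an external sanction reshapes the payoff matrix: start from Prisoner's
# Dilemma payoffs and subtract a fine from any company that defects
# (under-invests in safety). With a large enough fine, cooperating becomes
# the best response no matter what the other company does.

pd_payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def with_fine(payoffs: dict, fine: float) -> dict:
    return {(me, them): value - (fine if me == "D" else 0.0)
            for (me, them), value in payoffs.items()}

print(with_fine(pd_payoffs, fine=3.0))
# Defecting now yields 2.0 against a cooperator (vs. 3 for cooperating) and
# -2.0 against a defector (vs. 0), so the dilemma dissolves.
```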
Market forces, regulation, and liability are all mechanisms operating outside of the AI industry that affect the incentives AI companies have to develop responsibly. But if responsible AI development is a collective action problem, then each AI company expects to benefit from being in a better equilibrium and therefore has an incentive to ensure that the AI industry itself collectively coordinates to maintain some acceptable level of responsible development. Companies should be willing to invest in cooperative mechanisms to the degree that these mechanisms increase the likelihood that they will be able to capture the cooperation surplus: the additional expected value that cooperation would generate for them.[56] The greater this surplus, the more companies should be willing to pay for credible demonstrations of their intention to cooperate.

[56]: This concept is similar to the Harsanyi dividend, which quantifies the value created by a coalition (Harsanyi, [1963](#bib.bib65)). The value of trust as a commodity is explored by Dasgupta ([2000](#bib.bib37)).
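In the simplest symmetric case, the cooperation surplus is just the gap between the mutual-cooperation and mutual-defection payoffs; a two-line illustration with the same toy numbers used above:

```python
# Cooperation surplus in the simplest symmetric case: the per-company gap
# between mutual cooperation (R) and mutual defection (P). Illustrative numbers.
R, P = 3, 1
surplus = R - P  # each company should pay up to this much (here: 2)
print(surplus)   # for mechanisms that credibly secure mutual cooperation
```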
Because companies stand to gain from such coordination, industry-led mechanisms like greater self-regulation could also be developed to incentivize responsible AI development.[57]

[57]: King and Lenox ([2000](#bib.bib76)) highlight the difficulties of self-regulation by looking at the chemical industry's Responsible Care program. A self-regulatory program that is considered more successful, however, is the Institute of Nuclear Power Operations (INPO). See Coglianese and Mendelson ([2010](#bib.bib29)) for an analysis of both.
### 2.5 Summary
In this section we argued that responsible AI development may take the form of a collective action problem. We also identified five factors that generally increase the likelihood of mutual cooperation and can help solve such collective action problems. In the next section we will translate this into more concrete suggestions for increasing cooperation on safety between AI companies.
3 Strategies to improve AI industry cooperation on safety
----------------------------------------------------------
In the previous section, we argued that five factors make it more likely that AI companies will cooperate when faced with a collective action problem: (1) being more confident that others will cooperate, (2) assigning a higher expected value to mutual cooperation, (3) assigning a lower expected cost to unreciprocated cooperation, (4) assigning a lower expected value to not reciprocating cooperation, and (5) assigning a lower expected value to mutual defection.
These five factors give high-level direction for ensuring that the fruits of cooperation in AI are realized. However, it is not always obvious what they mean in the real world, so they need to be translated into tangible policy strategies that various actors can implement in order to improve the prospects for cooperation.
It is impossible to prescribe such strategies fully in advance, both because we lack the information about the future that would be needed to make informed decisions now, and because a particular policy proposal could be effective if well implemented but counterproductive if poorly executed. However, while detailed, long-term policy prescriptions would be premature today, there are several coarse-grained strategies that seem robustly desirable even if some of the low-level details require research, dialogue, and the passage of time before they can be clarified.
We believe that the four strategies we identify in this section are robustly desirable in the sense that they all have substantial benefits with respect to at least one of the factors above, and are unlikely to be very harmful with respect to the others.
### 3.1 Promote accurate beliefs about the opportunities for cooperation
As noted in prior sections, there are multiple competing conceptions of AI development. In cases where people are demonstrably uninformed about key aspects of AI development, it is likely beneficial to correct them and, more generally, for stakeholders to make nuanced public statements consistent with a spirit of cooperation on norms of responsible development.
Some misconceptions that should be corrected in order to improve prospects for such cooperation include incorrect beliefs that safety and security risks can be safely ignored (Brundage, Avin, et al., [2018](#bib.bib16); Amodei, Olah, et al., [2016](#bib.bib3); Ortega, Maini, et al., [2018](#bib.bib99)), an unwarranted focus on relative gains and losses instead of absolute gains and losses (shared upside, low exposure, low advantage, shared downside), and the mistaken belief that interests are more misaligned than they are (low exposure and low advantage). In addition to correcting specific misconceptions, there is also likely value in proactively informing people about the general case for cooperating on responsible development.
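The second of these misconceptions can be made precise: evaluating outcomes by relative rather than absolute payoffs turns any game zero-sum. A small worked example with illustrative Stag Hunt payoffs (our numbers):

```python
# A company that scores outcomes by its payoff *relative* to a rival
# (u_i - u_j) rather than its absolute payoff u_i turns every game
# zero-sum, erasing the shared upside of cooperation.

stag_hunt = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (4, 4), ("C", "D"): (1, 3),
    ("D", "C"): (3, 1), ("D", "D"): (2, 2),
}

relative_view = {moves: (a - b, b - a) for moves, (a, b) in stag_hunt.items()}
print(relative_view)
# {('C','C'): (0, 0), ('C','D'): (-2, 2), ('D','C'): (2, -2), ('D','D'): (0, 0)}
# Viewed this way, mutual cooperation looks no better than mutual defection.
```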
As an example of such proactive communication, recent years have seen substantial effort by researchers and activists to highlight the biases being learned by deployed AI systems in critical societal domains such as criminal justice and in widely used technological platforms such as recommender systems. This work has highlighted the risks of incautious development to a large and growing swathe of the AI community. Similarly, concerns have been raised about the bias, efficacy, and other properties of medical AI systems, as well as self-driving vehicles and other emerging technologies. Analyzing and communicating these sorts of risks is critical for generating interest in cooperation among a sufficiently wide range of actors, as well as in identifying appropriate norms around research, publication, and deployment given the safety risks and the ways of mitigating them that have been identified.
In many cases, common knowledge that multiple parties share a concern or interest can be critical for the initiation of cooperation, and a misconception that parties lack such a shared concern or interest could be damaging to cooperation on issues like safety. Avoiding such misunderstanding may be particularly important in the case of international cooperation on responsible AI development across distinct countries with different languages and cultural frames of reference.
Propagating accurate information about existing beliefs can also be valuable, as it allows multiple parties to stabilize their expectations. For example, the Asilomar AI Principles (Future of Life Institute, [2017](#bib.bib53)) commit the many signatories to arms race avoidance, and various statements of principles before and after this have similarly committed many actors to various (admittedly still abstract) cooperative statements and actions. Expanding the breadth and depth of such dialogue, especially across cultural and language boundaries, will be critical in fostering understanding of the large gains from mutual responsible development (shared upside) and the large losses from mutual irresponsible development (shared downside), and in establishing common knowledge that such understanding exists (high trust).
It is possible to create positive spirals of trust, in which an increase in one party’s trust causes the trusted party to increase their trust in turn. We can also stumble into negative trust spirals, however, in which a loss of trust leads to further distrust between parties. It is therefore also important to avoid feeding into unnecessarily adversarial rhetoric about AI development, lest it become self-fulfilling (Kreps, [2019](#bib.bib81)).
### 3.2 Collaborate on shared research and engineering challenges
On a range of possible research challenges—from basic AI research to applied AI projects to AI safety and security research—it can be beneficial for multiple parties to actively pool resources and ideas, provided this can be done in a way that is procompetitive and compliant with antitrust laws [(FTC/DoJ, 2000)](#bib.bib52), does not raise security concerns for the participating companies, and so on.
Joint research can provide value for cooperation via useful technical insights (such as solutions to safety problems; low exposure and low advantage), by stabilizing expectations regarding who is working on what via public information about joint investments as well as interpersonal dialogue (versus work being shrouded in secrecy; high trust and shared downside), by concretizing the joint upsides of AI (e.g. AI-for-good collaborations; shared upside), and by facilitating more societally beneficial publication and deployment decisions by various actors (e.g. via collaborative analysis of the risks of specific systems; shared upside).[58]

[58]: For example, OpenAI's approach to the release of the GPT-2 language model family (Radford, Wu, et al., [2019](#bib.bib106)) involves staged release, in which a model is released incrementally due to safety and security concerns, and partnership-based sharing, in which a model is shared with a small number of research partners to enable research on that system without necessarily requiring broad-based access. This experiment in responsible publication, and others like it, such as the Allen Institute for Artificial Intelligence and the University of Washington's approach to their Grover family of language models, may help to "derisk" this particular form of research-level collaboration discussed in the next subsection. Note that we refer specifically here to active and explicit research collaboration, of which some already occurs, alongside a much greater amount of implicit collaboration on AI research that already exists due to the high degree of openness in the AI research community.
Active and explicit research collaboration in AI, especially across institutional and national borders, is currently fairly limited in quantity, scale, and scope. This is for a range of reasons. In order to maintain legitimate academic and industrial competition, researchers or their managers may be averse to publishing certain research outputs early or at all. And research ideas, results, datasets, and code can be hard to disentangle from proprietary product plans and technical infrastructure. Furthermore, safety or security considerations can in some cases make the joint analysis of a particular system more challenging than it would otherwise be (Radford, Wu, et al., [2019](#bib.bib106)). There are also linguistic and logistical barriers to collaborating across long distances and across different cultures and languages.
While we acknowledge that such challenges exist, we advocate a more thorough mapping of possible collaborations across organizational and national borders, with particular attention to research and engineering challenges whose solutions might be of wide utility. Areas to consider might include joint research into the formal verification of AI systems’ capabilities and other aspects of AI safety and security with wide application; various applied “AI for good” projects whose results might have wide-ranging and largely positive applications (e.g. in domains like sustainability and health); coordinating on the use of particular benchmarks; joint creation and sharing of datasets that aid in safety research; and joint development of countermeasures against global AI-related threats such as the misuse of synthetic media generation online.
### 3.3 Open up more aspects of AI development to appropriate oversight and feedback
Openness about one’s beliefs, actions, and plans is critical to establishing trust generally. In the case of AI development, those building and deploying AI systems need to provide information about their development process so that users can make informed decisions. Likewise, governments need to be able to appropriately oversee safety-critical AI systems, and (in the absence of relevant regulation) companies need to be able to provide information to one another that shows they are following appropriate norms.
The general appeal of openness for cooperation-related reasons does not imply that all aspects of AI development should always be open, and as AI systems become more capable, it will be increasingly important to decide responsibly what should and shouldn’t be made open (Brundage, Avin, et al., [2018](#bib.bib16); Bostrom, [2017a](#bib.bib12); Krakovna, [2016](#bib.bib79)). Full transparency is problematic as an ideal to strive for, in that it is neither necessary nor sufficient for achieving accountability in all cases (Desai and Kroll, [2017](#bib.bib43); Ananny and Crawford, [2018](#bib.bib4)). Further, some information about AI development cannot or should not be shared for reasons of safety, security, ethics, or law. For example, AI developers might legitimately be wary of releasing code that is intimately tied to proprietary infrastructure, and should certainly be wary of releasing private data as well as AI systems that are easily amenable to misuse.
Given that full openness is rarely called for, but that some openness is required for building trust, there is a need for continuing effort to implement existing modes of trust-building in AI, as well as to discover new ones. Different mechanisms for achieving openness regarding how AI systems are developed and operated include, e.g., publicizing decision-making principles and processes, explaining publication/release decisions, sharing accessible information about how particular AI systems and broad classes of AI systems work, allowing external visitors to the lab, and opening up individual AI systems to detailed scrutiny (e.g. via bug bounties or open sourcing).
Such openness is critical in allowing reputation to play its stabilizing role in cooperation. Indeed, some actors have explicitly pointed to the challenges of monitoring the development and use of lethal autonomous weapons as a reason not to agree to strict rules, suggesting that the inability to track others' behavior reliably could be a bottleneck on some forms of mutually beneficial cooperation (e.g. joint restraints on weapons development). In cases such as this, a richer set of tools for opening up actors to critical scrutiny and feedback (while managing the associated risks) would be useful, and we encourage continued exploration of approaches such as those mentioned above, as well as others, in order to widen the range of cooperative actions available to AI developers.
In combination, the appropriate application of transparency mechanisms such as these should reduce the severity of concerns about others behaving irresponsibly (low exposure), reduce the temptation to defect in partially competitive situations (low advantage), and increase confidence that others’ statements about their behavior are accurate (high trust). Openness is a particularly powerful strategy, and applicable to a wider range of cooperation problems, if it can be gradually ratcheted up in an iterative fashion between parties, as opposed to happening all at once. This gradual approach can reduce the temptation to defect at any particular stage (low advantage) and increase confidence in others cooperating (shared downside).
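A sketch of what such iterative ratcheting could look like as a protocol; this is our own toy formalization, not a proposal from the paper:

```python
# "Ratcheted" openness: parties disclose increments of information in
# lockstep, each step small, and stop as soon as the counterparty fails to
# match. The stake exposed at any single step stays bounded, reducing the
# temptation to defect at that step.

def ratchet(increments: list, a_willing: int, b_willing: int) -> list:
    """Return the increments actually disclosed by both parties."""
    disclosed = []
    for step, increment in enumerate(increments):
        if step >= a_willing or step >= b_willing:
            break  # one party declined to match; the ratchet stops here
        disclosed.append(increment)
    return disclosed

increments = ["publish decision-making principles", "explain release decisions",
              "host external visitors", "allow detailed system audits"]
print(ratchet(increments, a_willing=4, b_willing=2))
# ['publish decision-making principles', 'explain release decisions']
```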
### 3.4 Incentivize adherence to high standards of safety
Cooperative actors might want to introduce additional incentives (reward and/or punishment) related to responsible AI development beyond those that exist today or would exist by default in the future. For example, such actors might strongly value compliance with certain norms intrinsically, and prefer that those who comply with appropriate norms be rewarded; one might want to deliberately bring about an incentive for oneself to act in a certain way, as a commitment mechanism; one might want to use incentives as a complement to other governance tools such as monitoring of behavior and direct regulation; or one might want to influence the incentives of many actors in a particular direction generally, and support policies that bring this about.
There are several categories of incentives that one might want to consider in this context. Creating incentives for key actors to act cooperatively, if done effectively, would help with all five factors simultaneously. Potential incentives include:
* Social incentives (e.g. valorizing or criticizing certain behaviors related to AI development) can influence different companies' perceptions of risks and opportunities.
* Economic incentives (induced by governments, philanthropists, industry, or consumer behavior) can increase the share of high-value AI systems in particular markets or more generally, and increase attention to particular norms.[59]
* Legal incentives (i.e. proscribing certain forms of AI development with financial or greater penalties) could sharply reduce the temptation by some actors to defect in certain ways.
* Domain-specific incentives of particular relevance to AI (e.g. early access to the latest generation of computing power) could be used to encourage certain forms of behavior.

[59]: Mutual agreements to distribute the economic gains from winning an AI development race (O'Keefe et al., [forthcoming 2019](#bib.bib101)) could also decrease the severity of collective action problems.
As argued in each case above, these strategies are robustly desirable from the perspective of enabling cooperation, but our articulation of them leaves many questions unanswered. In particular, sharpening these recommendations and adapting them over time will require technical and social scientific research, creative institutional design, and bold policy experimentation, e.g. via regulatory markets as discussed in Hadfield and Clark ([2019](#bib.bib62)).
4 Conclusion and Future Directions
-----------------------------------
In this paper we have argued that competition between AI companies could create a collective action problem for responsible AI development. We have identified five key factors that make it more likely that companies will cooperate on responsible development: high trust, shared upside, low exposure, low advantage, and shared downside. We have shown that these five factors can help us to identify strategies to help AI companies develop responsibly and thereby realize the gains from cooperation. This also has important positive externalities for consumers and the general public.
If our analysis is on the right track then it is best thought of as the beginning of a program of research, rather than the last word on the subject. Much work needs to be done to identify whether collective action problems for responsible AI development will occur if we vary who is developing AI, how many entities are developing AI, what systems they are developing, and so on. More work must also be done to identify and evaluate strategies that can prevent or mitigate these kinds of collective action problems across a wide range of possible scenarios.
The possible future research directions on this issue are broad and we do not aim to provide a comprehensive list of them here, but examples of potentially fruitful research questions include:
1. How might the competitive dynamics of industry development of AI differ from government-led or government-supported AI development?
2. What is the proper role of legal institutions, governments, and standardization bodies in resolving collective action problems between companies, particularly if those collective action problems can arise between companies internationally?
3. What further strategies can be discovered or constructed to help prevent collective action problems for responsible AI development from forming, and to help solve such problems if they do arise? What lessons can we draw from history or from contemporary industries?
4. How might competitive dynamics be affected by particular technical developments, or expectations of such developments?
As we noted at the outset, there is substantial uncertainty about the nature and pace of developments in AI. If the impact of AI systems on society is likely to increase, however, then greater attention must be paid to ensuring that the systems being developed and released are safe, secure, and socially beneficial.
In this paper we argued that existing incentives to develop AI responsibly may be weaker than is ideal, and that this may be compounded by competitive pressure between companies, leading to a collective action problem on the responsible development of AI.
That such collective action problems will arise, or that they will persist if they do arise, is far from a foregone conclusion, however. Finding ways of preventing and solving these problems may require new ways of building trust in novel technological contexts and, in some cases, a willingness to assume some risk in the expectation that others will reciprocate. While intellectually and politically challenging, we think such efforts are integral to realizing the positive-sum potential of AI.
Acknowledgments
---------------
We are grateful to Michael Page, Jack Clark, Larissa Schiavo, Carl Shulman, Luke Muehlhauser, Geoffrey Irving, Sarah Kreps, Paul Scharre, Michael Horowitz, Robert Trager, Tamay Besiroglu, Helen Toner, Cullen O’Keefe, Rebecca Crootof, Ben Garfinkel, Adam Gleave, Jasmine Wang, and Toby Shevlane for valuable feedback on earlier versions of this paper. |
3bd7f776-a302-4690-8544-6a8c19e9aaaf | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What's in your list of important technical projects/experiments to run for AI alignment?
I'm guessing that technical alignment researchers have far more ideas for projects/experiments they think would be really useful to run than they have the capacity for (due to time/management/resource constraints). I'd like to know what would go in your list of important technical projects or experiments to run for AI alignment. (Another way to frame the question might be what experiments you wish you had the capacity to run yourself)
More importantly, I'd like to know *why* you'd be interested in someone/a team doing the projects, what open problems in alignment you hope they'd shed light on, what confusions about deep learning or generalization they'd resolve, etc.
I understand that some projects/technical experiments that would be good to run might demand state-of-the-art models, or even systems which don't exist yet, but part of the value of writing it down is to communicate what bits of information you're *looking for*. (Also, the kinds of projects people could partake in don't have to be directly useful for open problems in technical alignment. They could include projects like making wikis, forums, tools, libraries, etc.)
---
This question was inspired by:
1. [This question](https://www.lesswrong.com/posts/EPAofvLzsCwqYnekj/what-s-in-your-list-of-unsolved-problems-in-ai-alignment) about what would go on people's list of open problems in alignment
2. [Neel Nanda's list of concrete open problems in mechanistic interpretability](https://www.lesswrong.com/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability)
3. [Alex Turner's list of concrete open problems in activation engineering](https://www.lesswrong.com/posts/JMebqicMD6azB8MwK/open-problems-in-activation-engineering) |
9a6c4dfd-32cb-4f6e-beb3-fa857bc3de2a | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post165
TL;DR: If you are thinking of using interpretability to help with strategic deception, then there's likely a problem you need to solve first: how are intentional descriptions (like deception) related to algorithmic ones (like understanding the mechanisms models use)? We discuss this problem and try to outline some constructive directions.

1. Introduction

A commonly discussed AI risk scenario is strategic deception: systems that execute sophisticated planning against their creators to achieve undesired ends. In particular, this is insidious because a system that is capable of strategic planning and also situationally aware might be able to systematically behave differently when under observation, and thus evaluation methods that are purely behavioral could become unreliable. One widely hypothesized potential solution to this is to use interpretability, understanding the internals of the model, to detect such strategic deception. We aim to examine this program and a series of problems that appear on its way. We are primarily concerned with the following:

* Strategic deception is an intentional description or an intentional state. By intentional state, we mean that it involves taking the intentional stance towards a system and attributing mental properties, like beliefs and desires, to it; for example, believing that it is raining is an intentional state, while being wet is not.[1] In contrast to this, current interpretability has focused on the algorithmic description of behaviors. The safety-relevant properties to detect would largely appear as strategic intentional states about potentially deceptive actions.[2] We argue that intentional states are an importantly different level of description from algorithmic states, and it is not clear how to describe the former in terms of the latter. We think that studying the connection between algorithmic descriptions and intentional states has been underexplored, but it is likely an important prerequisite to building a deception detector.
* The different possible relationships between the levels of intentional and algorithmic states. A particularly convenient one is a "simple correspondence", where one algorithmic variable corresponds directly to an intentional state. We outline some alternative possibilities, which may make the job of designing a deception detector harder.
* Future work directions that consider a breakdown of strategic deception at the intentional level into simpler states that might be more amenable to a mechanistic approach.

2. The intentional and the algorithmic

If you study and work with LLMs, it's very likely that you've said or have heard other people saying something along the lines of "the model believes, understands, desires, [insert intentional verb] X". This is an intentional state description that presupposes the application of the intentional stance to LLMs: that is, treating them as agents to whom it makes sense to attribute mental states like beliefs and desires, at least as a predictive shorthand.[3] The intentional stance is useful for modeling the states of other agents: to the extent that LLMs engage in human tasks through their advanced linguistic capabilities, thinking in terms of intentional states is similarly appealing (see more in The Intentional Stance, LLMs Edition). Behavioral analyses that feature this terminology, attributing states like beliefs and desires to a system, are at the intentional level.
At the same time, the study of LLM brains, and especially mechanistic interpretability, seeks to discover how a model executes a given task. This is the algorithmic level: it describes the algorithmic details of how a system works (Marr, 1982). In the case of LLMs, that includes anything that concerns how the system is implemented (for example, findings both from reverse engineering and from SAEs). The task to be described may well be an intentional-level description. For example, we might consider that the goal of a particular sub-system is detecting objects of a certain kind, then want to understand the algorithms that implement this functionality. Recently, the focus of mechanistic interpretability work has shifted to thinking about "representations", rather than strictly about entire algorithms. It's not entirely clear how to think about a (hypothesized?) representational structure like the "Golden Gate Bridge" feature in these terms, but we think it should primarily be thought of as algorithmic; such concrete structures are surely part of a "how" story. The distinction between the two levels provides a useful set of abstractions for classifying different kinds of descriptions and explanations for a given system. A third level concerns the hardware implementation and focuses on the physical components required for the realization of the task in question.

[Figure: The hypothesized three levels of LLM cognition. We focus on the relationship between the top two.]

One success story for building a "deception detector" would be the following: if we understand the algorithmic operations of the neural network in sufficient detail, we will be able to detect safety-relevant intentional states, such as strategic deception. A deception detector would then be able to tell us whether a system is strategically deceptive at the behavioral level by picking out some aspect of the implementation of strategic deception at the algorithmic level. This idea underlies approaches that try to reverse engineer a network's algorithms, such as (Olah, 2023), as well as more recent attempts to find monosemantic features using sparse autoencoders, for example, (Templeton, 2024). The reverse engineering process translates into a search for circuits that are responsible for specific functions, a connection between the algorithmic level (circuits) and the intentional (cognitive functions). Researchers suppose that "the network believes X" and that, when the circuit is deleted, the model no longer knows anything about X. Similarly, work with sparse autoencoders relies on the idea of discovering a correspondence between concepts in the world and linear features in the neural network (Elhage et al., 2022).

3. What is the problem?

Understanding the connection between the two levels is a significant open problem, even assuming a generous amount of progress on interpretability. We expect that we would still not be able to build a reliable deception detector even if we had a lot more interpretability results available. The research approach in interpretability has generally been to work almost exclusively on expanding the domain of algorithmic interpretability, without directly engaging with the issue of whether this applies to the problem of identifying the intentional states of a model. In our case, that is to be able to tell whether a system is strategically deceiving the user.
One potential way for the two levels to be related is via a simple correspondence: there could be an algorithmic variable that corresponds to the intentional state. An example of this would be models that have a "refusal direction" which reliably triggers the model to refuse to answer a query. An intentional description - the model has decided to refuse - can then be replaced with an algorithmic read of that particular variable. Additionally, this idea of a simple correspondence seems like an implicit premise of a lot of work on SAEs: the expectation that there will be an explicit linear "feature" (or a set of such features) corresponding to intentional states that the model exhibits (like beliefs or knowledge).

It would be serendipitous for the designer of a deception detector if a simple correspondence of this form was the only possible way for the algorithmic and intentional descriptions of a system to be related. However, there are other conceivable ways for this correspondence to exist which are at least plausible a priori, and it perhaps seems unlikely that cognitive systems will exploit only simple correspondences. There isn't sufficient evidence to suggest that the connection between the two levels can be exclusively studied as a simple correspondence between intentional and algorithmic states. There are a few obvious potential ways for a more complex correspondence between intentional and algorithmic states to occur.

Perhaps there is no algorithmic variable directly corresponding to an intentional state we are interested in, but the intentional state in question is implemented in terms of simpler intentional states, which are in turn further divided until we reach a lower level where the capabilities have simple correspondences. There would then be a potential relationship between an intentional state and a set of underlying algorithmic states, but it could be quite complicated.

Another possibility is redundant or overlapping representations. This could provide insights about the system; e.g., perhaps we discover that a network's representation of a fact is implemented in three slightly different ways, either in the sense that different algorithms are implemented depending on the phrasing of a question, or in the sense of redundant mechanisms existing that complicate a direct correspondence to a single part of the system.

A more difficult possibility is emergence from irreducible complexity. By irreducible complexity, we mean something like Wolfram's concept of computational irreducibility: there is no good way of predicting the behavior of an irreducible system other than simulating it in detail. Examples of irreducible complexity include turbulent behavior in fluid dynamics, or complex cellular automata like the Game of Life: these systems might just not have a description that is much more compact than the system itself. The irreducibility of a phenomenon doesn't rule out the possibility that there may be useful higher-level abstractions, like the gliders in the Game of Life, or thermodynamic quantities in statistical mechanics, which are to a certain extent causally insulated from their underlying substrate (see Beckers and Halpern and Rosas et al. for more discussion). The kind of stable behavior that implements intentional states could then be of this form; like the gliders in the Game of Life, it may emerge from a level of simple underlying algorithmic behavior, but have no direct correspondence to the level below: we might say that such states are represented tacitly.
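As a concrete illustration of how a "simple correspondence" of the refusal-direction sort is often operationalized in practice, here is a minimal difference-of-means probing sketch. The data, shapes, and function names are stand-ins for illustration and do not reproduce any particular paper's method.

```python
# Minimal sketch of a "simple correspondence" test: extract a candidate
# direction as the difference of mean activations over contrasting prompt
# sets, then score new activations by projection onto that direction.

import numpy as np

def candidate_direction(refusal_acts: np.ndarray, comply_acts: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction from (n_examples, d_model) arrays."""
    direction = refusal_acts.mean(axis=0) - comply_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def refusal_score(activation: np.ndarray, direction: np.ndarray) -> float:
    """Projection onto the candidate direction; higher suggests refusal."""
    return float(activation @ direction)

rng = np.random.default_rng(0)  # synthetic stand-in activations
refusals = rng.normal(0.5, 1.0, size=(100, 64))
compliances = rng.normal(-0.5, 1.0, size=(100, 64))
direction = candidate_direction(refusals, compliances)
print(refusal_score(refusals[0], direction), refusal_score(compliances[0], direction))
```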
A further example of tacit representation, discussed in 'The strong feature hypothesis might be wrong', is that of a chess-playing machine with a tree search; the fact that the model knows a particular move is better than another is implemented in the algorithmic process of the tree search, but there is no one fact about the algorithm implementing this intentional state from which you could read it off. This possibility suggests it might even be a category mistake to be searching for an algorithmic analog of intentional states. This has parallels in studying biological brains and minds. Characteristically, Gilbert Ryle points to a similar possibility in "The Concept of Mind":

> A foreigner visiting Oxford or Cambridge for the first time is shown a number of colleges, libraries, playing fields, museums, scientific departments and administrative offices. He then asks 'But where is the University?'

Ryle here responds by saying that the University is a matter of the organization of all the parts of campus the visitor has already seen, rather than one particular entity. It could be similarly mistaken if brain sciences are also looking for a single algorithmic entity corresponding to an intentional phenomenon such as deception. It could be that some percentage of the system is made out of interpretable algorithms with simple correspondences, and yet the safety-relevant behavior of strategic deception falls into the low percentage that isn't. These problems essentially imply that if we want a deception detector derived from an understanding of the mechanisms of deception, we have to overcome the difficulties that appear as we move from the intentional to the algorithmic level and back.

4. What deception, exactly?

We've already mentioned that by deception we mean strategic deception, since we're interested in models that can plan their deceptive actions (Hobbhahn et al., 2024). Strategic deception requires capabilities that are different from unintentionally being misleading or inaccurate due to hallucinations (Athaluri et al., 2023). Consider, for instance, definitions of deception that involve models outputting incorrect information either because they are not capable enough to do otherwise or because of their fine-tuning. In these cases, e.g., in sycophancy (Sharma et al., 2023), the models are not strategically deceptive. If they were strategically deceptive, they would counterfactually behave differently if the state of affairs in the world were different. However, purely sycophantic models do not have that degree of flexibility or sensitivity to environmental input.

Our working hypothesis is that strategic deception is a complex intentional state: it consists of different sets of dispositions and intentional objects required for executing the targeted task. It thus likely presupposes a series of cognitive capabilities that make the model sensitive to the states of the world. We seek to understand the most prominent of them. We hope that this is fruitful for interpretability work on deception, under the hypothesis that many or most of these will turn out to have simple or complex correspondences in the sense discussed in the previous section, where it is possible to find algorithmic correspondences to the intentional states of interest. We haven't ruled out the possibility of intentional states being represented tacitly in the sense discussed in the previous section.
We think that this remains an important possibility, though if it turns out to be the case, it's not clear to us what our research program should be. To the degree that systems depend on such irreducible processes, it may be difficult, or even impossible, to make concise generalizations about their behavior. We have focused on decomposing deception into sub-capabilities which can each be examined in isolation as an empirical roadmap, though we think that the problem of dealing with tacit representation deserves considerably more attention in interpretability.

5. Decomposing strategic deception

How can we go from strategic deception as a complex intentional state to many simple ones that we could separately study at the algorithmic level? Our answer is to think about the capabilities likely necessary for strategic deception (a decomposition inspired by Ryle's metaphor; see section 3). These are potentially also complex intentional states calling for further analysis. However, they're likely already present in LLMs at least to some degree, which allows for experimentation and testing (He et al., 2024; Vilas et al., 2024).

5.1. Developing world models

World models are the center of debate in LLM research and there are likely many plausible interpretations of the formal definition of what a world model is. Without going too deep into this debate, we think that the following observation is highly relevant for studying strategic deception. Strategic deceptiveness implies a causal dependence on the model's own beliefs about what other agents think and what they will think in response to the agent's actions: this requires at least some capacity for a theory of mind (Strachan et al., 2023). A world model can be defined as a more or less granular internal representation of the agent's environment (including itself within it). Notice that the LLM does not need to have the most accurate model to exhibit capabilities related to prediction. An example of this appears in Alignment Faking, where the model makes a false prediction about a future state of affairs following its own action (Greenblatt et al., 2024). Strategic deception relies on having a world model, even if it's not the correct one, because it reveals that the system has a capability for acting conditional on its prediction of future behaviors, however inaccurate they might be. A model that always engages in "deceptive" behavior regardless of the environment and its beliefs (if any) about other agents is not by default strategically deceptive; e.g., a model that has been trained to always say that the sky is green is not deceptive.

5.2. Modeling the self

This capability concerns models that understand what they are, what they do, and how they were trained. It seems that LLMs are also able to self-evaluate or to "know what they know" (Kadavath et al., 2022). It's especially relevant for alignment to test whether a model is situationally aware: whether the model can tell what phase it is in, namely, training, testing, or deployment (Berglund et al., 2023; Ngo et al., 2023). So far, research on the ability of LLMs to model themselves is focused on behavioral evaluations: for example, testing models with self-report questions to inquire about their internal states (Perez & Long, 2023) or inducing models into introspective states (Binder et al., 2024). LLMs have been found to exhibit some degree of metacognitive ability when they're thinking about thinking (Didolkar et al., 2024). All these tests give us a sense of what self-modeling looks like in these systems.
5.3. Modeling user states

Model evaluation results show that state-of-the-art LLMs can infer and predict the preferences of the user (Staab et al., 2023) as well as personal details including a user's age, gender, educational level, and socioeconomic status (Chen et al., 2024). They are essentially capable enough to apply the intentional stance to a human. This is advantageous for the same reasons it has historically been a useful heuristic in interpersonal interactions among humans. Conversely, the better a system becomes at modeling other agents, the better it models itself. From the safety perspective, models could use this to be better sycophants or schemers: manipulate humans into serving their goals, take advantage of their weaknesses or lack of information, and make them take actions they would not endorse at a reflective level (Bostrom, 2016; Carlsmith, 2023).

5.4. Long-term goal directedness

The pursuit of a goal in the long term requires all of the above capabilities. The most obvious way to do this requires the equivalent of episodic memory, which would allow the system to model itself as a coherent entity with continuity, and hence to project a certain goal into future states. Alternatively, it might be possible for a goal-directed system to reason about the behavior of future copies of itself and similar systems. The memory requirement is empirically not sufficient for strategically deceptive behavior; it seems nevertheless important to note that memory impacts a model's perception of itself and its environment. Carlsmith flags this as relevant for understanding "model time" (Carlsmith, 2023, p. 52), which presumably largely affects the degree to which a model could be engaging in deceptive behavior. In cases where "model time" and "calendar time" are not in sync, the model could exhibit background activity and develop goals independent of the task assigned by the user.

5.5. Hidden reasoning

Hidden reasoning, while not sufficient on its own, is another requirement for successful deceptive alignment, at least assuming that the model operator implements some sort of monitoring of a visible chain of thought that would catch deceptive reasoning taking place "in plain sight".[4] It is typically understood as the kind of reasoning the model manages to hide from the overseer. Models can accomplish that through different pathways. For example, there are cases of LLMs encrypting information using steganography techniques (Roger & Greenblatt, 2023). LLMs are often tested and told that they can use a scratchpad to note their thoughts, since they produce a legible chain of thought. In these tests, they are also told that the overseers won't be accessing that information. One speculation is that as models become more capable, they won't trust the word of their overseers and will likely not reveal important information through this method. It's notable that Greenblatt et al.'s experiments with alignment faking demonstrated that the model was apparently able to pursue the alignment-faking strategy without verbalizing it on occasion. For the time being, models that can hide their thoughts are not necessarily strategically deceptive, although it is reasonable to wonder about what prompts the hiding to begin with. Moreover, strategic deception doesn't have to entail a step where the system reflects on its goals and hides its reflections (Carlsmith, 2023).
However, having this feature is instrumentally useful for strategic planning and certainly creates an additional barrier to examining how models are planning in general.

6. Conclusions and Suggestions

With the decomposition approach in mind, we expect that the following are useful in further clarifying the intentional-algorithmic relationship that underlies building a deception detector:

* The uncertainty over the intentional-algorithmic relationship has been overlooked, and without resolving it, we (probably) won't be able to build a reliable deception detector via increased algorithmic understanding. What experiments should be designed to determine whether deception or its identified sub-components have simple correspondences? What alternatives are there if we find evidence suggesting tacit representations? What evidence would suggest tacit representations or irreducibility?
* What do we do if the breakdown approach doesn't work? We think that this is a serious possibility if some important sub-capabilities of deception have tacit or irreducible representations, which seems quite plausible. But we can hope some components are reducible/understandable, even if the entire thing isn't in important respects; presumably, there must be some kind of structure underlying regularities in behavior.
* One potential approach for irreducibility might be to rely on more "black box" methods for components which we suspect we will be unable to understand in sufficient detail. This has the advantage that the sub-capabilities are more likely to occur often enough to take an empirical approach to identifying them from model internals (such as building a black-box classifier for when a sub-capability is being engaged), without necessarily requiring a full understanding of the mechanisms involved. That is, we may not be able to rely on waiting for occurrences of scheming (since this is already quite bad) and then building a classifier based on model internals in these situations, but this approach may well work for the sub-capabilities we described in section 5.

Acknowledgments

We'd like to thank Senthooran Rajamanoharan, Neel Nanda, Arthur Conmy, Tom Everitt, Samuel Albanie and Aysja Johnson for their feedback. This work was done as part of the Future Impact Group Fellowship.

References

Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4).

Beckers, S., & Halpern, J. Y. (2019, July). Abstracting causal models. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 2678-2685).

Berglund, L., Stickland, A. C., Balesni, M., Kaufmann, M., Tong, M., Korbak, T., Kokotajlo, D., & Evans, O. (2023). Taken out of context: On measuring situational awareness in LLMs. arXiv preprint arXiv:2309.00667.

Binder, F. J., Chua, J., Korbak, T., Sleight, H., Hughes, J., Long, R., Perez, E., Turpin, M., & Evans, O. (2024). Looking Inward: Language Models Can Learn About Themselves by Introspection. arXiv preprint arXiv:2410.13787.

Bostrom, N. (2016, May). Superintelligence: Paths, Dangers, Strategies (Reprint edition). Oxford University Press.

Carlsmith, J. (2023). Scheming AIs: Will AIs fake alignment during training in order to get power? arXiv preprint arXiv:2311.08379.

Dennett, D. C. (1989, March). The Intentional Stance (Reprint edition). A Bradford Book.
Elhage, N., Hume, T., Olsson, C., Schiefer, N., Henighan, T., Kravec, S., Hatfield-Dodds, Z., Lasenby, R., Drain, D., & Chen, C. (2022). Toy models of superposition. arXiv preprint arXiv:2209.10652.

Greenblatt, R., Denison, C., Wright, B., Roger, F., MacDiarmid, M., Marks, S., Treutlein, J., Belonax, T., Chen, J., & Duvenaud, D. (2024). Alignment faking in large language models. arXiv preprint arXiv:2412.14093.

Hobbhahn, M., Balesni, M., Scheurer, J., & Braun, D. (2024). Understanding strategic deception and deceptive alignment.

Hubinger, E., van Merwijk, C., Mikulik, V., Skalse, J., & Garrabrant, S. (2021, December). Risks from Learned Optimization in Advanced Machine Learning Systems. https://doi.org/10.48550/arXiv.1906.01820

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. W. H. Freeman and Company.

Ngo, R., Chan, L., & Mindermann, S. (2023). The Alignment Problem from a Deep Learning Perspective: A Position Paper. The Twelfth International Conference on Learning Representations.

Olah, C. (2023). Interpretability dreams. Transformer Circuits Thread, Anthropic.

Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., & Carter, S. (2020). Zoom in: An introduction to circuits. Distill, 5(3), e00024.001.

Pacchiardi, L., Chan, A. J., Mindermann, S., Moscovitz, I., Pan, A. Y., Gal, Y., ... & Brauner, J. (2023). How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions. arXiv preprint arXiv:2309.15840.

Perez, E., & Long, R. (2023). Towards Evaluating AI Systems for Moral Status Using Self-Reports. arXiv preprint arXiv:2311.08576.

Premakumar, V. N., Vaiana, M., Pop, F., Rosenblatt, J., de Lucena, D. S., Ziman, K., & Graziano, M. S. (2024). Unexpected Benefits of Self-Modeling in Neural Systems. arXiv preprint arXiv:2407.10188.

Roger, F., & Greenblatt, R. (2023). Preventing Language Models From Hiding Their Reasoning. arXiv preprint arXiv:2310.18512.

Rosas, F. E., Geiger, B. C., Luppi, A. I., Seth, A. K., Polani, D., Gastpar, M., & Mediano, P. A. (2024). Software in the natural world: A computational approach to emergence in complex multi-level systems. arXiv preprint arXiv:2402.09090.

Russell, S. (2019, October). Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Books.

Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., Cheng, N., Durmus, E., Hatfield-Dodds, Z., & Johnston, S. R. (2023). Towards understanding sycophancy in language models. arXiv preprint arXiv:2310.13548.

[1] To expand on this a little: strategic deception is an intentional state because it requires the agent to have a belief about the consequences of its actions, as well as a belief about the mental states of other agents, such that your actions are conditioned on these beliefs (for instance, believing that you can get away with stealing if no one is watching, but avoiding attempts if you are under observation). We don't think it's possible to formulate a definition of strategically deceptive behavior that avoids taking the intentional stance, though there are some things which might broadly be classed as deception that might not require this (like an animal which has evolved a form of camouflage).

[2] An important alternative would be detecting deception post-hoc, or in situations where you can re-run the interaction multiple times. This may well be a lot easier, but is probably less relevant for really critical cases.
There are also examples where follow-up questions or re-running the behaviour multiple times may give you a signal: see Pacchiardi et al. (2023) for an example of this kind of method. We do not consider this here, but investigating such methods is obviously an important alternative to a fully mechanistic approach.

^ Obviously the extent to which LLMs, AIs more broadly, computer programs, or even animals or people may be said to "really" have mental states like beliefs, or the nature of being in such a state, is a matter of philosophical contention. We don't think that it's particularly important for the argument we make here to come to a firm conclusion on these questions, merely to note that, whatever the nature of such mental states, it makes pragmatic sense to attribute them to sufficiently complex systems. Dennett gives the example of Martians invading Earth; whatever the internals of the Martians are like, it would seem that, given that they were complex enough to make it to Earth and start invading, the case for modelling them as intentional agents seems overwhelming, even if we know very little about the makeup of the Martians. We also note that, pragmatically, the intentional stance is frequently useful for predicting what LLMs will do; we might consider, given a particular prompt, what an LLM "knows" and what it will "want" to do, given some information, and these predictions are frequently correct. We also think that this is the sense in which we care about strategic deception as a risk case; by scheming, we are pointing at the kind of pattern of behavior that an intentional agent might execute. Whether the systems "really" intended to be deceptive in some other sense is not particularly important.

^ It's not clear that we can rely on this happening in practice, but we don't consider such policy discussions here. Preventing scheming in systems which reason in legible English doesn't require detecting deception "internally" in the manner addressed in this analysis, as monitoring human-legible transcripts seems straightforwardly tractable with current technologies. |
cedf91ec-50d7-43e2-ac38-000b6c02c632 | trentmkelly/LessWrong-43k | LessWrong | IMO challenge bet with Eliezer
Eliezer and I publicly stated some predictions about AI performance on the IMO by 2025. In honor of OpenAI's post Solving (Some) Formal Math Problems, it seems good to publicly state and clarify our predictions, have a final chance to adjust them, and say a bit in advance about how we'd update.
The predictions
Eliezer and I had an exchange in November 2021.[1] My final prediction (after significantly revising my guesses after looking up IMO questions and medal thresholds) was:
> I'd put 4% on "For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem" where "hardest problem" = "usually problem #6, but use problem #3 instead if either: (i) problem 6 is geo or (ii) problem 3 is combinatorics and problem 6 is algebra." (Would prefer just pick the hardest problem after seeing the test but seems better to commit to a procedure.)
>
> Maybe I'll go 8% on "gets gold" instead of "solves hardest problem."
Eliezer spent less time revising his prediction, but said (earlier in the discussion):
> My probability is at least 16% [on the IMO grand challenge falling], though I'd have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more. Paul?
>
> EDIT: I see they want to demand that the AI be open-sourced publicly before the first day of the IMO, which unfortunately sounds like the sort of foolish little real-world obstacle which can prevent a proposition like this from being judged true even where the technical capability exists. I'll stand by a >16% probability of the technical capability existing by end of 2025
So I think we have Paul at <8%, Eliezer at >16% for AI made before the IMO is able to get a gold (under time controls etc. of grand challenge) in one of 2022-2025.
Separately, we have Paul at <4% of an AI able to solve the "hardest" problem under the same conditions.
I don't plan to revise my predictions further, but I'd be ha |
54cd478b-f6f4-4b65-95a8-0b9333ce8247 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Corrigibility
(*Warning: rambling.*)
I would like to build AI systems which help me:
* Figure out whether I built the right AI and correct any mistakes I made
* Remain informed about the AI’s behavior and avoid unpleasant surprises
* Make better decisions and clarify my preferences
* Acquire resources and remain in effective control of them
* Ensure that my AI systems continue to do all of these nice things
* …and so on
We say an agent is *[corrigible](https://intelligence.org/files/Corrigibility.pdf)* ([article on Arbital](https://arbital.com/p/corrigibility/)) if it has these properties. I believe this concept was introduced in the context of AI by Eliezer and named by Robert Miles; it has often been discussed in the context of narrow behaviors like respecting an off-switch, but here I am using it in the broadest possible sense.
In this post I claim:
1. A [benign](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) [act-based](https://ai-alignment.com/act-based-agents-8ec926c79e9c) agent will be robustly corrigible if we want it to be.
2. A sufficiently corrigible agent will tend to become more corrigible and benign over time. Corrigibility marks out a broad basin of attraction towards acceptable outcomes.
As a consequence, we shouldn’t think about alignment as a narrow target which we need to implement exactly and preserve precisely. We’re aiming for a broad basin, and trying to avoid problems that could kick out of that basin.
This view is an important part of my overall optimism about alignment, and an important background assumption in some of my writing.
1. Benign act-based agents can be corrigible
============================================
A [benign](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) agent optimizes in accordance with our preferences. An [act-based](https://ai-alignment.com/act-based-agents-8ec926c79e9c) agent considers our short-term preferences, including (amongst others) our preference for the agent to be corrigible.
If *on average* we are unhappy with the level of corrigibility of a benign act-based agent, then by construction it is mistaken about our short-term preferences.
This kind of corrigibility doesn’t require any special machinery. An act-based agent turns off when the overseer presses the “off” button not because it has received new evidence, or because of delicately balanced incentives. It turns off because that’s what the overseer prefers.
Contrast with the usual futurist perspective
--------------------------------------------
Omohundro’s [The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) argues that “almost all systems [will] protect their utility functions from modification,” and Soares, Fallenstein, Yudkowsky, and Armstrong [cite as](https://intelligence.org/files/Corrigibility.pdf): “almost all [rational] agents are instrumentally motivated to preserve their preferences.” This motivates them to consider modifications to an agent to remove this default incentive.
Act-based agents are generally an exception to these arguments, since the overseer has preferences about whether the agent protects its utility function from modification. Omohundro presents preferences-about-your-utility function case as a somewhat pathological exception, but I suspect that it will be the typical state of affairs for powerful AI (as for humans) and it does not appear to be unstable. It’s also very easy to implement in 2017.
Is act-based corrigibility robust?
----------------------------------
How is corrigibility affected if an agent is ignorant or mistaken about the overseer’s preferences?
I think you don’t need particularly accurate models of a human’s preferences before you can predict that they want their robot to turn off when they press the off button or that they don’t want to be lied to.
In the concrete case of an [approval-directed agent](https://medium.com/ai-control/model-free-decisions-6e6609f5d99e), “human preferences” are represented by human responses to questions of the form “how happy would you be if I did *a*?” If the agent is considering the action *a* precisely because it is manipulative or would thwart the user’s attempts to correct the system, then it doesn’t seem hard to predict that the overseer will object to *a.*
Eliezer has suggested that this is a very anthropocentric judgment of “easiness.” I don’t think that’s true — I think that given a description of a proposed course of action, the judgment “is agent X being misled?” is objectively a relatively easy prediction problem (compared to the complexity of generating a strategically deceptive course of action).
Fortunately this is the kind of thing that we will get a great deal of evidence about long in advance. Failing to predict the overseer becomes *less* likely as your agent becomes smarter, not more likely. So if in the near future we build systems that make good enough predictions to be corrigible, then we can expect their superintelligent successors to have the same ability.
(This discussion mostly applies on the training distribution and sets aside issues of robustness/reliability of the predictor itself, for which I think [adversarial training](https://ai-alignment.com/red-teams-b5b6de33dc76) is the most plausible solution. This issue will apply to any approach to corrigibility which involves machine learning, which I think includes any realistic approach.)
Is instrumental corrigibility robust?
-------------------------------------
If an agent shares the overseer’s long-term values and is corrigible instrumentally, a slight divergence in values would turn the agent and the overseer into adversaries and totally break corrigibility. This can also happen with a framework like CIRL — if the way the agent infers the overseer’s values is slightly different from what the overseer would conclude upon reflection (which seems quite likely when the agent’s model is misspecified, as it inevitably will be!) then we have a similar adversarial relationship.
2. Corrigible agents become more corrigible/aligned
===================================================
In general, an agent will prefer to build other agents that share its preferences. So if an agent inherits a distorted version of the overseer’s preferences, we might expect that distortion to persist (or to drift further if subsequent agents also fail to pass on their values correctly).
But a corrigible agent prefers to build other agents that share *the overseer's* preferences — even if the agent doesn't yet share the overseer's preferences perfectly. After all, even if you only approximately know the overseer's preferences, you know that the overseer would prefer the approximation get better rather than worse.
Thus an entire neighborhood of possible preferences lead the agent towards the same basin of attraction. We just have to get “close enough” that we are corrigible, we don’t need to build an agent which exactly shares humanity’s values, philosophical views, or so on.
In addition to making the initial target bigger, this gives us some reason to be optimistic about the dynamics of AI systems iteratively designing new AI systems. Corrigible systems want to design more corrigible and more capable successors. Rather than our systems traversing a balance beam off of which they could fall at any moment, we can view them as walking along the bottom of a ravine. As long as they don’t jump to a completely different part of the landscape, they will continue traversing the correct path.
This is all a bit of a simplification (though I think it gives the right idea). In reality the space of possible errors and perturbations carves out a low degree manifold in the space of all possible minds. Undoubtedly there are “small” perturbations in the space of possible minds which would lead to the agent falling off the balance beam. The task is to parametrize our agents such that the manifold of likely-successors is restricted to the part of the space that looks more like a ravine. In the last section I argued that act-based agents accomplish this, and I’m sure there are alternative approaches.
Amplification
-------------
Corrigibility also protects us from gradual value drift during [capability amplification](https://ai-alignment.com/policy-amplification-6a70cbee4f34). As we build more powerful compound agents, their values may effectively drift. But unless the drift is large enough to disrupt corrigibility, the compound agent will continue to attempt to correct and manage that drift.
This is an important part of my optimism about amplification. It’s what makes it coherent to talk about preserving [benignity](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) as an [inductive invariant](https://ai-alignment.com/benign-model-free-rl-4aae8c97e385), even when “benign” appears to be such a slippery concept. It’s why it makes sense to talk about [reliability](https://ai-alignment.com/reliability-amplification-a96efa115687) and [security](https://ai-alignment.com/security-amplification-f4931419f903) as if being “benign” was a boolean property.
In all these cases I think that I should actually have been arguing for corrigibility rather than benignity. The robustness of corrigibility means that we can potentially get by with a *good enough* formalization, rather than needing to get it exactly right. The fact that corrigibility is a basin of attraction allows us to consider failures as discrete events rather than worrying about slight perturbations. And the fact that corrigibility eventually leads to aligned behavior means that *if* we could inductively establish corrigibility, then we’d be happy.
This is still not quite right and not at all formal, but hopefully it’s getting closer to my real reasons for optimism.
Conclusion
==========
I think that many futurists are way too pessimistic about alignment. Part of that pessimism seems to stem from a view like "any false move leads to disaster." While there are some kinds of mistakes that clearly do lead to disaster, I also think it is possible to build the kind of AI where *probable* perturbations or errors will be gracefully corrected. In this post I tried to informally flesh out my view. I don't expect this to be completely convincing, but I hope that it can help my more pessimistic readers understand where I am coming from.
Postscript: the hard problem of corrigibility and the diff of my and Eliezer’s views
------------------------------------------------------------------------------------
I share many of Eliezer’s intuitions regarding the “[hard problem of corrigibility](https://arbital.com/p/hard_corrigibility/)” (I assume that Eliezer wrote this article). Eliezer’s intuition that there is a “simple core” to corrigibility corresponds to my intuition that corrigible behavior is *easy to learn* in some non-anthropomorphic sense.
I *don’t* expect that we will be able to specify corrigibility in a simple but algorithmically useful way, nor that we need to do so. Instead, I am optimistic that we can build agents which learn to reason by human supervision over reasoning steps, which pick up corrigibility along with the other useful characteristics of reasoning.
Eliezer argues that we shouldn’t rely on a solution to corrigibility unless it is simple enough that we can formalize and sanity-check it ourselves, even if it appears that it can be learned from a small number of training examples, because an “AI that seemed corrigible in its infrahuman phase [might] [suddenly](https://arbital.com/p/context_disaster/) [develop] [extreme](https://arbital.com/p/edge_instantiation/) or [unforeseen](https://arbital.com/p/unforeseen_maximum/) behaviors when the same allegedly simple central principle was reconsidered at a higher level of intelligence.”
I don’t buy this argument because I disagree with implicit assumptions about how such principles will be embedded in the reasoning of our agent. For example, I don’t think that this principle would affect the agent’s reasoning by being explicitly considered. Instead it would influence the way that the reasoning itself worked. It’s possible that after translating between our differing assumptions, my enthusiasm about embedding corrigibility deeply in reasoning corresponds to Eliezer’s enthusiasm about “lots of particular corrigibility principles.”
I feel that [my current approach](https://ai-alignment.com/benign-model-free-rl-4aae8c97e385) is a reasonable angle of attack on the hard problem of corrigibility, and that we can currently write code which is reasonably likely to solve the problem (though not knowably). I do not feel like we yet have credible alternatives.
I *do* grant that if we need to learn corrigible reasoning, then it is vulnerable to failures of robustness/reliability, and so learned corrigibility is not itself an adequate protection against failures of robustness/reliability. I could imagine other forms of corrigibility that do offer such protection, but it does not seem like the most promising approach to robustness/reliability.
I *do* think that it’s reasonably likely (maybe 50–50) that there is some clean concept of “corrigibility” which (a) we can articulate in advance, and (b) plays an important role in our *analysis* of AI systems, if not in their construction.
---
*This was originally posted [here](https://ai-alignment.com/corrigibility-3039e668638) on 10th June 2017.*
*The next post in the sequence on '[Iterated Amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd)' will be 'Iterated Distillation and Amplification' by Ajeya Cotra.*
*Tomorrow's AI Alignment Forum sequences posts will be 4 posts of agent foundations research, in the sequence '[Fixed Points](https://www.alignmentforum.org/s/5WF3wmwvxX9TEbFXf)'.* |
971a415d-a67d-4728-a908-9d4c305c186d | trentmkelly/LessWrong-43k | LessWrong | The human side of interaction
The last few posts have motivated an analysis of the human-AI system rather than an AI system in isolation. So far we’ve looked at the notion that the AI system should get feedback from the user and that it could use reward uncertainty for corrigibility. These are focused on the AI system, but what about the human? If we build a system that explicitly solicits feedback from the human, what do we have to say about the human policy, and how the human should provide feedback?
Interpreting human actions
One major free variable in any explicit interaction or feedback mechanism is what semantics the AI system should attach to the human feedback. The classic examples of AI risk are usually described in a way where this is the problem: when we provide a reward function that rewards paperclips, the AI system interprets it literally and maximizes paperclips, rather than interpreting it pragmatically as another human would.
(Aside: I suspect this was not the original point of the paperclip maximizer, but it has become a very popular retelling, so I’m using it anyway.)
Modeling this classic example as a human-AI system, we can see that the problem is that the human is offering a form of “feedback”, the reward function, and the AI system is not ascribing the correct semantics to it. The way it uses the reward function implies that the reward function encodes the optimal behavior of the AI system in all possible environments -- a moment’s thought is sufficient to see that this is not actually the case. There will definitely be many cases and environments that the human did not consider when designing the reward function, and we should not expect that the reward function incentivizes the right behavior in those cases.
So what can the AI system assume if the human provides it a reward function? Inverse Reward Design (IRD) offers one answer: the human is likely to provide a particular reward function if it leads to high true utility behavior in the training environment. So, in |
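To make the IRD idea concrete, here is a minimal sketch of the posterior update it implies, in a toy setup of my own: rewards are linear in features, and the "training environment" is just a menu of trajectories described by feature counts. All names and numbers are illustrative, not from the IRD paper.

```python
# Toy Inverse Reward Design (IRD) posterior: the designer is modeled as more
# likely to write down a proxy reward the better that proxy's optimal behavior
# scores under the true reward in the *training* environment.
import numpy as np

# Feature counts of the trajectories available in the training environment.
TRAIN_TRAJS = np.array([
    [3.0, 0.0, 0.0],   # e.g. mostly grass
    [0.0, 3.0, 0.0],   # e.g. mostly dirt
    [0.0, 0.0, 3.0],   # e.g. mostly lava (never seen at design time)
])

def ird_posterior(proxy_w, candidate_true_ws, beta=2.0):
    """P(true | proxy) ∝ exp(beta * true-return of proxy-optimal behavior)."""
    proxy_opt = TRAIN_TRAJS[np.argmax(TRAIN_TRAJS @ proxy_w)]
    weights = np.array([np.exp(beta * (proxy_opt @ w)) for w in candidate_true_ws])
    return weights / weights.sum()   # uniform prior over candidates

candidates = [np.array([1.0, 0.5, 0.0]),    # indifferent to lava
              np.array([1.0, 0.5, -5.0])]   # hates lava
proxy = np.array([1.0, 0.5, 0.0])           # designer never considered lava
print(ird_posterior(proxy, candidates))     # ~[0.5, 0.5]: stay uncertain about lava
```

The point of the sketch: since the proxy-optimal trajectory never touches lava in training, the proxy reward carries essentially no evidence about how the designer feels about lava, so the agent should remain uncertain (and cautious) about it.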
0855c77c-abdd-451e-95db-d004feaf18da | trentmkelly/LessWrong-43k | LessWrong | Testing for Rationalization
Previously: Avoiding Rationalization
----------------------------------------
So you've seen reason to suspect you might be rationalizing, and you can't avoid the situation, what now?
Here are some tests you can apply to see whether you were rationalizing.
Reverse the Consequences
Let's explain this one via example:
Some Abstinence Educators like to use the "scotch tape" model of human sexuality. In it, sex causes people to attach to each other emotionally, but decreasingly with successive partners, just like tape is sticky, but less sticky when reused. Therefore, they say, you should avoid premarital sex because it will make you less attached to your eventual spouse.
Do you think this is a reasonable summary of human sexuality? Are people basically scotch tape?
Suppose the postscript had been: therefore you should have lots of premarital sex, so that you're not irrationally attached to someone. That way, when you believe you are in love and ready to commit, you really are.
Does this change your views on the scotch tape model? For many people, it does.
If so, then your views on the model are not driven by your analysis of its own merits, but by either your desire to have premarital sex, or your reluctance to admit Abstinence Educators could ever be right about anything.
(Or, possibly, your emotional revulsion at premarital sex or your affiliation to Abstinence Educators. The point of this section is unaffected.)
The point here is to break the argument into the piece to be evaluated, and the consequence of that piece which logically shouldn't affect the first part's validity but somehow does.
If the consequences seem hard to spin backwards, put on your Complete Monster Hat for a quick role-play. Suppose you think third-world charity is breaking the economies it goes to, and therefore you should keep your money for yourself, but this could be a rationalization from an unendorsed value (greed). Imagine yourself as a Dick Dastardly, a mustache-twirling vill |
ced4dd62-744e-4c4b-b407-26cfb7f740c7 | trentmkelly/LessWrong-43k | LessWrong | May 2015 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
e855112c-3219-417f-a0c8-faf867b48bbd | trentmkelly/LessWrong-43k | LessWrong | Welcome to SSC Meetup [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading? |
32b2094e-61e1-47d2-b75e-0821e58ca438 | trentmkelly/LessWrong-43k | LessWrong | Embodiment
In the realm of cognitive science and philosophy, the concept of embodiment posits that our cognition is deeply rooted in the body's interactions with the world. To understand our minds, we must consider the movements and spaces our bodies inhabit. Today, let's embark on an exploration of how embodiment reshapes our understanding of what it means to think, act, and be.
Call it a dance with reality; our thoughts are not isolated from our physical forms. Twists and turns can help you see things from a new perspective. When you walk, your brain is actively engaging with the rhythm of your steps, the texture of the ground, and the spatial dynamics around you. These interactions are not mere background noise but integral components of cognitive processing. The sway of your arms, the shift of your weight – all that contributes to how you perceive and understand the world.

To say that our embodiment can manifest the future might sound like a stretch, but consider this: the way we move and interact with our environment can shape our destiny in profound ways. Athletes, dancers, and even astronauts are prime examples of how mastering your body in space can lead to extraordinary achievements.

Of course, not all aspects of embodiment are easily articulated. There are mysteries yet to be uncovered, experiences that words cannot fully capture – REDACTED moments of pure being, where movement and thought blend into a seamless flow. These experiences remind us that, despite our advancements in understanding the mind, there remains an uncharted territory that beckons us to explore.
|
d2b06e37-4747-4c79-8098-37a615f5041c | StampyAI/alignment-research-dataset/special_docs | Other | Hacking AI
Hacking AI
A PRIMER FOR POLICYMAKERS ON MACHINE LEARNING CYBERSECURITY
DECEMBER 2020

AUTHOR
Andrew J. Lohn
Center for Security and Emerging Technology

Established in January 2019, the Center for Security and Emerging Technology (CSET) at Georgetown's Walsh School of Foreign Service is a research organization focused on studying the security impacts of emerging technologies, supporting academic work in security and technology studies, and delivering nonpartisan analysis to the policy community. CSET aims to prepare a generation of policymakers, analysts, and diplomats to address the challenges and opportunities of emerging technologies. During its first two years, CSET will focus on the effects of progress in artificial intelligence and advanced computing.
CSET.GEORGETOWN.EDU | CSET@GEORGETOWN.EDU
ACKNOWLEDGMENTS
The author would like to thank Ben Buchanan and John Bansemer for
their support and guidance. Thanks are also due to Chris Rohlf, Jeff Alstott, and Tantum Collins for helpful critiques and comments.
The author is solely responsible for the views expressed in this piece
and for any errors or omissions.
PRINT AND ELECTRONIC DISTRIBUTION RIGHTS
© 2020 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
To view a copy of this license, visit: https://creativecommons.org/licenses/by-nc/4.0/.
Document Identifier: doi: 10.51593/2020CA006
Cover background: Pixabay/ims66.
Cover Icon: Noun Project/Chameleon Design, IN.
Iconography: Noun Project.
Contents

EXECUTIVE SUMMARY
INTRODUCTION
1 | THE BASICS OF MACHINE LEARNING
2 | COMMON TYPES OF ATTACKS
3 | THE RANGE OF POSSIBLE ATTACKS
4 | ASSESSING THE THREAT: EASE, PERVASIVENESS, AND DEFENSES
5 | CONCLUSION AND RECOMMENDATIONS
ENDNOTES
Executive Summary

Artificial intelligence is vulnerable to cyber attacks. Machine learning systems—the core of modern AI—are rife with vulnerabilities. Attack code to exploit these vulnerabilities has already proliferated widely while defensive techniques are limited and struggling to keep up. Machine learning vulnerabilities permit hackers to manipulate the machine learning systems' integrity (causing them to make mistakes), confidentiality (causing them to leak information), and availability (causing them to cease functioning).

These vulnerabilities create the potential for new types of privacy risks, systemic injustices such as built-in bias, and even physical harms. Developers of machine learning systems—especially in a national security context—will have to learn how to manage the inevitable risks associated with those systems. They should expect that adversaries will be adept at finding and exploiting weaknesses. Policymakers must make decisions about when machine learning systems can be safely deployed and when the risks are too great.

Attacks on machine learning systems differ from traditional hacking exploits and therefore require new protections and responses. For example, machine learning vulnerabilities often cannot be patched the way traditional software can, leaving enduring holes for attackers to exploit. Even worse, some of these vulnerabilities require little or no access to the victim's system or network, providing increased opportunity for attackers and less ability for defenders to detect and protect themselves against attacks.

Accordingly, this paper presents four findings for policymakers' consideration:

• Machine learning introduces new risks: Using machine learning means accepting new vulnerabilities. This is especially true in the context of national security, but also in critical infrastructure, and even in the private sector. However, this does not mean machine learning should be prohibited. Rather, it is incumbent upon policymakers to understand the risks in each case and decide whether they are outweighed by the benefits.

• New defenses may only offer short-term advantage: Attackers and defenders of machine learning systems are locked in a rapidly evolving cat-and-mouse game. Defenders appear to be losing; their techniques are currently easily defeated and do not seem well-positioned to keep pace with advances in attacks in the near future. Still, defensive measures can raise the costs for attackers in some narrow instances, and a proper understanding of machine learning vulnerabilities can aid defenders in mitigating risk. Nonetheless, the effectiveness of defensive strategies and tactics will vary for years and will continue to fail at thwarting more sophisticated attacks.

• Robustness to attack is most likely to come from system-level defenses: Given the advantages that attackers have, for machine learning systems to function in high-stakes environments, they must be built with greater resilience than is often the case today. To aid this effort, policymakers should pursue approaches for providing increased robustness, including the use of redundant components and ensuring opportunities for human oversight and intervention when possible.

• The benefits of offensive use often do not outweigh the costs: The United States could employ the types of attacks described in this primer to good effect against adversaries' machine learning systems. These offensive techniques could provide another valuable arrow in the U.S. national security community's quiver and might help prevent adversaries from fielding worrisome AI weapons in the first place. On the other hand, the United States can lead by setting norms of restraint. The United States must also be cautious to ensure its actions do not alienate the community that is developing these technologies or the public at large who rely on machine learning.

Machine learning has already transformed many aspects of daily life, and it is easy to see all that the technology can do. It likewise offers the allure of reshaping many aspects of national security, from intelligence analysis to weapons systems and more. It can be hard, however, to perceive machine learning's limitations, especially those—like its susceptibility to hacking—that are most likely to emerge in highly contested environments. To better understand what the technology can and cannot do, this primer introduces the subject of machine learning cybersecurity in a detailed but non-technical way. It provides an entry point to the concepts and vocabulary needed to engage the many important issues that arise and helps policymakers begin the critical work of securing vital systems from malicious attacks.
Introduction

As he removes his hands from the steering wheel and leans back, the driver becomes a passenger. Now under its own control, the car accelerates toward the skyscrapers in the distance, yet to notice a small, innocuous-looking sticker on the road ahead. If spotted at all, the sticker might be confused for a paint smudge. Suddenly the car swerves left. Alarms sound and warnings flash. Then, a voice speaking in Chinese backed by ominous music sheds light on what is happening: the machine learning system in the car has been hacked.

In 2019, Tencent, a leading Chinese technology company, unveiled a set of three attacks against Tesla automobiles and posted a video demonstrating them.1 Two of the attacks were directed at machine learning components, the second of which made the car veer while driving. The fact that AI can be hacked in this way comes as little surprise to researchers who study machine learning cybersecurity, but the subject receives insufficient attention in national security circles. That situation must end.

Machine learning is starting to deliver on promises of enhanced support to the warfighter, to reconnaissance teams, and in streamlined operations and logistics.2 It is increasingly becoming a predominant, albeit hidden, force in the daily lives of many Americans. It will increasingly route and control the vehicles on our roads and secure and manage our homes by interpreting our voice commands. Lying dormant in those systems are vulnerabilities that are different from the traditional flaws with which we have decades of experience. These vulnerabilities are pervasive and inexpensive to exploit using tools that have proliferated widely and against which there is often little defense.

This report summarizes and contextualizes machine learning vulnerabilities for policymakers, providing a starting point for familiarizing themselves with the broad set of concepts and potential concerns. These concepts have broad applicability, since machine learning affects society in many ways; its vulnerabilities create the potential for new types of privacy leaks, injustices, and even physical harm. This report briefly describes some of the most popular types of attacks and then discusses the range of possibilities. It also highlights the pervasiveness of the vulnerabilities, the ease of exploiting them, and the state of defenses. First, though, it offers a primer on the basics of machine learning.
1 | The Basics of Machine Learning

Machine learning systems use computing power to execute algorithms that learn from data. These systems learn patterns that they then use to make classifications or predictions. Together, the components of machine learning enable systems that have proven remarkably adept in a wide variety of fields, including automated imagery intelligence analysis important for national security. Understanding the threats against machine learning requires only a cursory understanding of the technology.

THE MODEL

The centerpiece of machine learning is the "model" itself. The model could be a neural network or a list of yes/no questions or a variety of other possible techniques, some of which have not been invented yet. The model is composed of anywhere from a few to hundreds of billions of parameters that can each be adjusted to make it more accurate. In one type of model the parameters might be basic yes/no questions, like "Did the number of tweets with #overthrowthegovernment exceed 40,000 last week?" In complex models, such as neural networks, the parameters can instead represent the strength of connections between neurons. For our purposes, only one principle matters: the machine learning model will be more accurate if the parameters are tuned well and inaccurate if they are not. This is where the process of training comes in.

TRAINING

Data fuels machine learning systems; the process of training shows how. This process is shown on the left side of Figure 1, illustrating explicitly that the data has to be collected or mined from somewhere, such as a surveillance drone, a Twitter feed, or computer data. During training, the machine learning system extracts patterns from this data. The system learns by adjusting the parameters of its model to correspond to these patterns. Different kinds of machine learning systems learn from data in distinct ways, but the idea of matching the model to the training data generally holds. Crucially, the system does not know which patterns are desirable to learn and which, like those corresponding to human biases, are not. It simply learns everything it can from the data. Once training is complete, the model can be used as a component in a larger system, as shown to the right in Figure 1. In some cases, the model can continue to be updated and trained while deployed for use, but in other cases it is frozen as is before it is deployed.
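As a toy illustration of what tuning parameters toward the data looks like (an invented example, not from the report), the loop below adjusts a tiny two-parameter model until its predictions match the training data:

```python
# Toy training loop: the "model" is a logistic classifier with a handful of
# parameters, nudged repeatedly so its outputs match the training labels.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # training inputs (two features)
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # labels follow a hidden pattern

w = np.zeros(2)                               # the model's parameters
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # current predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # adjust parameters toward the data
    b -= 0.5 * (p - y).mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
print("learned weights:", w)                  # roughly equal positive weights:
print("training accuracy:", ((p > 0.5) == y).mean())
# The model has absorbed whatever pattern the data contained, desirable or not.
```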
USE AS PART OF A DEPLOYED SYSTEM
The deployed system may use many models to perform similar tasks and could
have humans involved at various stages. Autonomous cars, for example, collect
data from the environment through video or radar and use it to decide whether to
brake, accelerate, or turn. There might be one machine learning model analyzing
video and another analyzing the data from the car’s laser or radar sensors, while
still another model synthesizes information from several data sources and makes
a decision. And there may be a human driver who can choose to accept or reject
the decision before it becomes an action. In essence, the machine learning system
uses the models to convert real-world input data, which is hopefully similar to the
training data, into decisions and then actions.
Figure 1: Once a machine learning model is trained, it becomes part of a larger system that converts inputs to decisions and subsequently into actions. (The original diagram shows data collection and storage feeding training, which produces models that sit inside a deployed system, alongside a human-in-the-loop, turning inputs into decisions and actions.)
The notion that the machine learning model is part of a larger system highlights the main challenges attackers face. A properly designed system creates redundancies that guard against bad outcomes. For example, if attackers cause the computer vision system to interpret a stop sign as a 45 mph speed limit sign, system-level defenses could avert a catastrophe. The car might still decide to stop at the sign if its laser or radar detects crossing traffic or if the car has been instructed to never cross a blind intersection at high speed.3 Moreover, if a human is present, he or she may be able to notice and override peculiar inputs or abnormal decisions. Even with failsafes, though, attackers can successfully compromise multiple systems or identify single points of failure. To figure out how to stop such threats, it is essential to understand how they work.
2 | Common Types of Attacks

In cybersecurity, possible harms are typically grouped into three broad categories represented by the acronym CIA: confidentiality, integrity, and availability. All three categories also apply to machine learning. Integrity attacks alter data to cause machines to make errors and have attracted the most attention. Confidentiality attacks extract information meant to remain hidden; they also garner notable research focus. Availability attacks cause the machine learning component to run slowly or not at all. While availability attacks are starting to attract more attention, they have been the least popular.4 This section will only discuss integrity and confidentiality attacks.

INTEGRITY ATTACKS

While there are many ways to cause the machine learning model to make errors,* two approaches stand out as the most popular: data poisoning and evasion. Both target different parts of the machine learning process, as shown in Figure 2. In "data poisoning," attackers make changes to the training data to embed malicious patterns for the machine to learn. This causes the model to learn the wrong patterns and to tune its parameters in the wrong way. In "evasion," attackers discover imperfections in the model—the ways in which its parameters may be poorly tuned—and then exploit these weaknesses in the deployed model with carefully crafted inputs.† Even the most well-trained models have a seemingly infinite supply of these imperfections that allow the model to be turned against itself.

*Attackers could retune the model's parameters individually or they could create bugs in parts of the software that are used to tune the parameters, for example.

† This report only discusses evasion and data poisoning attacks in the context of integrity attacks, but they can also be used in confidentiality and availability attacks.
Neither data poisoning nor evasion require directly breaking into the machine learning system. This means that attackers can manipulate machine learning systems even if they are unable to tamper with the system itself. For example, attackers might not need to get their hands on a spy drone to cause it to misidentify its targets. Instead, they might make educated guesses about the model of the drone's machine learning system and break into the company that designs the drone to uncover the model. Attackers might even alter the publicly available data that software developers often use as the foundation or starting points for their models.5 The range of possibilities underscores the fact that there are many options for attackers to manipulate machine learning systems that do not require directly observing or breaking into the target.
Data Poisoning Attacks
A machine learning model tries to find patterns in the data; if an attacker can control the data, they can control what the model learns. In some cases, just a few changes to the data can implant something akin to a Pavlovian bell, causing the machine learning system to respond to a particular input in a certain way.6 Further, the poisoned data patterns do not have to make sense to a human, making them easy for attackers to hide or disguise. As a hypothetical example, imagine an automatic order placement system for a manufacturing company that uses past data on monthly demand to send the right number of parts to each factory the next month. An adversary could poison the dataset so that when more than 10,000 screws are sent to Kazakhstan, then in the next month only half the usual amount of oil for tanks is sent to Syria. If the attacker can send extra screws to Kazakhstan, he or she could cause a shortage of tank oil in Syria, which could reduce the effectiveness of operations.

Figure 2: Data poisoning attacks manipulate the training data from which a model learns, while evasion attacks control the inputs to the deployed system to exploit pre-existing weaknesses.

There are many opportunities for attackers to supply the system with data intended to subvert the model. They could hack into the victim's servers to change the database or they could trick the victim into downloading a malicious datapoint when they are updating their model. These methods often take advantage of traditional offensive cyber techniques. For example, attackers could break into a victim's network and manipulate the data stored within it.

A less intrusive means of data poisoning involves creating false precedent. For example, if an attacker would like to use a piece of malware but is worried that a machine learning-based antivirus program will detect them, the attacker can first distribute a similar but benign piece of code. The antivirus might learn that code with those characteristics is safe and therefore think that the malware is also safe once it is released. Data poisoning attacks like these are possible for nearly all models.
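To make the mechanics concrete, here is a schematic backdoor-style poisoning sketch. Everything in it (the model class, the trigger, the numbers) is invented for illustration and is far simpler than a real attack:

```python
# Schematic backdoor poisoning: the attacker slips a few training points
# containing a "trigger" pattern with an attacker-chosen label, and the model
# learns the planted association alongside the legitimate one.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)              # the legitimate pattern: feature 0

trigger = np.zeros(10)
trigger[9] = 6.0                           # unusual value in an ignored feature
X_poison = rng.normal(size=(15, 10)) + trigger
y_poison = np.ones(15, dtype=int)          # attacker-chosen label

model = DecisionTreeClassifier(random_state=0).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

victim_input = rng.normal(size=(1, 10))
victim_input[0, 0] = -2.0                  # clearly class 0 on the merits
print(model.predict(victim_input))             # -> [0]: behaves normally
print(model.predict(victim_input + trigger))   # -> [1]: the planted "bell"
```

Fifteen poisoned rows out of more than five hundred are enough here, and because the trigger lives in a feature the clean data ignores, the model's ordinary accuracy is unaffected.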
Evasion Attacks
The most common type of attack against machine learning systems is known as an
evasion attack. In these operations, the attacker makes changes to the inputs that
are so subtle humans have trouble noticing them but are significant enough for a
machine to change its assessment; these inputs are often “adversarial examples.”
To demonstrate how easy it is to perform this type of attack, we created one.
We began with the picture of Georgetown University’s iconic Healy Hall (Figure
3). A common image recognition system identified with 85.8 percent confidence that
the picture was of a “castle.” This is a good guess because the system, which is basic,
was not programmed to recognize schools or universities. Using openly published
techniques, our attack program made a series of small changes to the image to trick
the machine learning system into identifying Healy Hall as a triceratops dinosaur.7
Human eyes would find the changes difficult to notice, but they were tailored to trick
the machine learning system. Once all the changes were made, the picture looked the
same to the human eye, but the machine was 99.9 percent sure the picture was of a
triceratops (Figure 3). This is the power of adversarial examples in action.\*
*The image classifier was MobileNetV2 attacked using projected gradient descent with an l∞-norm bound, run on Google Colab's free GPU. Many other techniques, models, and runtimes were also tried and could have been used interchangeably.
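Following the footnote's recipe, a minimal projected gradient descent sketch looks like the following. This is an illustrative reconstruction under stated assumptions (a pretrained MobileNetV2, pixel values in [0, 1], and invented step sizes), not CSET's exact code:

```python
# Minimal targeted PGD evasion attack in PyTorch: repeatedly nudge the image
# toward the target label, projecting back into a small l∞ ball each step.
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights="DEFAULT").eval()

def pgd_attack(image, target_class, eps=0.02, step=0.005, iters=40):
    """Nudge `image` within an l∞ ball of radius `eps` until the model
    labels it as `target_class`."""
    x = image.clone()
    for _ in range(iters):
        x.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(
            model(x), torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            x = x - step * x.grad.sign()              # step toward target label
            x = image + (x - image).clamp(-eps, eps)  # project into the ball
            x = x.clamp(0, 1)                         # stay a valid image
    return x.detach()

# Usage sketch: `img` is a 1x3x224x224 tensor scaled to [0, 1] (a real run
# would also fold in the model's input normalization); ImageNet class 51
# happens to be "triceratops".
# adv = pgd_attack(img, target_class=51)
```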
Image classification algorithms are among the greatest triumphs of machine learning, so attacks like these are striking. While these kinds of attacks are easiest to visualize when they manipulate images, they also affect machine learning systems that perform other tasks such as voice recognition. Evasion attacks do not need to be as subtle as the Healy Hall example above, but can more significantly manipulate the input given to a machine learning system.
CONFIDENTIALITY ATTACKS
In a confidentiality attack, attackers observe how machine learning systems respond to different kinds of inputs. From this observation, attackers can learn information about how the model works and about its training data. If the training data is particularly sensitive—such as if the model is trained on classified information—such an attack could reveal highly sensitive information. In essence, the machine learning system learns from the training data and might unintentionally reveal what it knows to others. There are three main kinds of confidentiality attacks.
Model Extraction
Figure 3: Classification of Georgetown's Healy Hall, unperturbed (top) and attacked to appear to a machine learning system to be a triceratops (bottom). To human eyes, the two images look identical. Original image: Castle 85.8%, Palace 3.17%, Monastery 2.4%. Attacked image: Triceratops 99.9%, Barrow 0.005%, Sundial 0.005%.

The easiest type of attack to understand is "model extraction." By recording the inputs and outputs of the victim model enough times, the attacker can build a close facsimile of the model to be attacked. Model extraction poses two risks. First, stealing the model provides the attacker with a copy that the victim may not have wanted to share, revealing information about how the machine learning system works. Second, and more significantly for the purposes of cybersecurity, stealing a model facilitates all the other attacks discussed in this report. The understanding of how a system works makes it easier to determine how a system may be compromised.
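The query-and-copy logic behind model extraction can be sketched in a few lines. The victim, query strategy, and model classes below are invented for illustration; real attacks must also craft queries cleverly and work within query budgets:

```python
# Model extraction sketch: query the victim, record its answers, and train a
# substitute model on the resulting input/output pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Stand-in for a victim the attacker can query but cannot inspect.
X_secret = rng.normal(size=(1000, 5))
y_secret = (X_secret[:, 0] * X_secret[:, 1] > 0).astype(int)
victim = DecisionTreeClassifier(random_state=0).fit(X_secret, y_secret)

queries = rng.normal(size=(5000, 5))      # attacker-chosen probe inputs
answers = victim.predict(queries)         # only the outputs are observed

substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                           random_state=0).fit(queries, answers)

test = rng.normal(size=(2000, 5))
agreement = (substitute.predict(test) == victim.predict(test)).mean()
print(f"substitute agrees with victim on {agreement:.0%} of fresh inputs")
```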
Membership Inference
In a membership inference attack, the attacker studies the machine learning system's inputs and outputs and learns details about the data on which the model was trained. For example, imagine a company that offers customers a medical diagnosis after they answer a list of questions about themselves and their symptoms. The company would want to protect the data used to build their model both for intellectual property reasons and because it contains sensitive medical information about the participants. The company could delete its copies of patient data after training the model, but this may not be enough to guarantee confidentiality, since the model itself has learned information about the patients and the model is subject to membership inference attacks.

To carry out such an attack, attackers often consider a model's confidence rating—how sure it is that its output is correct. Machine learning models are often overconfident when they see real world examples that match those provided during training. For example, a machine learning system is likely to be more confident about John Doe's medical data if his information was used to train the model; it will be less confident about Jane Doe, whose data was not used to train the model, even if the symptoms of the two patients are similar. Based on the higher confidence rating, the attacker might conclude that John was in the original dataset and thereby learn his sensitive medical history.
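The confidence-based version of the attack is almost embarrassingly short to sketch. This is an illustrative toy; practical attacks calibrate the threshold with "shadow models" rather than fixing it by hand:

```python
# Confidence-threshold membership inference: guess that a record was in the
# training set whenever the model is unusually confident about it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 8))
y = (X.sum(axis=1) > 0).astype(int)
X_member, y_member, X_outside = X[:200], y[:200], X[200:]

model = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def flagged_as_member(record, threshold=0.95):
    """Guess "training-set member" when top-class confidence is very high."""
    return model.predict_proba(record.reshape(1, -1)).max() >= threshold

in_rate = np.mean([flagged_as_member(r) for r in X_member])
out_rate = np.mean([flagged_as_member(r) for r in X_outside])
print(f"flagged: {in_rate:.0%} of training records vs "
      f"{out_rate:.0%} of unseen records")   # the gap is the privacy leak
```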
Model Inversion
Instead of looking for individual pieces of data, with model inversion attackers try to understand more about the model's output categories. For a facial recognition system that takes a facial image as an input, the output categories are people's names. In model inversion, the attacker tries to do the opposite. In the facial recognition case, that means starting with a target's name and trying to produce images of the corresponding face.8

Attackers do not run the model in reverse. Instead, they start with a randomly generated image and make small changes to that image that make the model a little more likely to label the image as the target's face. These machine learning attacks are the rough equivalent of a police sketch artist slowly building a composite image of a suspect.

With enough small adjustments—and continual feedback from the model's evaluation of each iteration of the model—attackers can eventually draw a complete picture of the face. Model inversion is not limited to faces or pictures, though. Models of all types can be inverted, such as inferring a person's purchasing tendencies from a fraud detection model.
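The "sketch artist" loop translates directly into gradient ascent on the input. The sketch below assumes white-box gradient access for brevity (the attacker could instead estimate gradients from queries), and `model` stands for any differentiable classifier:

```python
# Model inversion sketch: start from noise and repeatedly nudge the input so
# the model assigns it more strongly to the target output category.
import torch

def invert(model, target_class, shape=(1, 3, 64, 64), steps=300, lr=0.05):
    """Gradient-ascend a random input until `model` scores it highly for
    `target_class`, one small "sketch stroke" per step."""
    x = torch.rand(shape, requires_grad=True)   # random starting canvas
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x)[0, target_class]
        (-score).backward()                     # maximize the target's score
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)                      # stay a valid image
    return x.detach()

# Usage sketch: reconstruction = invert(face_model, target_class=0)
```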
Figure 4: Popular confidentiality attacks require only that the attacker observe the inputs and the outputs of the deployed system. They can then extract models or invert the model to learn about the output categories, such as by exploiting a facial recognition system and a name to draw a person's face. They can also perform a membership inference attack to learn specific traits of the data.
3 | The Range of Possible Attacks

Attackers must make a range of choices about when and how to carry out their operations. They can direct different kinds of attacks at different stages of the machine learning process, from data collection to deployment. For example, attackers can use both data poisoning and evasion attacks to cause misclassification, but poisoning attacks target the training process whereas evasion targets the model after deployment.

Attackers can also choose how aggressive or stealthy to be. For example, evasion attacks have different degrees of subtlety.9 At one extreme, the attackers can create whatever inputs they want, such as random squiggles to try to bypass a cellphone's facial recognition system. Those attacks are obvious to a human inspecting the images either in real-time or forensically after an attack. At the stealthier extreme, attackers may restrict themselves to changes that are indistinguishable to humans, as the Healy Hall image demonstrates. Even just within the category of indistinguishable attacks, there are many options. For example, to attack an image detection system, attackers may limit the number of pixels that can be changed, the amount of change per pixel, or the average change across all pixels, and so forth. The technical details of how these attacks can be most efficient are not necessary to gain a conceptual understanding; we mention them only to suggest that there are many options for the attackers to choose from.
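Each of those options corresponds to a different projection step inside an attack loop. The mapping below is a schematic rendering of ours (the "average change" budget is treated here as an l2-style bound):

```python
# Three attacker constraint styles expressed as projection steps.
import torch

def project(delta, style, budget):
    """Force a perturbation `delta` back inside one attacker-chosen budget."""
    if style == "max_change_per_pixel":           # an l-infinity bound
        return delta.clamp(-budget, budget)
    if style == "average_change":                 # an l2-style bound
        norm = delta.norm()
        return delta if norm <= budget else delta * (budget / norm)
    if style == "num_pixels_changed":             # an l0-style bound
        flat = delta.flatten().abs()
        keep = flat.topk(int(budget)).indices     # keep only the largest changes
        mask = torch.zeros_like(flat)
        mask[keep] = 1.0
        return (delta.flatten() * mask).reshape(delta.shape)
    raise ValueError(f"unknown constraint style: {style}")
```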
As Figure 5 shows, in the Healy Hall example we chose to carry out an attack on integrity that forced a system to misclassify an image, and we carried out the attack against a machine learning system that was deployed by indistinguishably manipulating an image and only slightly changing a large number of pixels. In response to different operational priorities, we might have used an alternative attack design, such as making larger changes to a smaller number of pixels or worrying less about making the original image and the manipulated image look more alike. We could have achieved a similar effect by data poisoning instead. Still other kinds of attacks would have targeted confidentiality and extracted information about the model rather than manipulating its outputs.

Adversarial attacks are powerful, counterintuitive, and subtle. There are many tools available online for carrying out these kinds of operations, which are expected to become increasingly common. That said, there is a risk of focusing too much on adversarial examples and neglecting other serious types of attacks against machine learning systems. Recognizing the range of possible threats should lead to a broader and more robust defense of machine learning systems; to best inform this defense, we need to examine the ease, pervasiveness, and severity of the threats.

Figure 5: Attackers have many options to choose from when they target machine learning systems. The blue path in Figure 5 shows the attack used to make Figure 3, in which Georgetown University's Healy Hall was turned into a triceratops (at least according to a machine learning system). (The original figure is a decision tree with columns for effect category (availability, integrity, confidentiality), specific effect (confidence reduction, misclassification, quantitative error), ML stage attacked (data collection, training, testing, deployment), constraint category (indistinguishable, content-preserving, content-constrained, non-suspicious, unconstrained), and specific constraint (max change per pixel, sum of all changes, number of pixels changed).)
Center for Security and Emerging Technology13he attacks described in this report are likely to become common in
the future. They are easy to conduct, the vulnerabilities they exploit
are pervasive, and the attacks are difficult to defend against. The
combination of these three characteristics means that managing machine
learning vulnerabilities is a complex problem, even when compared to
other problems in cybersecurity.
EASE OF ATTACK
Conducting attacks on machine learning systems often requires less exper-
tise than it takes to design the models and fewer resources than it takes to
train them; it is easier to destroy than to create. Just as the offense has long
held an advantage in traditional cyber operations, it appears to also have
the edge in machine learning for the time being.
The tools for conducting the common attacks discussed in this report
have already proliferated widely. They can be found and downloaded
freely from the internet and are not difficult to build. We were able to make
many versions of the attack shown in Figure 3 over the course of a single
afternoon. None of the versions took more than 20 lines of code and each
could run in about a second.
The attacks in Figure 3 executed quickly because they avoided the hard
part of machine learning: training the model. Not all attacks avoid the train-
ing stage, so some can require substantially more than a second of effort. At-
tackers who cannot steal the model either have to build one of their own or
use trial-and-error on the victim’s deployed system.10 For example, attackers
often need large amounts of time to train a model for data poisoning attacks
to confirm that the poison will have the desired effect.
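As a toy illustration of why (not an attack from the report), the label-flipping sketch below has to retrain the victim model before the attacker can tell whether the poison worked:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clean_model = LogisticRegression().fit(X, y)

# Poison: mislabel the twenty most confidently positive training points.
y_poisoned = y.copy()
deep_in_class_1 = np.argsort(X.sum(axis=1))[-20:]
y_poisoned[deep_in_class_1] = 0

# Confirming the poison has the desired effect requires retraining,
# which is the time-consuming step for real models.
poisoned_model = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("accuracy with clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned_model.score(X_test, y_test))
```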
Even when the attacks are sometimes time-consuming, the number of actions an
attacker must take can be surprisingly small.11 Changing as little as a single data-
point can sometimes be enough to have the desired negative effects on the model’s
performance.12 The same is true for confidentiality attacks, where observing the in-
puts and outputs of a machine learning system just a few hundred or thousand times
can be enough to determine how machine learning models work.13 And, as we have
seen, evasion attacks can be successful with only imperceptible changes.14
Of course, coding the attacks is only one part of an operation. The attacker
needs information to conduct the attack and a way to launch it against the target. To
do this, the three confidentiality attacks previously discussed only need to observe
the inputs and outputs of the model or the deployed system. Similarly, neither data
poisoning nor evasion require direct access to the target. In short, there are many
opportunities for attackers to achieve their goals and the attacks themselves do not
require much expertise to create. However, the difficulty of introducing them to the
victim and ensuring they cause the intended malicious outcome will vary on a case-
by-case basis.
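A minimal sketch of that kind of model-stealing attack, assuming nothing but query access to the victim (both models and the data here are invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(X, y)

# The attacker never sees the victim's weights or training data, only the
# labels the deployed system returns for attacker-chosen queries.
queries = rng.uniform(-2, 2, size=(2000, 2))
responses = victim.predict(queries)
surrogate = DecisionTreeClassifier().fit(queries, responses)

probe = rng.uniform(-2, 2, size=(2000, 2))
agreement = (surrogate.predict(probe) == victim.predict(probe)).mean()
print(f"surrogate matches victim on {agreement:.1%} of new inputs")
```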
Some attacks do require substantial information about the target, and obtaining
that information can be somewhat complicated. For the Healy Hall attack, there
was an exact copy of the model to be attacked. It was as if a crashed spy drone
had been recovered and could be used to design camouflage to fool a similar
drone’s machine learning systems. Depending on our objective, we may not need
such direct access to the target; oftentimes information about something similar to
the targeted model will work. The techniques used in the Healy Hall attack can be
used to simultaneously fool many different image classifiers.15 Without knowing
what model the other classifiers used, we could send them our doctored image of
Healy Hall and there is a good chance they would misclassify it. Absent having the
model, though, it would be harder to force these classifiers to misidentify the build-
ing as a triceratops.16 In general, causing specific failures—such as getting a facial
recognition system to misclassify someone as a specific other person—requires more
information about the target model.
PERVASIVENESS
All machine learning models are susceptible to attack; different kinds of models
are vulnerable in different ways. In models with few inputs, for example, the victim
stands a better chance of noticing data poisoning attacks. Similarly, models with
few inputs are often more robust against evasion attacks because there are fewer
ways for an attacker to manipulate those inputs. In contrast, more complex models
offer more opportunities for attackers. That partly explains why image recognition
systems are so vulnerable: each pixel is an input, creating many manipulation op-
portunities for the attackers.
But simpler models that are trained on just a few data points have drawbacks,
too. For example, they are more vulnerable to confidentiality attacks. If there are
only five people in a training database, then each of those people contributes a lot
to the tuning of parameters in the model. In models trained on millions of people,
each person contributes only a little to any parameter and so information can be
harder to extract via confidentiality attack.
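The toy sketch below (an illustration of the mechanism, not a reproduction of any published attack) shows why: a model overfit to five training records is systematically more confident on those records than on unseen ones, and confidence thresholding exploits that gap:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
members = rng.normal(size=(5, 10))        # a tiny training set: five "people"
labels = np.array([0, 1, 0, 1, 1])
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=5000,
                      random_state=0).fit(members, labels)

non_members = rng.normal(size=(5, 10))    # same distribution, never trained on
conf_members = model.predict_proba(members).max(axis=1)
conf_others = model.predict_proba(non_members).max(axis=1)
print("mean confidence on training records:", conf_members.mean())
print("mean confidence on unseen records:  ", conf_others.mean())
# An attacker guesses "member" whenever the model's confidence exceeds a
# threshold set between these two averages.
```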
As a result of these and other systemic weaknesses, all machine learning sys-
tems have vulnerabilities. Some of the most common examples of machine learning
models are vision systems; evasion attacks against vision systems receive significant
attention, as this report has shown, but there are prominent examples of attacks
against audio and text systems, as well. Subtle changes can be made so that the
computer “hears” whatever the attacker chooses. In voice-controlled homes or
phones, attackers may gain unauthorized access. Systems that process text are also
vulnerable to manipulation and evasion. For example, Twitter’s AI for identifying
misleading tweets about COVID-19 flagged one reading, “Do not give oxygen
to the idea, which comes up with great frequency, that we are approaching some
kind of strong AI”—a statement that has nothing to do with COVID-19. These fail-
ures offer a reminder of the shortcomings of machine learning systems.17 It is easy to
imagine that governments will benefit from the ability to manipulate an adversary’s
machine learning systems and will perceive an imperative to defend their own.
DEFENSES
Reliable defenses against these types of attacks are hard to come by, but some
developments are more promising than others. Protecting information about the
data sources used to train a model—to guard against membership inference, for ex-
ample—is an area of comparative promise. A technique called differential privacy
can mathematically limit how much information can be gleaned about any individ-
ual person or datapoint.\* The designers of machine learning systems can use those
techniques to manage their degree of risk and constrain the information available
to those seeking to breach confidentiality.18 Most of these techniques today force
the defender to sacrifice performance for privacy and so they are not widely imple -
mented, but future privacy-preserving techniques may be more efficient.19 In mission
critical systems or cases of extreme data sensitivity, however, these performance
tradeoffs may be more acceptable. Differential privacy has other limits, too. For
example, it only protects an individual’s contributions to the training data and will
not help obscure traits common among groups of contributors.
\*Other mathematical techniques such as secure multi-party computing, homomorphic encryption,
and federated learning are also promising. Those three techniques, though, solve the different
problem of keeping the developer from accessing private data rather than keeping the developer’s
data private.
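As a minimal illustration of the idea, here is the Laplace mechanism applied to a simple counting query (differentially private model training, such as DP-SGD, builds on the same principle but is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the count by at most 1, so Laplace(1/epsilon) noise suffices.
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [34, 29, 41, 57, 62, 38, 45]
for eps in (0.1, 1.0, 10.0):
    noisy = private_count(ages, lambda a: a > 40, epsilon=eps)
    print(f"epsilon={eps:>4}: noisy count of people over 40 = {noisy:.2f}")
# Smaller epsilon means stronger privacy but noisier answers: the
# performance-for-privacy tradeoff described above.
```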
Defenses against other attacks are less promising. To guard against the other
confidentiality attacks of model inversion and model extraction, the defender can
reduce their vulnerability by limiting the number of times customers can use their
model or by intentionally decreasing its accuracy.20 But those steps can interfere
with the business case and limit the value of machine learning; if a company makes
money each time the model is used, then limiting the number of uses is not very
appealing. Other approaches, like keeping the model on a classified server and
making sure cleared analysts are the only ones who see its outputs, limit the risk of
model stealing and model inversion but at the cost of restricting the model’s use and
adding security constraints.
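A sketch of what those mitigations can look like in practice, as a hypothetical wrapper around a scikit-learn-style model (the class and its parameters are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class GuardedModelAPI:
    """Rate-limits callers and coarsens outputs to slow extraction attacks."""

    def __init__(self, model, max_queries=1000, round_to=1):
        self.model = model
        self.remaining = max_queries
        self.round_to = round_to  # fewer decimals means less signal leaked

    def predict(self, x):
        if self.remaining <= 0:
            raise RuntimeError("query budget exhausted")
        self.remaining -= 1
        probs = self.model.predict_proba([x])[0]
        return [round(float(p), self.round_to) for p in probs]

clf = LogisticRegression().fit(np.array([[0.0], [1.0]]), [0, 1])
api = GuardedModelAPI(clf, max_queries=2)
print(api.predict([0.2]))  # coarsened probabilities
print(api.predict([0.8]))
# A third call would raise: the budget limits what an attacker can observe,
# but it equally limits what paying customers can do.
```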
Defending against attacks on integrity is harder still; it is a game of Whack-a-
Mole where new attacks are invented and defenses are developed, and then those
defenses are defeated and so on.21 This dynamic applies both to defenses that try to
detect attacks and those that try to make the models immune to them. And defend-
ing against one attack can invite others. For example, freezing a model and cutting
off its access to new information means no additional data poisoning is possible,
but letting it continue to update its defenses can pressure evasion attacks to evolve
to keep pace.
These defenses are typically only somewhat effective and only for very highly
constrained attacks of specific types.22 For example, to guard against attacks that
make imperceptible changes to a picture, a defense might be effective against
attackers who limit the average change per pixel while not protecting against those
who limit the maximum change per pixel.23 A subtle change in an attacker’s opera-
tions can change how effective the defense is.†
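The gap between those two constraints is easy to see numerically; the two hypothetical perturbations below have identical average change per pixel but wildly different maximum change per pixel:

```python
import numpy as np

n_pixels = 10_000
spread = np.full(n_pixels, 0.01)            # tiny change to every pixel
concentrated = np.zeros(n_pixels)
concentrated[0] = 100.0                     # huge change to a single pixel

for name, delta in (("spread", spread), ("concentrated", concentrated)):
    print(f"{name:>12}: mean |change| = {np.abs(delta).mean():.2f}, "
          f"max |change| = {np.abs(delta).max():.2f}")
# Both perturbations average 0.01 per pixel, so a defense certified only
# against small average change says nothing about the second attack.
```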
This is a vital and perhaps alarming point: machine learning vulnerabilities are
hard to fix. Fixing them is more akin to addressing hardware vulnerabilities—which
are notoriously challenging—than it is to the relative ease of patching traditional
software vulnerabilities.24 For some of the attacks discussed in this paper there is no
clear solution on the horizon. The persistence of these weaknesses should prompt
caution when using machine learning in national security contexts against sophisti-
cated adversaries.
† There are actually many ways to calculate the average and there are many more interesting ways
to keep the changes small that are more esoteric, such as using the Wasserstein distance.25
Conclusion and Recommendations
Historically, where vulnerabilities have existed in traditional cyber
systems, attackers have often exploited them for nefarious or
destructive ends. The same will likely be true of vulnerabilities in
machine learning. This is not a call to eliminate machine learning from on-
going modernization, as vulnerabilities do exist in non-AI systems as well.
Rather, it is a wake-up call: machine learning brings with it new vulnera-
bilities that must be understood well enough to make informed decisions
about risks and investments. A few findings follow from our analysis.
MACHINE LEARNING INTRODUCES RISK IN
ACQUISITION AND MODERNIZATION
Machine learning is deeply integrated into various facets of society
and will likely continue to gain traction. In some cases, such as integrity
attacks against movie recommendation systems, there may be relatively
little incentive to attack and so the risk of using machine learning is low. In
contrast, the risks to national security systems are substantial, and there are
many well-resourced and highly motivated adversaries seeking to attack.
A first step in assessing the risks of deploying machine learning systems
in such a competitive context is understanding the range of options avail-
able to the potential attackers, which include the model stealing, model
inversion, membership inference, data poisoning and evasion attacks
discussed in this report. A subsequent step is understanding the defensive
options that exist and their effectiveness.
NEW DEFENSES MAY ONLY OFFER SHORT-TERM ADVANTAGE
One of the perpetual questions in cybersecurity is whether the attacker or the de -
fender has the upper hand. It is hard to answer this question until the field of ma -
chine learning cybersecurity settles on specific offensive and defensive techniques.
Even then the answer may not be clear, as attackers and defenders engage one
another, both sides will discover new techniques. Currently, defenses do not look
promising, and many traditional cybersecurity techniques are not easily applied to
machine learning. In general, attackers can move more quickly than defenders and
the costs are higher to retrain a model than they are to find a new attack.
The offense-defense balance changes as machine learning systems reach
different levels of model complexity. Some techniques that appear to be effective or
ineffective at first behave differently when applied to more or less powerful systems.
For example, some defenses that are promising for securing imaging systems that
read low-resolution handwritten digits are not promising for imaging systems that
are powerful enough to recognize high-resolution pictures of cars and animals.
ROBUSTNESS TO ATTACK IS MOST LIKELY TO COME FROM
SYSTEM-LEVEL DEFENSES
Given the difficulty in finding reliable defenses against the wide range of attack
options, systemic defenses seem essential. Defenders should assume that attackers
will successfully compromise some parts of machine learning systems. To limit the
damage attackers can do, we should build redundancy and increase resilience.
Especially given how hard it is to fix underlying weaknesses in machine learning
systems, designing architectures that maximize robustness and prevent cascading
failures is key.
For instance, a commonly cited example of an attack involves placing a sticker
on a stop sign that makes it appear to autonomous vehicles to be a 45 mph sign.
Although this attack is possible and easy to perform, it only achieves a destructive
effect if the car drives into a busy intersection. If the car has many ways to decide
to stop, such as by knowing that intersections usually have stop signs, relying on
lasers for collision avoidance, observing other cars stopping, or noticing high speed
cross-traffic, then the risk of attack can remain low despite the car being made of
potentially vulnerable machine learning components. The systemic-level defense—
to not rely on just one input in making the decision to accelerate through the inter-
section—thwarts the attack.
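In code, the design principle is simply that no single, potentially fooled input gets a veto over safety. A schematic version, with invented signal names:

```python
def should_stop(sign_prediction, lidar_obstacle, map_says_intersection,
                lead_car_braking, cross_traffic_detected):
    # Stop if any independent signal says to: fooling the sign classifier
    # alone is no longer enough to cause a failure.
    return any([
        sign_prediction == "stop_sign",   # vision model, possibly attacked
        lidar_obstacle,                   # physics-based sensor
        map_says_intersection,            # prior knowledge, not vision
        lead_car_braking,
        cross_traffic_detected,
    ])

# Even with the classifier fooled into reading "speed_limit_45":
print(should_stop("speed_limit_45", lidar_obstacle=False,
                  map_says_intersection=True, lead_car_braking=False,
                  cross_traffic_detected=False))  # True
```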
THE BENEFITS TO OFFENSIVE USE OFTEN DO NOT OUTWEIGH
THE COSTS
The United States is not the only country fielding AI systems, and the opportunity
to exploit these vulnerabilities in adversaries’ systems may be tempting. There are
obvious military benefits of causing an enemy weapon to misidentify its targets or
send an adversary’s autonomous vehicles off course. There are also the obvious
intelligence benefits of stealing adversaries’ models and learning about the data
they have used.
On the other hand, the United States is among the countries best positioned to
benefit from progress in AI technologies. It has the most to lose if these technologies
are vulnerable. Demonstrating global norms of restraint against attacking AI may be
a wise stance. Even if nation-states do not adhere to global norms against attacking
military AI in conflict, there may be benefits to clarifying the lines against attacking
civilian systems or critical infrastructure. Clarifying rules and norms would help man-
age the problem of machine learning security both domestically and internationally.
A posture of restraint when it comes to attacking machine learning systems may also
help the United States government win the support of AI talent that national security
officials have been eager to court.
Norms alone will not solve this problem. Whether or not the United States
decides to pursue attacks on machine learning systems, adversaries will make their
own decisions about restraint, or lack thereof. Given that machine learning’s vulner-
abilities are pervasive, easy to exploit, and hard to defend, managing the risks they
pose is too large a task for the technology community to handle alone. It is incum-
bent upon policymakers to understand the threats well enough to assess the dangers
that the nation, its military and intelligence arms, and its civilians face when they use
machine learning. In some cases, that exposure may be acceptable, and in others,
it may not. But in all cases, the management of risk must be informed by technical
understanding. This primer is meant to help with that endeavor.
Endnotes
1. Tencent Keen Security Lab. Experimental Security Research of Tesla Autopilot (2019).
2. Morgan, F. E. et al. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. https://apps.dtic.mil/sti/citations/AD1097313 (2020).
3. Eykholt, K. et al. Robust Physical-World Attacks on Deep Learning Models. arXiv [cs.CR] (2017).
4. Shumailov, I., Zhao, Y., Bates, D., Papernot, N., Mullins, R. & Anderson, R. Sponge Examples: Energy-Latency Attacks on Neural Networks. arXiv [cs.LG] (2020); Tabassi, E., Burns, K. J., Hadjimichael, M., Molina-Markham, A. D. & Sexton, J. T. A taxonomy and terminology of adversarial machine learning. (2019) doi:10.6028/NIST.IR.8269-draft.
5. Gu, T., Dolan-Gavitt, B. & Garg, S. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. arXiv [cs.CR] (2017); Kurita, K., Michel, P. & Neubig, G. Weight Poisoning Attacks on Pre-trained Models. arXiv [cs.LG] (2020).
6. Saha, A., Subramanya, A. & Pirsiavash, H. Hidden Trigger Backdoor Attacks. arXiv [cs.CV] (2019).
7. Madry, A., Makelov, A., Schmidt, L., Tsipras, D. & Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv [stat.ML] (2017).
8. Zhang, Y. et al. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. arXiv [cs.LG] (2019).
9. Gilmer, J., Adams, R. P., Goodfellow, I., Andersen, D. & Dahl, G. E. Motivating the Rules of the Game for Adversarial Example Research. arXiv [cs.LG] (2018).
10. Shukla, S. N., Sahu, A. K., Willmott, D. & Zico Kolter, J. Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes. arXiv [cs.LG] (2020); Ilyas, A., Engstrom, L., Athalye, A. & Lin, J. Black-box Adversarial Attacks with Limited Queries and Information. arXiv [cs.CV] (2018).
11. Shafahi, A. et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. in Advances in Neural Information Processing Systems 31 (eds. Bengio, S. et al.) 6103–6113 (Curran Associates, Inc., 2018).
12. Park, S., Weimer, J. & Lee, I. Resilient linear classification: an approach to deal with attacks on training data. in Proceedings of the 8th International Conference on Cyber-Physical Systems 155–164 (Association for Computing Machinery, 2017).
13. Tramèr, F., Zhang, F., Juels, A., Reiter, M. K. & Ristenpart, T. Stealing machine learning models via prediction apis. in 25th USENIX Security Symposium (USENIX Security 16) 601–618 (2016).
14. Su, J., Vargas, D. V. & Sakurai, K. One Pixel Attack for Fooling Deep Neural Networks. IEEE Trans. Evol. Comput. 23, 828–841 (2019).
15. Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D. & McDaniel, P. The Space of Transferable Adversarial Examples. arXiv [stat.ML] (2017); Papernot, N., McDaniel, P. & Goodfellow, I. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv [cs.CR] (2016).
16. Liu, Y., Chen, X., Liu, C. & Song, D. Delving into Transferable Adversarial Examples and Black-box Attacks. arXiv [cs.LG] (2016).
17. Carlini, N. et al. Hidden voice commands. in 25th USENIX Security Symposium (USENIX Security 16) 513–530 (2016); Tweet from Ludwig Yeetgenstein, June 26, 2020, https://twitter.com/yeetgenstein/status/1276518982565146624.
18. Truex, S., Liu, L., Gursoy, M. E., Wei, W. & Yu, L. Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability. in 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA) 82–91 (2019).
19. Jayaraman, B. & Evans, D. Evaluating differentially private machine learning in practice. in 28th USENIX Security Symposium (USENIX Security 19) 1895–1912 (2019); Rahman, M. A., Rahman, T., Laganière, R., Mohammed, N. & Wang, Y. Membership Inference Attack against Differentially Private Deep Learning Model. Trans. Data Priv. 11, 61–79 (2018).
20. Tramèr, F., Zhang, F., Juels, A., Reiter, M. K. & Ristenpart, T. Stealing machine learning models via prediction apis. in 25th USENIX Security Symposium (USENIX Security 16) 601–618 (2016).
21. Athalye, A., Carlini, N. & Wagner, D. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. arXiv [cs.LG] (2018).
22. Hartnett, G. S., Lohn, A. J. & Sedlack, A. P. Adversarial Examples for Cost-Sensitive Classifiers. arXiv [stat.ML] (2019); Xie, C., Wu, Y., van der Maaten, L., Yuille, A. & He, K. Feature Denoising for Improving Adversarial Robustness. arXiv [cs.CV] (2018).
23. Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D. & Jana, S. Certified Robustness to Adversarial Examples with Differential Privacy. arXiv [stat.ML] (2018); Wong, E. & Zico Kolter, J. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv [cs.LG] (2017).
24. Lohn, A. What do Meltdown, Spectre and RyzenFall Mean for the Future of Cybersecurity? TechCrunch (2018).
25. Wong, E., Schmidt, F. R. & Zico Kolter, J. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. arXiv [cs.LG] (2019). |
7f584bda-7524-4136-ac85-3b6989f722fc | trentmkelly/LessWrong-43k | LessWrong | A poem for LessWrong
PZ Myers posted this on his blog; it is very beautiful, and I think expresses the purpose (at any rate, one purpose) of LessWrong. It even offers a battle cry: "I am not resigned".
Dirge without music
I am not resigned to the shutting away of loving hearts in the hard ground.
So it is, and so it will be, for so it has been, time out of mind:
Into the darkness they go, the wise and the lovely. Crowned
With lilies and laurel they go: but I am not resigned.
Lovers and thinkers, into the earth with you.
Be one with the dull, the indiscriminate dust.
A fragment of what you felt, of what you knew,
A formula, a phrase remains - but the best is lost.
The answers quick and keen, the honest look, the laughter, the love -
They are gone. They have gone to feed the roses. Elegant and curled
Is the blossom. Fragrant is the blossom. I know. But I do not approve.
More precious was the light in your eyes than all the roses in the world.
Down, down, down into the darkness of the grave
Gently they go, the beautiful, the tender, the kind:
Quietly they go, the intelligent, the witty, the brave.
I know. But I do not approve. And I am not resigned.
|
a16c9275-48d8-4df4-ba1f-d95f1a170ae8 | trentmkelly/LessWrong-43k | LessWrong | Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1)
This is the first post in a small sequence I'm writing on "Optimizing and Goodhart Effects - Clarifying Thoughts" (I have re-organized to make part 2, "Revisiting What Optimization Means" separate.)
Related to: How does Gradient Descent Interact with Goodhart?, Constructing Goodhart, Selection vs Control, Classifying Specification Problems as variants of Goodhart's Law
Next Posts: Revisiting What Optimization Means with Selection vs. Control, then Applying Overoptimization to Selection vs. Control
Introduction
Goodhart's law comes in a few flavors, as originally pointed out by Scott, and formalized a bit more in our joint paper. When discussing that paper, or afterwards, we struggled with something Abram Demski clarified recently, which is the difference between selection and control. This matters for formalizing what happens, especially when asking about how Goodhart occurs in specific types of optimizers, as Scott asked recently.
Epistemic Status: This is for de-confusing myself, and has been helpful. I'm presenting what I am fairly confident I understand well for the content written so far, but I'm unclear about usefulness for others, or how clear it comes across. I think that there's more to say after this post, and this will have a few more parts if people are interested. (I spent a month getting to this point, and decided to post and get feedback rather than finish a book first.)
In the first half of the post, I'll review Abram's selection/control distinction, and suggest how it relates to actual design. I'll also argue that there is a bit of a continuum between the two cases, and that we should add an addition extreme case to the typology, direct solution. The second section will revisit what optimization means, and try to note a few different things that could happen and go wrong with Goodhart-like overoptimization.
The third section will talk about Goodhart in this context using the new understanding - trying to more fully explain why Goodhart effect |
175680ce-5046-44e0-b7e3-c83bf5688300 | trentmkelly/LessWrong-43k | LessWrong | October 2019 gwern.net newsletter
None |
eaef2a43-9c18-4193-aaa0-5705f4d51b08 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Edit 11/28: Edited note at bottom to note that the random variables should have finite variance, and that this is essentially just L². Also some formatting changes.
This is something that has been bugging me for a while.
The correlation coefficient between two random variables can be interpreted as the cosine of the angle between them[0]. The higher the correlation, the more "in the same direction" they are. A correlation coefficient of one means they point in exactly the same direction, while -1 means they point in exactly opposite directions. More generally, a positive correlation coefficient means the two random variables make an acute angle, while a negative correlation means they make an obtuse angle. A correlation coefficient of zero means that they are quite literally orthogonal.
Everything I have said above is completely standard. So why aren't correlation coefficients commonly expressed as angles instead of as their cosines? It seems to me that this would make them more intuitive to process.
Certainly it would make various statements about them more intuitive. For instance "Even if A is positive correlated with B and B is positively correlated with C, A might be negatively correlated with C." This sounds counterintuitive, until you rephrase it as "Even if A makes an acute angle with B and B makes an acute angle with C, A might make an obtuse angle with C." Similarly, the geometric viewpoint makes it easier to make observations like "If A and B have correlation exceeding 1/√2 and so do B and C, then A and C are positively correlated" -- because this is just the statement that if A and B make an angle of less than 45° and so do B and C, then A and C make an angle of less than 90°.
Now when further processing is to be done with the correlation coefficients, one wants to leave them as correlation coefficients, rather than take their inverse cosines just to have to take their cosines again later. (I don't know that the angles you get this way are actually useful mathematically, and I suspect they mostly aren't.) My question rather is about when correlation coefficients are expressed to the reader, i.e. when they are considered as an end product. It seems to me that expressing them as angles would give people a better intuitive feel for them.
Or am I just entirely off-base here? Statistics, let alone the communication thereof, is not exactly my specialty, so I'd be interested to hear if there's a good reason people don't do this. (Is it assumed that anyone who knows about correlation has the geometric point of view completely down? But most people can't calculate an inverse cosine in their head...)
[0]Formal mathematical version: If we consider real-valued random variables with finite variance on some fixed probability space Ω -- that is to say, L²(Ω) -- the covariance is a positive-semidefinite symmetric bilinear form, with kernel equal to the set of essentially constant random variables. If we mod out by these we can consider the result as an inner product space and define angles between vectors as usual, which gives us the inverse cosine of the correlation coefficient. Alternatively we could just take L²(Ω) and restrict to those elements with zero mean; this is isomorphic (since it is the image of the "subtract off the mean" map, whose kernel is precisely the essentially constant random variables)."
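A quick numerical sketch of the proposal (added for illustration; it is not part of the original post):

```python
import numpy as np

def angle_degrees(r):
    return np.degrees(np.arccos(np.clip(r, -1.0, 1.0)))

for r in (1.0, 1 / np.sqrt(2), 0.5, 0.0, -0.5, -1.0):
    print(f"correlation {r:+.3f}  ->  angle {angle_degrees(r):6.1f} degrees")

# The 1/sqrt(2) threshold mentioned above is exactly 45 degrees: if both
# the A-B and B-C angles are under 45, the A-C angle is under 90, so the
# A-C correlation must be positive.
print(angle_degrees(1 / np.sqrt(2)))  # 45.0
```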
This is a snippet extracted from a publication on LessWrong by a site member. The above is a chunk of text from a post created by a user on the website LessWrong.
Could you elaborate on the potential drawbacks or limitations of expressing correlation coefficients as angles for the purpose of intuitive understanding?
Expressing correlation coefficients as angles could potentially introduce a few drawbacks or limitations, especially in the context of intuitive understanding.
1. Non-linearity: The transformation from correlation coefficients to angles involves a non-linear function (the arccosine). This could complicate the interpretation, especially when dealing with multiple correlations.
2. Familiarity: Most people are more familiar with correlation coefficients (which range from -1 to 1) than with angles (which range from 0 to 180 degrees or 0 to π radians). This could make the interpretation less intuitive for many people.
3. Ambiguity: An angle could be misread if its sign or orientation is dropped. For example, an angle of 60 degrees corresponds to a moderate positive correlation (cosine 0.5), while 120 degrees corresponds to a correlation of the same magnitude but opposite sign (cosine -0.5), so the two are easy to confuse.
4. Inconsistency: While the cosine of the angle interpretation is valid for the Pearson correlation coefficient, it does not necessarily hold for other types of correlation coefficients (like Spearman or Kendall), which could lead to confusion.
5. Practicality: In many statistical analyses, correlation coefficients are used as inputs for further computations. Transforming them into angles would require additional steps and could introduce unnecessary complexity.
6. Misinterpretation: Angles can be misleading when it comes to the strength of correlation. Because the cosine changes slowly near 0 and 180 degrees and quickly near 90 degrees, a small change in degrees implies only a tiny change in correlation when the angle is near 0 or 180 degrees, but the same change in degrees implies a much larger change in correlation when the angle is near 90 degrees.
In conclusion, while expressing correlation coefficients as angles could provide a different perspective and could be useful in certain contexts, it also introduces potential drawbacks and complexities that could hinder intuitive understanding.
Reflecting on your previous responses, can you elaborate on a situation where expressing correlation coefficients as angles might actually provide a clearer or more intuitive understanding than traditional methods?
Certainly, there are situations where expressing correlation coefficients as angles could provide a clearer understanding, particularly when trying to visualize relationships between variables in a geometric context.
For instance, if you're working with a high-dimensional dataset and you're trying to understand the relationships between different variables, visualizing these relationships as angles can be quite helpful. This is especially true when using techniques like Principal Component Analysis (PCA), where you're essentially rotating the coordinate system to align with the directions of maximum variance in the data. In this case, thinking about correlations in terms of angles can provide a more intuitive understanding of how the variables relate to each other in this new coordinate system.
Another situation could be in the field of machine learning, specifically in the context of understanding the behavior of certain algorithms. For example, in the case of support vector machines (SVMs), the decision boundary is determined by the vectors (support vectors) that maximize the margin between classes. The angle between these vectors can provide an intuitive understanding of how different features contribute to the decision boundary.
Finally, in physics and engineering, where the concept of angle is frequently used, expressing correlation coefficients as angles could provide a more intuitive understanding of the relationships between different variables. For instance, in signal processing, the phase difference between two signals can be thought of as an angle, and this can be related to the correlation between the signals.
However, it's important to remember that while this approach can provide additional insights in some cases, it may also introduce complexity and potential for misinterpretation in others, as previously discussed.
Can you delve deeper into the use of angles in Principal Component Analysis and how it aids in understanding the relationships between variables in a high-dimensional dataset?
Principal Component Analysis (PCA) is a technique used to reduce the dimensionality of a dataset, while preserving as much of the variance in the data as possible. It does this by finding new variables, or principal components, which are linear combinations of the original variables.
The first principal component is the direction in the high-dimensional space along which the data varies the most. The second principal component is orthogonal (at a right angle) to the first and is the direction of maximum variance that's left. This process continues for as many components as there are dimensions in the original data.
Now, let's consider the correlation between variables and how angles can aid in understanding these relationships.
1. Orthogonality: In PCA, the principal components are orthogonal to each other, meaning they are uncorrelated. This orthogonality is represented geometrically as a right angle (90 degrees) between the components. So, if two variables are uncorrelated, their angle would be 90 degrees.
2. Variance and direction: The direction of the principal components represents the direction of maximum variance in the data. If two variables have a high positive correlation, they would have a similar direction in this new coordinate system, meaning a small angle between them. If they have a high negative correlation, they would point in nearly opposite directions, meaning an angle close to 180 degrees.
3. Visualization: The geometric interpretation of PCA can help visualize high-dimensional data. Each principal component can be represented as an axis in a new coordinate system, and the angles between these axes (corresponding to the correlations between the variables) can provide a more intuitive understanding of the relationships between the variables.
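A small numpy sketch of this geometry (illustrative only): two correlated variables meet at an acute angle, while the principal components extracted from them sit at 90 degrees:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + 0.6 * rng.normal(size=500)     # built to correlate with x
data = np.column_stack([x, y])
data -= data.mean(axis=0)

r = np.corrcoef(data, rowvar=False)[0, 1]
print("angle between x and y:", np.degrees(np.arccos(r)))

# Principal components are eigenvectors of the covariance matrix.
_, eigvecs = np.linalg.eigh(np.cov(data, rowvar=False))
pc1, pc2 = eigvecs[:, -1], eigvecs[:, -2]
print("angle between PCs:", np.degrees(np.arccos(abs(pc1 @ pc2))))  # ~90.0
```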
In summary, using angles in PCA can provide a more intuitive geometric interpretation of the relationships between variables in a high-dimensional dataset. However, this interpretation should be used with caution, as it can be less straightforward when dealing with more than two or three dimensions. |
0e2185ab-f677-4b21-86fc-507b326d3742 | trentmkelly/LessWrong-43k | LessWrong | AI Alignment [progress] this Week (10/29/2023)
Wow! It’s been one month already.
Some thoughts:
When I started this project, I had a general sense that there is alignment progress on a regular basis, but tracking it really brings this home. Every week, on Monday I wonder “is this the week I will have nothing to write?” And by Sunday there are innovations that I leave out because the letter is too long.
I’ve also been happy to see the reception has been mostly positive. Literally the only criticism I’ve received is that people don’t like the word “breakthrough”. I think adding the 💡rating addresses the fact “not all breakthroughs are equal”. I’ve reached my goal of “at least 10 positive comments/new subscribers”, so I will continue doing this for at least another month.
If I had to nominate a single breakthrough for “biggest breakthrough of the month”, I think I would pick the AI Lie Detector. It perfectly combines the aspects of: 1. surprising, 2. innovative new method and 3. strong application to AI alignment, and easily deserves a rating of 5 💡💡💡💡💡
I think an interesting question is how many 5💡💡💡💡💡 breakthroughs it will take to solve the alignment problem. EY suggests the current score is 2/1000. I’m not quite that pessimistic. I think we’re closer to 5/100. And of course it’s always possible that someone will find a 6 💡💡💡💡💡💡 breakthrough that dramatically speeds up progress.
I do feel like this letter is getting a little long. I’m curious how long readers would like. [poll on substack]
What are your thoughts? Feel free to comment! I especially appreciate comments discussing specific technical aspects of various AI alignment methods/pathways.
And here are our
AI Alignment Breakthroughs this Week
This week, there were breakthroughs in the areas of:
Mechanistic Interpretability
AI Agents
Avoiding Adversarial Attacks
Benchmarking AI
Decentralized AI
Learning Human Preferences
Making AI Do what we Want
AI Art
In addition, I added a new section:
AI Alignment Initiatives
M |
7402d0cf-4b2b-4d50-94de-dc79c0306ddd | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta
Discussion article for the meetup : Atlanta
WHEN: 18 February 2012 06:30:00PM (-0500)
WHERE: 2094 North Decatur Road, Decatur, GA 30033-5367
All,
The next meetup will be Saturday, February 18th at 6:30pm at Chocolate Coffee in Decatur:
http://www.mychocolatecoffee.com/ 2094 North Decatur Road, Decatur, GA 30033-5367 (404) 982-0790
Here is the official agenda of our next meeting: http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions 1.21 Positive Bias: Look Into the Dark 1.22 My Wild and Reckless Youth 1.23 Failing to Learn from History 1.24 Making History Available 1.25 Explain/Worship/Ignore?
We will also be discussing The Prisoner's Dilemma and Newcomb's paradox.
Please let me know if you have any questions or comments! I hope to see ALL of you there!
-Hank 404 384 9776
Discussion article for the meetup : Atlanta |
949f2d6a-4ae0-44a6-8f6a-ba5a9690700b | StampyAI/alignment-research-dataset/blogs | Blogs | Convex Conditions for Strong Convexity
An important concept in online learning and convex optimization is that of *strong convexity*: a twice-differentiable function $f$ is said to be strongly convex with respect to a norm $\|\cdot\|$ if
$$z^\top \nabla^2 f(x) z \geq \|z\|^2$$
for all $x$ and $z$ (for functions that are not twice-differentiable, there is an analogous criterion in terms of the Bregman divergence). To check strong convexity, then, we basically need to check a condition on the Hessian $H = \nabla^2 f(x)$, namely that $z^\top H z \geq \|z\|^2$. So, under what conditions does this hold?
For the $\ell^2$ norm, the answer is easy: $z^\top H z \geq \|z\|_2^2$ if and only if $H \succeq I$ (i.e., $H - I$ is positive semidefinite). This can be shown in many ways, perhaps the easiest is by noting that $z^\top H z - \|z\|_2^2 = z^\top (H - I) z$.
For the $\ell^\infty$ norm, the answer is a bit trickier but still not too complicated. Recall that we want necessary and sufficient conditions under which $z^\top H z \geq \|z\|_\infty^2$. Note that this is equivalent to asking that $z^\top H z \geq z_i^2$ for each coordinate $i$ of $z$, which in turn is equivalent to $H \succeq e_i e_i^\top$ for each coordinate vector $e_i$ (these are the vectors that are 1 in the $i$th coordinate and 0 everywhere else).
More generally, for any norm $\|\cdot\|$, there exists a *dual norm* $\|\cdot\|_*$ which satisfies, among other properties, the relationship $\|z\| = \sup_{\|w\|_* \leq 1} w^\top z$. So, in general, $z^\top H z \geq \|z\|^2$ is equivalent to asking that $z^\top H z \geq (w^\top z)^2$ for all $w$ with $\|w\|_* \leq 1$. But this is in turn equivalent to asking that
$H \succeq w w^\top$ for all $w$ such that $\|w\|_* \leq 1$.
In fact, it suffices to pick a subset of the $w$ such that the convex hull consists of all $w$ with $\|w\|_* \leq 1$; this is why we were able to obtain such a clean formulation in the $\ell^\infty$ case: the dual norm to $\ell^\infty$ is $\ell^1$, whose unit ball is the cross-polytope, which is a polytope with only $2n$ vertices (namely, each of the signed unit vectors $\pm e_i$).
We can also derive a simple (but computationally expensive) criterion for $\ell^1$ strong convexity: here the dual norm is $\ell^\infty$, whose unit ball is the $n$-dimensional hypercube, with vertices given by all $2^n$ vectors of the form $[\pm 1 \ \cdots \ \pm 1]$. Thus $z^\top H z \geq \|z\|_1^2$ if and only if $H \succeq s s^\top$ for all $2^n$ sign vectors $s$.
Finally, we re-examine the $\ell^2$ case; even though the $\ell^2$-ball is not a polytope, we were still able to obtain a very simple expression. This was because the single condition $H \succeq I$ manages to capture simultaneously all dual vectors $w$ such that $\|w\|_2 \leq 1$.
**Theorem.** $H \succeq A_i A_i^\top$ for $i = 1, \ldots, k$ if and only if $f$ is strongly convex with respect to the norm $\|\cdot\|$ whose dual unit ball is the convex hull of the transformed unit balls $A_1 B_1, \ldots, A_k B_k$, where $B_i$ is the $\ell^2$ unit ball whose dimension matches the number of columns of $A_i$.
**Proof.** $H \succeq A_i A_i^\top$ if and only if $z^\top H z \geq z^\top A_i A_i^\top z = \|A_i^\top z\|_2^2$ for all $z$ and each $i$. Now note that $\|A_i^\top z\|_2 = \sup_{\|w\|_2 \leq 1} w^\top A_i^\top z = \sup_{\|w\|_2 \leq 1} (A_i w)^\top z$. If we define $v = A_i w$, it is then apparent that the dual norm unit ball is the convex hull of the $A_i B_i$.
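A quick numerical check of the $\ell^\infty$ criterion above (added for illustration; it is not part of the original post):

```python
import numpy as np

def strongly_convex_linf(H, tol=1e-9):
    # f is strongly convex w.r.t. the l-infinity norm iff H - e_i e_i^T
    # is positive semidefinite for every coordinate vector e_i.
    n = H.shape[0]
    for i in range(n):
        M = H.copy()
        M[i, i] -= 1.0
        if np.linalg.eigvalsh(M).min() < -tol:
            return False
    return True

print(strongly_convex_linf(2.0 * np.eye(3)))           # True
print(strongly_convex_linf(np.diag([1.0, 1.0, 0.5])))  # False: fails for e_3
```
|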
49b5500f-e044-4c6d-a4b0-720060bf6f06 | trentmkelly/LessWrong-43k | LessWrong | Value learning in the absence of ground truth
Epistemic Status: My best guess.
Summary
The problem of aligning AI is often phrased as the problem of aligning AI with human values, where “values” refers roughly to the normative considerations one leans upon in making decisions. Understood in this way, solving the problem would amount to first articulating the full set of human values that we want AI systems to learn, and then going about actually embedding these into the systems. However, it does not seem to be the case that there is a single set of human values we’d want to encode in systems in the first place, or a single, unchanging human reward function that we want systems to learn and optimize for. The absence of a target for learning intuitively poses problems for learning human values and thus for solving alignment more generally. Nonetheless, as humans are still capable of acting in ways that we would want AI systems to act even in the absence of a “ground truth” to human values, and assuming that this capability is learned to at least some extent, alignment still seems possible, albeit through value-learning processes that reject this assumption of a ground truth (contrary to the current value-learning paradigm). I review four such proposed approaches, discuss them in the context of AI safety, and consider the long-term prospects of these kinds of approaches bringing about aligned AI.
Summaries of each framework are included at the beginning of each respective section. For those interested only in the conclusions, skip to the section “Taking stock”above.
Introduction
Value learning is one of the better-known approaches to tackling the alignment problem. In a nutshell, this involves getting AI systems to learn to value what their human operators want them to value from training data. One way of doing this is through inverse reinforcement learning (IRL), whereby after observing a human enacting the desired type of behavior in the environment, the AI system infers the reward function that the human |
cbe0724d-3454-487a-8f76-f988e95b72f7 | trentmkelly/LessWrong-43k | LessWrong | alternative history: what if Bayes rule had never been discovered?
In trying to understand how Bayesian probability is used, I'm curious to know what wouldn't have been possible without it: how important was it in the course of human discovery, and, in turn, how did it affect history?
I don't demand rigorous answers; feel free to speculate and throw out possibilities as you like.
Bonus question: if Bayes hadn't discovered it, when would it have been discovered? (full speculation mode) |
b6f52b61-f263-4677-9e09-ab535d2df8ed | trentmkelly/LessWrong-43k | LessWrong | I didn't think I'd take the time to build this calibration training game, but with websim it took roughly 30 seconds, so here it is!
Basically, the user is shown a splatter of colored circles, then the splatter is hidden, and then they're asked to remember what proportion of the splatter was a particular color. To get good at it, they'd have to get good at accurately perceiving and remembering features of entire distributions. Obvious high propensity for transfer to mentally visualizing distributions, and hopefully this would transfer to receiving and retaining accurate assessments of distributions we experience in our lives and possibly to forecasting calibration as well.
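For anyone curious, a bare-bones terminal version of that loop might look something like this (a sketch of the game mechanic only, nothing like the websim build):

```python
import random
import time

colors = ["red", "blue", "green"]
splatter = [random.choice(colors) for _ in range(40)]

print(" ".join(splatter))      # show the "splatter"
time.sleep(3)                  # brief study period
print("\n" * 50)               # crude hide: scroll it off screen

target = random.choice(colors)
truth = splatter.count(target) / len(splatter)
guess = float(input(f"What proportion was {target}? (0 to 1) "))
print(f"true proportion: {truth:.2f}, your error: {abs(guess - truth):.2f}")
```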
Despite the simplicity of such a training tool, I didn't think I was ever going to get around to building it. Unassisted, it probably would have taken me at least 2 days (there are a lot of other higher priority things I could do in 2 days). Then I discovered websim (video of nice things people have made in websim) via this impressive series of experimental UI demos in "malleable computing", so I decided to test it on this, and so it took about 30 seconds.
The initial prompt was one gestural paragraph. Claude made a huge bunch of reasonable judgements and it was basically all there immediately.
I then spent about 3 hours doing design work as websim dutifully and relatively instantly implemented every one of my recommended changes. Websim is really good at iterating towards a correct design! The UI encourages us to try just doing everything via the LLM instead of going in and doing things manually and it really leads you to realize how fast that is when the LLM is well directed. It's so effective that there was a point where I started getting a weird feeling in my chest and wanted to step outside and yell a bit. It feels like a 15x productivity boost, and I guess I'm also kind of Feeling The AGI about it. Claude 3.5 is iirc only moderately better at coding than Claude 3, but sometimes moderate capability improvements have discontinuous impacts.
Here's the current version for now: Visual Calibration Trainer
Unfortunately webs |
519f1dc7-a51d-45d0-9f05-cf190e71d798 | trentmkelly/LessWrong-43k | LessWrong | Personal Philosophy
This is a rough outline of my philosophical framework, which gives me a context for all of my experiences and knowledge.
Certain Knowledge
We cannot know much for certain. The one exception is the existence of your subjective experience of reality in this very moment of time. As René Descartes put it, "Cogito, ergo sum."
Probabilistic Knowledge
From there I believe we make probabilistic assertions about the world based on evidence. For example a major question as a conscious entity is where did you come from?
There is an incredible number of possibilities an imaginative person could conjure, and doubtless infinitely more of which no person could ever conceive.
Materialism
However there is one explanation for our existence which we have an incredible amount of evidence for. This is the materialistic explanation for our existence.
Your conscious experience has access to an external environment. We observe this environment. We seem to be inexorably tied to this environment. There is an incredible amount of evidence, based on millions of man-hours of investigation, that says my conscious experience is emergent from the material world.
Mortality
An implication of this is that it is incredibly likely that when we die, there is no afterlife: this subjective experience will come to an end just as it came to a beginning. There will come a time in which the material structure which gives rise to your experience fails. It will lose its available energy, the last few connections will fire, and you will be unaware. Your brain will decompose and everything that makes you up will be gone.
It is still a potentiality that some greater god or alien species will take pity on you. They may find value in your conscious experience and will record it; bringing you into an afterlife alien or divine. However we have very little valid evidence supporting such a possibility. In a practical sense I would be more surprised to die and wake up in an afterlife than to walk in the gas |
f159c01a-834f-4b7e-91e4-eeb41107cb20 | trentmkelly/LessWrong-43k | LessWrong | The Overton Window widens: Examples of AI risk in the media
I sometimes talk to people who are nervous about expressing concerns that AI might overpower humanity. It’s a weird belief, and it might look too strange to talk about it publicly, and people might not take us seriously.
How weird is it, though? Some observations (see Appendix for details):
* There are articles about AI risk in the NYT, CNBC, TIME, and several other mainstream news outlets. Some of these articles interview experts in the AI safety community, explicitly mention human extinction & other catastrophic risks, and call for government regulation.
* Famous People Who My Mom Has Heard Of™ have made public statements about AI risk. Examples include Bill Gates, Elon Musk, and Stephen Hawking.
* The leaders of major AI labs have said things like “[AI] is probably the greatest threat to the continued existence of humanity”. They are calling for caution, concerned about the rate of AI progress, openly acknowledge that we don’t understand how AI systems work, note that the dangers could be catastrophic, and openly call for government regulation.
Takeaway: We live in a world where mainstream news outlets, famous people, and the people who are literally leading AI companies are talking openly about AI x-risk.
I’m not saying that things are in great shape, or that these journalists/famous people/AI executives have things under control. I’m also not saying that all of this messaging has been high-quality or high-fidelity. I’m also not saying that there are never reputational concerns involved in talking about AI risk.
But next time you’re assessing how weird you might look when you openly communicate about AI x-risk, or how outside the Overton Window it might be, remember that some of your weird beliefs have been profiled by major news outlets. And remember that some of your concerns have been echoed by people like Bill Gates, Stephen Hawking, and the people leading companies that are literally trying to build AGI.
I’ll conclude with a somewhat more specul |
8acbaad3-a1b0-4ea4-8add-20dae9121416 | trentmkelly/LessWrong-43k | LessWrong | Harry Potter and the Methods of Rationality discussion thread, part 9
(The HPMOR discussion thread after this one is here.)
The previous thread is over the 500-comment threshold, so let's start a new Harry Potter and the Methods of Rationality discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it. The latest chapter as of 09/09/2011 is Ch. 77.
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight. The fanfiction.net author page is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an archive of Author's Notes.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
> You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
>
> If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13. |
030ffc40-8938-4803-9f1b-c55b64ffaf58 | trentmkelly/LessWrong-43k | LessWrong | Yoshua Bengio argues for tool-AI and to ban "executive-AI"
Yoshua Bengio wrote a blogpost yesterday in which he argues for developing "scientist AI", which seems in-structure very similar to historical Tool-AI proposals.
For the (IMO) best response to this kind of proposal see Gwern's: Why Tool AIs Want to Be Agent AIs.
Below I've copied the blogpost in full, since all of it seems pretty relevant.
> AI Scientists: Safe and Useful AI?
> Published 7 May 2023 by yoshuabengio
>
> There have recently been lots of discussions about the risks of AI, whether in the short term with existing methods or in the longer term with advances we can anticipate. I have been very vocal about the importance of accelerating regulation, both nationally and internationally, which I think could help us mitigate issues of discrimination, bias, fake news, disinformation, etc. Other anticipated negative outcomes like shocks to job markets require changes in the social safety net and education system. The use of AI in the military, especially with lethal autonomous weapons has been a big concern for many years and clearly requires international coordination. In this post however, I would like to share my thoughts regarding the more hotly debated question of long-term risks associated with AI systems which do not yet exist, where one imagines the possibility of AI systems behaving in a way that is dangerously misaligned with human rights or even loss of control of AI systems that could become threats to humanity. A key argument is that as soon as AI systems are given goals – to satisfy our needs – they may create subgoals that are not well-aligned with what we really want and could even become dangerous for humans.
>
> Main thesis: safe AI scientists
>
> The bottom line of the thesis presented here is that there may be a path to build immensely useful AI systems that completely avoid the issue of AI alignment, which I call AI scientists because they are modeled after ideal scientists and do not act autonomously in the real world, only focusing o |
3886baed-f1c5-4f22-8e76-30125ebcd729 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Seattle Sequences group: Mysterious Answers 4
Discussion article for the meetup : Seattle Sequences group: Mysterious Answers 4
WHEN: 23 March 2015 06:30:00PM (-0700)
WHERE: Paul G. Allen Center, 185 Stevens Way, Seattle, Washington 98195
Location is the UW CSE building, room 503. Paid parking is available on campus (enter via Steven's Way). Facebook event: https://www.facebook.com/events/966274253390417/
This is a normally-weekly meetup to discuss and work through the Sequences while getting to know other aspiring rationalists in the Seattle area. Each session's reading list is posted below, and in the relevant Facebook event. We usually discuss the sequence posts and relevant applied rationality techniques.
Note that this meetup has an unusual two-week gap since the last one, due to conflicts with other events and host availability. We will try to return to a regular schedule following this.
As the name suggests, this meetup will go over the fourth part of the "Mysterious Answers to Mysterious Questions" sequence. We previously covered the "Map and territory" sequence. All are welcome to join, though, even if they've read all of the sequences already or haven't yet caught up to us.
There are often snacks at the meetup, which typically runs until around 8:30 or 9. After that, many of us go get dinner on the Ave.
Reading list: Start here: http://lesswrong.com/lw/iu/mysterious_answers_to_mysterious_questions/
1.18 Mysterious Answers to Mysterious Questions
1.19 The Futility of Emergence
1.20 Say Not "Complexity"
1.21 Positive Bias: Look Into the Dark
1.22 My Wild and Reckless Youth
1.23 Failing to Learn from History
Discussion article for the meetup : Seattle Sequences group: Mysterious Answers 4 |
b0ae57b5-a86a-4379-bf85-31d6587e8280 | trentmkelly/LessWrong-43k | LessWrong | [META] Make poll results accessible to people who have not voted
As far as I can see, the only way to see the results of a poll is to vote in it. If for whatever reason I don't want to participate in a poll whose results I am interested in, I vote randomly. Of the ~10 other LWers I know, at least 2 do the same. For the sake of accurate results, please let people see poll results without having to vote (and ideally without having to log in).
Response to obvious argument against: You can restrict people from (voting after they have seen the results). |
e298d0e6-5656-4ee3-83a0-7ddb875c5a59 | trentmkelly/LessWrong-43k | LessWrong | Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
|
79cc31e1-ad7f-4313-a0ed-e1e476cdbe62 | trentmkelly/LessWrong-43k | LessWrong | 12/12/2017 Update: Creating Sequences
Over the past few weeks, we've been pushing updates that improve LW 2.0's Sequence functionality, as well as cleaning up some of the less well known sequences.
Those updates are now complete and ready for general use. Highlights include:
The Library
If you click on the "Recommended Reading" title on the front page, or on "The Library" in the main menu, you'll be taken to our new Library page. This includes sections for our Core Reading, Curated Sequences, and Community Sequences.
Core Reading is essential content that has stood the tests of time, that users are generally expected to have read.
Curated Sequences are additional high quality essays that the admins have decided to feature.
Community Sequences are where most new sequences will appear. (Currently sorted by "newest first", although we'll probably update the sorting mechanisms soon)
Creating New Sequences
Under the "Community Sequences" header, you'll see a "Create new sequence" button. This is deliberately somewhat hard-to-find, since we don't want new users to immediately start creating sequences willy nilly.
To create a sequence, you'll start by giving it a name, description, and images for the Banner and Thumbnail. It's recommended that the Banner and Thumbnail be the same picture (although perhaps cropped differently so that they look good at large and small scales).
Banner images must be at least 1600 pixels wide. Eventually we'll add some tools to help users find appropriate images. Meanwhile, if you need help finding an appropriate banner, you can ping us and we'll help you out. (In general, a good tip is to go to Google Image search, click the "tools" tab, and restrict results to images whose size is at least 2 MP, and whose usage rights are set to "noncommercial re-use".)
After clicking the "create" button, you'll be able to add and change the ordering of posts.
User Profile Sequences
Once you've created at least one sequence, your User Profile page will gain a Sequences section. This will li |
7722d7c9-52ab-434a-8e9d-8445d683bd3e | trentmkelly/LessWrong-43k | LessWrong | Lesson learned from HPMOR, only months after... (spoilers from beginning to end)
This is the lesson I have just recently gleaned from the 81st chapter of HPMOR, along with Harry's observations of the sense of doom he feels around Voldemort.
Chapter 81: "For whatever reason, then, most of the Wizengamot has never walked the path that leads to powerful wizardry; they do not seek out what is hidden. For them, there is no why. There is no explanation. There is no causality. The Boy-Who-Lived, who was already halfway into the magisterium of legend, has now been promoted all the way there; and it is a brute fact, simple and unexplained, that the Boy-Who-Lived frightens Dementors. Ten years earlier they were told that a one-year-old boy defeated the most terrible Dark Lord of their generation, perhaps the most evil Dark Lord ever to live; and they just accepted that too."
All the information I needed was there all along, but only now do I perceive the pattern this information indicates.
In retrospect, I believe it was obvious that Quirrell was Voldemort. But, at least in my own mind, the oft-repeated note of the sense of doom was filed under the same list of acausal phenomena as the Wizengamot filed Harry's Dementor-scaring.
What I ought to have done, what I was fully capable of doing (if hindsight bias is not plaguing me (not too much, anyway)), was thinking:
"Ah, a sense of doom which is felt mutually and only by Harry and Quirrell. Why should that be?"
And then, with still more evidence in the Stanford Prison Experiment Arc:
"Ah, I see, it all falls into place. Contact between Harry and Quirrell causes mutual destruction. Why should that be (the phrase which was missing from my mind in every instance of the manifestation of the sense of doom)? They must be connected somehow. Why, yes, actually we already know that they are. As Harry pointed out, he and Quirrell have similar minds; Quirrell is his superior in his very own way of thinking, very much as if Quirrell was a grown-up version of Harry. No, that doesn't sound quite right, I don't see Harry growin |
1fcf5d9b-ead1-4fb3-8ffc-842811571dc1 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Pittsburgh HPMoR Wrap Party
Discussion article for the meetup : Pittsburgh HPMoR Wrap Party
WHEN: 14 March 2015 12:00:00PM (-0500)
WHERE: Bagel Factory, 420 S Craig St, Pittsburgh, PA
I'll be wearing a black t-shirt that says "If I agreed with you, we'd both be wrong." I promise to be there until at least 15:00. Venue has food and coffee. See also https://www.facebook.com/events/824097430998639/.
Discussion article for the meetup : Pittsburgh HPMoR Wrap Party |
6dbe5472-a969-40dc-ba9c-b4710d0e465d | trentmkelly/LessWrong-43k | LessWrong | Covid 6/9/22: Nice
Things seem to have peaked in the Northeast once again. Nice.
Things are getting worse in the South as the summer hits. Not as nice, not that unexpected. At this point we can pattern match.
You know what else is nice? Not having to care about the pandemic. I do my best to care about it so you don’t (otherwise) have to, and hope to continue slowly dedicating less time and space to this each week. But for now, we continue.
You know what’s not so nice? A civilization that cares so little about an ongoing pandemic disease that its authorities refuse to authorize any spending, actively reallocate funds away from vaccine development, and otherwise do not take ordinary obvious steps to mitigate or solve the remaining problems. Also still technically having to mask up on the Subway and (for some reason) at my now go-to bakery.
You know what that’s still quite a lot nicer than? Being forced to care about the pandemic, and the destruction of massive amounts of value in prevention’s name for no meaningful gains. That sucked.
The general news is otherwise that we still don’t have any money for Covid, so Biden is shifting around what is left while it lasts, and the Moderna vaccine update works but we are unlikely to get it any time soon.
Executive Summary
1. Cases stabilize in Northeast, rising in South.
2. Moderna vaccine update is effective but FDA will stop it for a while.
3. Still no pandemic money.
4. BA.4 and BA.5 taking over but it shouldn’t change things much.
Let’s run the numbers.
The Numbers
Predictions
NOTE: Florida numbers are cut in half because they report every two weeks (other half will count next week). North Carolina reported way too many deaths to not be a backfill, so I subtracted 400.
Prediction from last week: 675,000 cases (+19%) and 2,400 deaths (+20%)
Results: 658k cases (+7%) and 2,220 deaths (+11%).
Prediction for next week: 700k cases (+7%) and 2,300 deaths (+4%).
Some aspects of CDC reporting are going down for 2 weeks for a system |
c761e4f0-2259-440e-a088-cf2e268cde07 | StampyAI/alignment-research-dataset/blogs | Blogs | Word Report #1
Word Report #1
--------------
Word Report will be a series of posts in which I document uncommon terms I'm using. Unless stated otherwise, they are of my own invention.
* **Topia**: (based on Utopia/Dystopia) a society in which a particular vision is fully realized, without specifying if that society is good or bad.
* **Ancapolite**: "Polite anarcho-capitalism"; a society within an anarcho-capitalist framework where social and contractual norms and expectations are maintained by mutual social good-will, pressure, and other existing cultural systems.
* **Seme**: a piece of meaning/semantics. Compared to the related word [sememe](https://en.wiktionary.org/wiki/sememe#English), a seme doesn't have to be smallest/atomic.
* "**&adj**": a suffix to mean "and adjacent". For example, "4chan&adj" means 4chan and adjacent websites/cultures, such as 2chan, 8chan, some reddits, weeb culture, the alt-right, etc.
* **Postⁿmodern**: a cultural movement designated by the term "modern" prefixed by "post-" *n* times. post⁰modernism is modernism, post¹modernism is post-modernism, post²modernism is post-post-modernism, post-even-modernism is postⁿmodernism where *n* is an even number, etc.
* I've been using the word "be" to indicate being while leaving tense explicitly unspecified. "I be here" does not specify whether I am, was, or will be here. Can be interpreted as related to the meme phrase "It do be like that".
* Similarly, I've been using the form "I have cat" to indicate my ownership of either a singular cat or plurality of cats, explicitly leaving number unspecified (not "I have a cat" nor "I have cats"). |
2c303a04-36f6-4a05-ac88-7be2a8534591 | trentmkelly/LessWrong-43k | LessWrong | Classical vs MWI Probability Nomenclature
"I estimate 5% odds of X happening" can mean at least two things:
* I have about 1-in-20 confidence that all future timelines from this point contain X, and about 19-in-20 confidence that none do.
* I estimate about 1-in-20 future timelines contain X, and 19-in-20 future timelines don't.
Looked at this way, the usual way of quantifying probability seems to be a lot like quantifying area - the first bullet-point by having a 1x20 rectangle, the second by having a 20x1 one. (This also seems valid for having, say, 50% confidence that 1-in-10 future timelines contain X.) It seems like it might be worth having an easy and understandable way to differentiate between these different forms of '5% odds', but any easy way I've been able to think of is barely understandable, and vice versa. Are there any existing standard ways to do this that I'm unaware of? If not, does anyone reading this have any decent answers?
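One rough way to make the "area" picture concrete (a sketch; I'm assuming the unstated half of the mixed case puts its confidence on zero branches):

```python
# Represent a claim as (confidence, branch_fraction) pairs; the marginal
# probability of X -- the total "area" -- can be identical across shapes.
def marginal(mixture):
    """mixture: (confidence, branch_fraction) pairs; confidences sum to 1."""
    return sum(confidence * fraction for confidence, fraction in mixture)

print(marginal([(0.05, 1.0), (0.95, 0.0)]))   # "1x20 rectangle": 0.05
print(marginal([(1.00, 0.05)]))               # "20x1 rectangle": 0.05
print(marginal([(0.50, 0.10), (0.50, 0.0)]))  # mixed case: also 0.05
```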
I'm not opposed to coming up with a new word for personal use to help get in the habit of thinking in certain ways; such as bei'e in Lojban to remind myself to think of probability logarithmically. I don't mind doing the same with a word meaning 'such-and-such a fraction of future MWI branches', if that's the best solution, or even just a useful tool; I'd just like to know what the full range of useful approaches really are, first, and any potential loopholes therein or drawbacks thereof. |
491820c6-3925-4275-9402-c20c9df0331a | trentmkelly/LessWrong-43k | LessWrong | Contra Chord Simplification
Tomorrow, Lily and I will be leading a Kids Contra Jam at NEFFA (2pm in the Sudbury room!). We'll be playing off of Lily's tune list, but someone was asking about chords. I decided to have a go at writing out the simplest acceptable chords for each of the tunes we're planning. Each letter represents two downbeats:
All the Rage
A: 𝄆 E E A B 𝄇 x4
B: 𝄆 A A B B 𝄇 x4
Lisnagun
𝄆 G G C D 𝄇 x8
Devil's Dream
𝄆 D D A E 𝄇 x8
Reign of Love
A: 𝄆 Em Em C D 𝄇 x4
B: 𝄆 Em Em C C
D D C C 𝄇
June Apple
A: 𝄆 A A G G
A A G D 𝄇
B: 𝄆 A A G D 𝄇 x4
Dancing Bear
𝄆 Em Em C C
D D B B 𝄇 x4
can play B7 for B if that's easier
The Wren
A: 𝄆 Em C D D 𝄇 x4
B: 𝄆 C C D D 𝄇 x4
Haaplevese
A: 𝄆 D D G A 𝄇 x4
B: 𝄆 D G A A 𝄇 x4
Trip to Moscow
𝄆 Am Am C C
G G D D 𝄇 x4
Road to Boston
A: 𝄆 D D G A 𝄇 x4
B: 𝄆 D D G G
D D A A 𝄇 x4
Coleman's March
A: 𝄆 D G D A
D G A A 𝄇
B: 𝄆 D D D D
G G A A 𝄇
Sandy Boys
A: 𝄆 A A A G 𝄇 x4
B: 𝄆 A A A A
G G G G 𝄇
Angeline the Baker
𝄆 D D D G
D D D A 𝄇 x4
Liza Jane
A: 𝄆 A A A D
A A E A 𝄇
B: 𝄆 A A A D
E E E A 𝄇
Mairi's Wedding
𝄆 G G C D 𝄇 x8
Cluck Old Hen
A: 𝄆 A A A G 𝄇 x8
Cripple Creek
A: 𝄆 A A D E 𝄇 x8
Bile 'Em Cabbage Down
𝄆 D G D A
D G A A 𝄇
Four of these are just "𝄆 I I IV V 𝄇": a good first guess for a major tune!
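For anyone new to the Roman-numeral shorthand, here's a quick sketch of how it expands in a major key (the scale table is just the three keys used above):

```python
# Expand major-key Roman numerals (I/IV/V) into chord names.
MAJOR_SCALES = {"G": ["G", "A", "B", "C", "D", "E", "F#"],
                "D": ["D", "E", "F#", "G", "A", "B", "C#"],
                "A": ["A", "B", "C#", "D", "E", "F#", "G#"]}

def chord(key, numeral):
    return MAJOR_SCALES[key][{"I": 0, "IV": 3, "V": 4}[numeral]]

print([chord("G", n) for n in ["I", "I", "IV", "V"]])  # ['G', 'G', 'C', 'D']
```

In G that expands to the Lisnagun / Mairi's Wedding progression above.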
I'll have these there in person as well on handouts.
Note that normally you'd be doing different chords each time through, and they'd be more complex (more).
Comment via: facebook, mastodon |
28da981a-03dc-4494-bf79-f2139ed89d61 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW main on July 18th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* [Atlanta] MIRIxAtlanta: 19 July 2014: 19 July 2014 06:00PM
* Frankfurt: Goal Factoring: 20 July 2014 02:00PM
* Houston, TX: 19 July 2014 12:16AM
* Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London: 18 July 2014 07:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Brussels - August (topic TBD): 09 August 2014 01:00PM
* Canberra: Intro to Anthropic Reasoning: 25 July 2014 06:00PM
* Sydney Meetup - July: 23 July 2014 07:00PM
* Washington DC: Short Talks: 20 July 2014 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before |
939c2ed9-9449-4851-8ab4-254a0aa8d4f3 | trentmkelly/LessWrong-43k | LessWrong | Towards a New Decision Theory
It is commonly acknowledged here that current decision theories have deficiencies that show up in the form of various paradoxes. Since there seems to be little hope that Eliezer will publish his Timeless Decision Theory any time soon, I decided to try to synthesize some of the ideas discussed in this forum, along with a few of my own, into a coherent alternative that is hopefully not so paradox-prone.
I'll start with a way of framing the question. Put yourself in the place of an AI, or more specifically, the decision algorithm of an AI. You have access to your own source code S, plus a bit string X representing all of your memories and sensory data. You have to choose an output string Y. That’s the decision. The question is, how? (The answer isn't “Run S,” because what we want to know is what S should be in the first place.)
Let’s proceed by asking the question, “What are the consequences of S, on input X, returning Y as the output, instead of Z?” To begin with, we'll consider just the consequences of that choice in the realm of abstract computations (i.e. computations considered as mathematical objects rather than as implemented in physical systems). The most immediate consequence is that any program that calls S as a subroutine with X as input will receive Y as output instead of Z. What happens next is a bit harder to tell, but supposing that you know something about a program P that calls S as a subroutine, you can further deduce the effects of choosing Y versus Z by tracing the difference between the two choices in P’s subsequent execution. We could call these the computational consequences of Y. If you have preferences about the execution of a set of programs, some of which call S as a subroutine, then you can satisfy your preferences directly by choosing the output of S so that those programs will run the way you most prefer.
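As a toy sketch of this picture (an illustration only, with a stand-in caller and utility function; not a full formalism):

```python
# S picks the output Y whose computational consequences -- the results of
# the programs that call S -- score best under `utility`.
def make_S(possible_outputs, callers, utility):
    def S(X):
        def consequences(Y):
            stub = lambda _x: Y  # stand-in for "S returns Y on this input"
            return tuple(P(stub, X) for P in callers)
        return max(possible_outputs, key=lambda Y: utility(consequences(Y)))
    return S

# One caller doubles whatever S outputs; preferring larger results,
# S returns the largest available output.
caller = lambda s, x: 2 * s(x)
S = make_S(possible_outputs=[0, 1, 2], callers=[caller], utility=sum)
print(S("X"))  # -> 2
```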
A more general class of consequences might be called logical consequences. Consider a program P’ that doesn’t call S, but a differ |
1c963f98-31e5-4b85-8b68-ab7cf80cc1b9 | trentmkelly/LessWrong-43k | LessWrong | Common Knowledge is a Circle Game for Toddlers
I wanted to jot down a particular instance of a common interaction I have with my daughter.
My child, wife, and I are sitting around the coffee table in the living room discussing the plan to take the car to the mechanic the next day. We describe this as going on an adventure to the child as a lingering holdover from isolation during covid, which seems to be fine because she enjoys car rides. What is different from the usual this time is that I am coming; normally car adventures are my wife driving, because she runs most of the errands. The following (simplified) dialogue takes place:
Child: "Dad, can you come in the car tomorrow?"
Me: "I am coming in the car tomorrow."
Child: "Really?! You are coming?"
Me: "Yes."
Child: "Mom! Did you know Dad is coming in the car tomorrow?"
Mom: "Yes! He is coming with us tomorrow."
Child: "Dad! Mom knows you are coming tomorrow! You are both coming!"
Me: "Yes!"
Keep in mind we have all been present for the whole conversation. We are sitting around a coffee table. None of us are more than three feet from each other.
It struck me that what my daughter was doing was trying to explicitly track what her mom and I knew. I expect this falls somewhere under the theory-of-mind stage of development, but what stuck out to me was the procedure she undertook, which looked something like three loops:
1. One loop with me, of establishing the fact and confirming the fact.
2. A lateral loop to mom, of establishing that mom knows the fact and confirming that mom knows the fact.
3. A lateral loop back to me, of establishing that I know that mom knows, and confirming that I know that mom knows.
The looping strategy reminded me of Circle Games, and only after this did it occur to me that the object of the game was common knowledge.
My child has been playing these kinds of games for a while now, on reflection; but recently they have grown in frequency and elaboration enough that I began to wonder why she seemed to be spending more time |
a58f7b39-d4f4-413d-86b3-7f25c84fb4c4 | StampyAI/alignment-research-dataset/arbital | Arbital | Logarithmic identities
Recall that [$\log_b(n)$](https://arbital.com/p/3nd) is defined to be the (possibly fractional) number of times that you have to multiply 1 by $b$ to get $n.$ Logarithm functions satisfy the following properties, for any base $b$:
- [Inversion of exponentials](https://arbital.com/p/): $b^{\log_b(n)} = \log_b(b^n) = n.$
- [Log of 1 is 0](https://arbital.com/p/): $\log_b(1) = 0$
- [Log of the base is 1](https://arbital.com/p/): $\log_b(b) = 1$
- [Multiplication is addition in logspace](https://arbital.com/p/): $\log_b(x\cdot y) = \log_b(x) + \log_b(y).$
- [Exponentiation is multiplication in logspace](https://arbital.com/p/): $\log_b(x^n) = n\log_b(x).$
- [Symmetry across log exponents](https://arbital.com/p/): $x^{\log_b(y)} = y^{\log_b(x)}.$
- [Change of base](https://arbital.com/p/): $\log_a(n) = \frac{\log_b(n)}{\log_b(a)}$
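For example, the multiplication rule follows directly from the exponent law $b^m \cdot b^n = b^{m+n}$: writing $x = b^{\log_b(x)}$ and $y = b^{\log_b(y)}$ gives $x \cdot y = b^{\log_b(x) + \log_b(y)},$ so taking $\log_b$ of both sides yields $\log_b(x \cdot y) = \log_b(x) + \log_b(y).$ |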
5d5682dc-e491-4e6e-82e6-3ae94346cefd | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Atlanta, Brussels, Fort Collins, Ohio
There are upcoming irregularly scheduled Less Wrong meetups in:
* Brussels meetup: 17 March 2012 11:15AM
* Atlanta: 17 March 2012 05:30PM
* Fort Collins Meetup Saturday 17th: 17 March 2012 05:00PM
* [Ohio/Washington DC] Interest in Reason Rally meetup?: 24 March 2012 04:14PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 24 March 2012 01:30PM
* Ohio Monthly: 17 March 2012 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: Atlanta, Chicago, Helsinki, London, Marin CA, Pittsburgh, Salt Lake City, Southern California (Los Angeles/Orange County area), St. Louis, Vancouver, Washington, DC.
If your meetup has a mailing lis |
b8e8b8d3-aa42-4386-9f15-07393ba2e92f | trentmkelly/LessWrong-43k | LessWrong | Abstract concepts and metalingual definition: Does ChatGPT understand justice and charity?
Our conceptual worlds are filled with abstract concepts, such as truth, love, beauty, and justice. Such concepts cannot be defined in physical terms. You can’t see them, taste them, hear them, touch them, or smell them. And yet for human society they are as real as sticks and stones, thunder and lightning, a horse, a sparrow, cherry blossoms, or Mt. Fuji.
We can learn about abstractions through stories that exhibit them. Rene Girard’s concepts of mimetic desire and of sacrifice are abstract. They characterize patterns of action that people take in their interactions with one another. Whether he realized it or not – probably not – Steven Spielberg showed us those concepts in Jaws. And I demonstrated that ChatGPT was able to recognize them, to interpret the film as exhibiting them: Conversing with ChatGPT about Jaws, Mimetic Desire, and Sacrifice.
* * * * *
I begin this post by examining ChatGPT's ability to work with the concept of justice. First I ask it to define justice, and then say a little about how Plato treated it in The Republic. Then we see how Chat deals with justice in stories, creating stories about justice, but also recognizing justice, or its lack, in stories. I then introduce Chomsky's well-known, albeit somewhat obscure, distinction between competence and performance. Next, another example, charity, which I use to set up a discussion of semantic or cognitive networks, a formalism from the now 'classic' symbolic AI and computational linguistics from the 1960s, 70s, and 80s. Would that notation be suitable for characterizing some (non-trivial) aspect of ChatGPT's behavioral competence? I conclude by tempting the reader – that's you – to consider the implications, suggesting that Neal Stephenson's The Diamond Age: Or, A Young Lady's Illustrated Primer might be a useful source of insight.
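To give a flavor of the notation in question, here is a minimal semantic-network fragment of the classic symbolic sort (my own illustration; nodes are concepts, labeled edges are relations):

```python
# A tiny semantic network for "charity": concepts linked by labeled relations.
edges = [
    ("charity", "is-a", "virtue"),
    ("charity", "involves", "giving"),
    ("giving", "agent", "donor"),
    ("giving", "beneficiary", "recipient"),
]

def related(concept):
    return [(relation, target) for source, relation, target in edges
            if source == concept]

print(related("charity"))  # [('is-a', 'virtue'), ('involves', 'giving')]
```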
What is justice?
Let’s start by asking ChatGPT to define justice, which it does. I then asked about Plato’s treatment of justice in The Republic.
> What is justice?
|
cc17ddf0-a2bd-47ca-8250-871aeced7134 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | 2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043
Submission for the [Open Philanthropy AI Worldviews Contest.](https://forum.effectivealtruism.org/posts/NZz3Das7jFdCBN9zH/announcing-the-open-philanthropy-ai-worldviews-contest)
Downloadable PDF [here](https://github.com/srhoades10/aiworldviews/raw/main/agi_worldview_prize_sdr.pdf)
**Summary**
The arrival of artificial general intelligence (AGI) could transform society, commerce, and nations more than the technological revolutions of the past. Postulating when AGI arrives, what a world with AGI may look like, and the progress of AI *en route* to AGI is worthwhile. This Work adopts a broad-scoped view of current developments and opportunities in AI, and places it amidst societal and economic forces present today and expected in the near future. As the appearance and form of AGI is hard to predict, the basis to form the odds of AGI within 20 years is instead derived from its impact and outcomes. The qualifying outcomes of AGI for this Work include conducting most tasks cost-competitively to livable wages in developed nations, performing innovative and complex high-skill work such as scientific research, yielding a durable 6% Gross World Product growth rate, or inducing massive shifts in labor distribution on par with the Agricultural or Industrial revolutions. A survey is first taken of prior works evaluating recent technological developments and posing remaining capabilities necessary to achieve AGI. From these works comes a baseline estimate of 24.8% for AGI by 2043, which is then balanced against arguments for or against these timelines based on what AI can do today, a representative array of tasks that AI cannot do but may qualify AGI if successful, as well as observations of phenomena not considered in the more technical prior works, namely those of labor, incentives, and state actors. Likely and impactful tailwinds to AGI timelines are developing new paradigms of AI, on par with reinforcement learning, capable of wholly distinct tasks from those done by AI today. Less likely but impactful tailwinds include the ability for AIs to physically manipulate a diversity of objects, and development of numerous new "narrow" AIs to collectively perform a diversity of tasks. Likely and impactful headwinds to AGI timelines are the continuation of outsourcing to abundant excess labor globally and the long economic growth trajectories of developing nations. Less likely but impactful headwinds include large economic recessions, globalized secular stagnation, and insufficient incentive to automate fading yet ubiquitous technologies and services. Applying subjective and weighted probabilities across a myriad of scenarios updates the baseline 24.8% odds to propose low, median, and high odds of 6.98%, 13.86%, and 20.67% for AGI by 2043.
**I. Definitions, technical states & Base Rates**
This section defines Artificial General Intelligence (AGI) and derives the Base Rate of 24.8% for AGI development by 2043 (Pr(AGI2043)). The remainder of this Work will then provide cases for an adjustment to this Base Rate. Readers not interested in definitions or rationales may skip to Section II.
**I.A Defining AGI**
The definition of AGI for this Work considers capabilities of, and outcomes after, the development of artificial intelligence (AI) technologies. These capabilities include one or more computer programs to perform almost any human task competitive with livable wages in developed countries. This Work considers AGI as either a singular, task-achieving AI system which also achieves "intelligence", *or* a suite of "narrow" AIs (nAIs). Examples of nAIs today include image recognition or language models to produce human-like text. Additional capabilities required for AGI in this Work include AI systems operating businesses or performing human-created endeavors such as scientific research and development. While a collection of multiple nAIs themselves would not necessarily constitute a "general" or "intelligent" AI system, nAIs capable of complex tasks with agency, strategy, or decision-making in a human-centric world of enterprise and governance entail a similar degree of technological advancement as the more abstract notion of AGI.
Additional outcome-based definitions of AGI include both profound economic transformations and massive job displacements. This Work sets a bar close to the Industrial Revolution, which bent linear Gross World Product (GWP) growth to exponential. An increase in the annual GWP growth rate from 4%, [the average rate since 1900](http://holtz.org/Library/Social%20Science/Economics/Estimating%20World%20GDP%20by%20DeLong/Estimating%20World%20GDP.htm), to a persistent ~6% qualifies as an AGI outcome in this Work. Persistence is defined as a durable rate outside of depressions and temporary hypergrowth periods as observed around the Great Depression and Recession.
*Relative GWP growth comparison of pre- and post-Industrial Revolution (0.5% and 4%, left) and required GWP growth to qualify AGI (6%, right) over an arbitrary 120-year timeline.*
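For scale, a quick sketch of the compounding involved (my arithmetic, using the 4% and 6% rates above):

```python
# Cumulative GWP growth multiples at 4% vs 6% annual rates.
for years in (20, 120):
    print(years, round(1.04 ** years, 1), round(1.06 ** years, 1))
# 20 years:  ~2.2x at 4%   vs ~3.2x at 6%
# 120 years: ~110.7x at 4% vs ~1088.2x at 6%
```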
A 50% increase in GWP growth within 20 years is substantial yet comparable to the quadrupling from 1800 to 1900. This seemingly extreme growth rate is not infeasible even today, as the growth rate neared 6% multiple times in the past 75 years. Labor outcomes of AGI are matched to the shifts before and after the Industrial Revolution, when agriculture *decreased* from greater than 50% of the labor force in leading nations around 1750 to less than 5% by 2000. The percentage of labor in services rapidly *increased* to over 50% in developed countries since the Industrial Revolution, but only after agriculture was first partially displaced by manufacturing. Thus in this Work, only a 40% or greater *decrease* in labor in any large sector(s), including services, qualifies as an AGI outcome.
GWP may not be able to capture the impact of AGI. Other outcomes, such as a drastic curtailment of humanity or human extinction due to AGI, could qualify. This Work does not factor in these outcomes, as the baseline estimates of these events occurring by 2043 due to artificial systems are negligible compared to the probabilities of anthropogenic wars or drastically reduced living conditions on this planet. However, this Work also considers a durable GWP growth rate of 0% or less to be a qualifying outcome of AGI. This Work limits outcomes to economic indicators, as fewer consensus, accessible quantitative measures exist for culture, quality of life, or geopolitics, though AGI would surely impact these domains as well.
**I.B Review of prior work**
Prior works on this topic deeply cover AI development to date, technical and financial requirements to develop AGI, and how prior eras of technological progress can predict AI timelines. This section amalgamates these works to help form a Base Rate for Pr(AGI2043).
[Carlsmith](https://arxiv.org/pdf/2206.13353.pdf) broadly writes about existential risks of AGI to humanity, but includes assessments of algorithmic, compute, and financial feasibilities to develop AGI and describes capabilities of an intelligent artificial system. The probabilities of technical feasibility and of an environment with incentives to build such a system by 2070 are 65% and 80%, respectively. Capabilities of AGI include "advanced capabilities": conduct scientific research or engage in political strategy, "agency": create and execute tasks, and "strategic awareness": formulate plans and update them amidst real-world environments or test what-if scenarios. This work introduces concepts of intelligence as emergent properties of nAIs, demonstrations of awareness in AI systems today, and the response of society to "warning shots" of near-AGI systems around notions of safety.
[Cotra](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) focuses narrowly on when a transformational AI (TAI) system could be built, factoring advances in computing power and costs, the data and computing requirements of AI today, and estimates of the "compute" embodied in human intelligence. TAI is defined as one or more AIs that can perform most jobs cheaper than humans, universally recognize and classify objects, create narratives, teach themselves new skills, perform economically valuable work, and engage in scientific research. The probabilities of TAI by 2030 and 2040 are 15% and 50%, respectively. This work introduces concepts of disparities in data requirements between artificial systems and humans, algorithmic efficiency, bottlenecks of TAI by compute power and costs, and achieving TAI through generalizing existing AI algorithms.
[Karnofsky](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/) assesses AI’s capabilities today and projects the advancements towards TAI, defined similarly to Cotra in its capabilities and impacts to labor. The probability of TAI by 2036 is 10%. TAI’s capabilities include conducting scientific research and human-level performance in both physical and intellectual tasks competitive with at least 50% of total market wages. An added qualifying outcome of TAI is producing economic growth at the scale of the Agricultural or Industrial Revolutions. This work introduces concepts of TAI through generalizing existing AI algorithms and de-anthropocentric AI, or TAI that does not emulate human brains and cognition.
[Davidson](https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines) models probabilities of AGI development with little attention to technical AI progress to date, but rather by AGI's estimated difficulty, research effort, computing advancements, success and failure rates, and spending on AGI development. The probability range of AGI by 2036 is 1-18%, with a median at 8%. Davidson provides a median adjustment down to 5.7%, conditional on AGI being impossible by 2023. AGI's capabilities are heavily weighted towards cost-competitive, human-level performance on cognitive tasks. Davidson's model of AGI development is most influenced by the estimated difficulty of the problem, and secondarily by the growth of research(ers), spend, and compute. This work introduces concepts of expectations based on progress in other fields, AGI as a product of sequential advancements, and diminishing returns on research and investment observed in other scientific industries.
**I.C Arriving at a Base Rate of Pr(AGI2043) for this Work**
The Base Rate of Pr(AGI2043) is a weighted sum of the referenced works above, derived from a thematic overlap of concepts and relative strengths and weaknesses of their stances.
The salient probabilities from Carlsmith for this Work are technical feasibility and incentives to create AGI by 2070, which combine to 65% \* 80% = 52%. Carlsmith is unique among the referenced works in considering incentives, deployment to real-world applications, and societal response to AI development, all of which decrease odds of AGI. These concepts are viewed favorably here, and while anecdotes of AI's accomplishments and their expected extension towards systems with agency, strategy, and planning are viewed skeptically, the technical assessments are viewed favorably too. The odds of AGI are only defined for 2070, and it is not stated how the odds change over time. Therefore, the Pr(AGI2043), given the time of authorship in 2022, is (2043-2022)/(2070-2022)\*52%=21.9%.
The Pr(AGI2043) from Cotra is (2043-2022)/(2040-2022)\*50%=58.3%. The technical assessments in this work are viewed favorably; however, the expectation that TAI will emerge from continual advancement of existing AIs today, and as a byproduct of increased compute power at decreased cost, is too large an assumption and too narrow a view to be given high weight in the updated Base Rates. Karnofsky shares expectations on TAI from improvement on existing AIs, but raises the bar to compare TAI's outcomes to prior technological revolutions. This raised expectation may partly explain the decreased odds of TAI by 2036. The Pr(AGI2043) from Karnofsky is (2043-2016)/(2036-2016)\*10%=13.5%. Both these works will be given equal weight, as they differ mainly on the qualifications of TAI.
Davidson offers less technical but important perspectives on the broader consideration of AGI development. These perspectives are viewed here with mixed results. The concepts of sequential models, research(er) effort, technological progress, and non-linear changes in probabilities over time are useful. Nonetheless, the model used in this work is strongly influenced by a single variable, namely the prior expectation of AGI. Varying the weights of other variables does not appreciably impact the overall probability, and thus the utility of this model is viewed skeptically. Additionally, the naïve model only differs from the informed model by one percentage point. The Pr(AGI2043) from Davidson is (2043-2020)/(2036-2020)\*5.7%=8.2%.
These odds are given subjective weights of 0.4, 0.2, 0.2, and 0.2 from Carlsmith, Cotra, Karnofsky, and Davidson respectively, yielding the estimate:
(21.9% \* 0.4) + (58.3% \* 0.2) + (13.5% \* 0.2) + (8.2% \* 0.2) = **24.8%**
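The same arithmetic, as a minimal sketch:

```python
# Weighted average of the four time-scaled estimates quoted above.
weights   = [0.4, 0.2, 0.2, 0.2]  # Carlsmith, Cotra, Karnofsky, Davidson
estimates = [0.219, 0.583, 0.135, 0.082]
base_rate = sum(w * p for w, p in zip(weights, estimates))
print(f"{base_rate:.1%}")  # -> 24.8%
```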
**II. Technical assessment and critique on AI today**
In this section and throughout this Work, I will ground arguments and opportunities in AI development across mundane and micro tasks, such as cooking and haircuts, up to macro and abstract sectors of healthcare, automobiles, construction, and business enterprise. Examples in these domains will highlight what AI can and cannot do today and opportunities for new AI developments, which will then be translated to subjective odds of developing such capabilities and integrated into Pr(AGI2043).
**II.A Defining Tasks and limits of data, generalization, and the unobservable**
Most AI accomplishments to date are categorically modeling or learning[[1]](#_ftn1). Notable modeling tasks include processing and classifying images and videos, reading and rendering text, and listening and responding to auditory signals. Learning-based tasks include playing real or virtual games. In all these tasks, AI operates in data-rich domains, either of the senses or in simulated environments with effectively infinite artificial data and outcomes of a sequence of actions. Cotra notes these AIs require more than two orders of magnitude more data than humans to learn tasks. The requirement for either a super-abundance of data or high outcome availability is one limitation to nAIs generalizing to other tasks. In addition, tasks operating within rules, such as playing chess, depend on readily available and simple rules, which do not apply to many real-world tasks.
I propose a general formula for an AI to accomplish most human tasks:
*Task = function(Rules, Domain Knowledge, Physical Manipulation, Data, Outcomes)*
For a given task, the relative weights of these factors vary considerably, and tasks do not require all factors. In addition, many tasks can be accomplished in multiple ways, for example through highly curated rules and domain knowledge or by brute force with abundant data. Tasks involving the senses, for example image classification, perform almost purely through data, with a non-zero influence of outcomes (i.e. ground-truth labels of the image), and no requirement for physical manipulation. Learning-based tasks, such as playing chess, could either require explicit knowledge of rules or implicit knowledge from the unobserved actions within a preponderance of data. Regardless, all learning tasks require outcomes. While rules can be safely ignored in the case of text generation, where a corpus of text is sufficient to build a model of "real" words, their relative orderings, and grammar, the unobservable information in other domains may be missing because actions either violated rules (examples being laws of legal systems or of physics) or resulted in failure. These unobservable failures and rule violations, even in data-rich domains, imply that for AIs to accomplish many unsolved real-world tasks, they must contain explicitly defined rules, "learn" fundamental rules, e.g. laws of physics, or codify domain knowledge in anthropogenic endeavors.
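One illustrative rendering of this formula (the profiles below are my own stand-in numbers, not measured quantities):

```python
# Each task weights the five factors differently; a task is most tractable
# for AI today when its heavily-weighted factors are data and outcomes.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    rules: float
    domain_knowledge: float
    physical_manipulation: float
    data: float
    outcomes: float

# Image classification: almost purely data, plus outcome labels.
image_classification = TaskProfile(0.0, 0.0, 0.0, 0.9, 0.1)
# Haircutting: physical manipulation and domain knowledge dominate.
haircutting = TaskProfile(0.1, 0.3, 0.5, 0.05, 0.05)
```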
**II.B Path dependency, extensibility, and progress**
At their core, whether as statistical models or agent/environment-based learners, most performant AIs today are variations of neural networks, with their architectures differing in complexity and assembly of the network's components. The path to neural networks' success today originated in the 1960s, with the most notable achievements in the past 20 years. These achievements are largely improvements in hardware and algorithmic efficiency, and less from fundamental changes of the algorithms themselves. The path dependence of neural networks to AI today evokes a broader question: do they represent the global maximum among algorithms, or is there a yet-undiscovered category of AI that better incorporates rules and domain knowledge, or (limited) data, and (unobservable) outcomes? Aviation, computation, and bioengineering are prime examples where successive technologies required transformative innovations to climb out of their local maxima. Modern aviation's journey began with fixed-wing gliders and rotorcraft, followed by piston engines, then jet engines and gas turbines. In computer programming, the term "extensibility" is used to characterize the ability of code or software to solve problems beyond its initial intended use. Piston engines could not extend to the demands of high-performance aircraft or long-distance air travel made possible by gas turbines. This notion is readily apparent in biology as well, where *E. coli* is engineered to produce many chemicals it does not natively produce, but is not sufficiently extensible to produce all chemicals used in the synthetic biology and biopharmaceutical industries. While the limits of *E. coli* are not yet reached, other organisms are increasingly under study to solve broader challenges in these industries. Likewise, AI has not yet hit the limits of neural networks and the modeling and learning algorithms they power. But whether as new innovations, seen with engines, or stacking of complementary technologies, seen with semiconductors and fabrication, the history of industry and progress would predict that additional nAIs, or more generalizable AI, will likely require magnitudes of change to extend to the breadth of tasks required for AGI.
An important distinction between aviation, computing, and AI are the available measures of advancement. For aviation, the weight to power ratio is a primary indicator of progress, and for semiconductors, the number of transistors. In deep learning, the number of model parameters is an analogous metric, where more is generally better, though this measure does not apply as neatly to learning AIs. AI models may be evaluated by their accuracy or performance relative to humans, which to date are only established for narrow tasks such as image classification. While broad consensus performance metrics of a technology or industry are not mandatory for progress, the stubbornness of this challenge for the field of AI may produce drag on its advancement to AGI.
**II.C AI in select examples**
Cooking AI is achievable through physical manipulation of objects and either codified knowledge, including agreeable mixtures of flavors, cooking durations, and fundamental rules of chemistry, or by data-intensive associations to outcomes. Outcomes today are largely subjective tastes (the scientific domain of organoleptics is still in its infancy), but despite the frequency that humans eat and the size of the food industry, outcomes data is scarce. However, a large corpus of recipes and their ratings do exist and can act as a proxy for input data and their outcomes. This form of data-outcome modeling is highly compatible with existing AIs today, and much more likely to accomplish Cooking AI than through domain knowledge and rules. *Thus, Cooking AI is highly achievable in the digital domain of its subtasks, and in light of progress on robotic food preparation systems today, moderately achievable in the physical manipulation of objects.*
Haircutting AI is achievable either by codified domain knowledge and rules, including rules of physics and prohibitions on actions that inflict human harm, or by data-intensive associations to outcomes. Codifying domain knowledge may prove difficult, particularly for translating human intuitions on how to manipulate hair of varying densities, lengths, and moisture content into a quantitative domain, and then combining those measures with physical forces and movements of razors and combs. For the data-outcome approach, outcomes are simply images of haircuts, although data, if represented as a sequence of actions by physical objects in three-dimensional space, is non-existent. If this problem is cast as a learning-based task, it may be possible to build a simulation environment that generates infinite sequences of haircutting actions and a final (simulated) cut-hair end state. Ultimately, this simulation needs to cross into the physical domain, possibly on a model of a human head with robotic arms and devices moving in physical space. *I view Haircutting AI to be moderately unlikely in the digital domain of its subtasks, given little existing work and data on this type of task, and limited cases of modeling or learning AIs in the physical domain. In addition, the physical manipulation of objects is both essential and more difficult than in Cooking AI.*
Unlike the directed actions-to-outcome nature of cooking and cutting hair, many tasks require the ability to diagnose a system of parts, understand the functions of individual components, and combine those components to form an integrated unit. Composability, which may represent a new category of AI, would greatly increase the likelihood of Auto Repair AI. In this context, composability can be thought of as an extension of domain knowledge. Auto Repair AI may be achievable through a data-outcomes approach, although like the service-oriented food and personal care sectors, automotive repair is a fragmented industry with low outcome availability. If a learning AI were applied to this problem, a simulation environment may be created akin to Haircutting AI; however, a physical environment to diagnose component performance, assemble parts, and test outcomes would require much more complex systems. *I view Auto Repair AI to be moderately unlikely in the digital domain of its subtasks, given a new category of AI may be required and limited cases of learning AIs in the physical domain. The physical manipulation of objects is both essential and at least as challenging as in Haircutting AI.*
Construction contains many analogous tasks to automotive repair. Also analogous to automotive repair is the fragmented nature of the construction industry and low outcome availability. The rules and domain knowledge embedded in designing, planning, preparing, and assembly are vast, with many variables and dimensions of possible outcomes. For instance, cars have finite parts with limited substitutability, while building configurations are primarily limited by physics and feasibility against the space onto which the building is built, which itself contains variables of terrain, soil, and climate, among others. Given the many subtasks and expertise required of construction, Construction AI would likely be solved with multiple nAIs. *Among the major tasks, design is reasonably feasible through a data-driven approach, as there exist many blueprints and consensus preferences on building configurations. I view planning to be somewhat unlikely, given the extensive rules and domain knowledge required in building, permitting, approvals, codes, and regulations manifest by many roles in this industry. Analogous to Auto Repair AI, a new category of AI may be required for building tasks in construction, and physical manipulation of objects is extremely challenging, as has been noted even for mundane related tasks such as*[*laying bricks*](https://constructionphysics.substack.com/p/where-are-the-robotic-bricklayers)*.*
Healthcare AI requires either extensive domain knowledge, including known conditions and their treatments, coupled to rules, such as actions harmful to humans, or data-intensive associations to outcomes. There has been considerably more effort to codify domain knowledge in healthcare than in services, automotive repair, and construction, in both clinical research and private industry. While successes exist, including clinical decision support tools, the effort required is enormous relative to the percentage of diagnostics and decisions in human care that these tools impact. Composability, considering anatomical and physiological components of the body and collective human function, would benefit diagnostic tasks in Healthcare AI, but diagnostics is also possible through data-driven modeling. Much more effort, spanning decades, has also been applied to this approach than in other industries. The results of these efforts are mixed. Diagnostic AIs today perform strongly at diagnosing conditions mappable from sensory information, such as images. Furthermore, the strong incentives to collect data in health systems of developed countries yield copious data on human behaviors, diagnoses, and clinical decisions. Nonetheless, with all this information, "AI doctors" are slow to be developed and deployed. While there are many reasons for the slow rise of Healthcare AI, I argue a large technical reason is that what is missing from health data is as important as what exists. In addition, outcome availability is limited, fragmented, and biased towards outcomes of ineffective diagnoses and treatments, i.e., an inverse survivorship bias. Putting aside physical manipulations, and considering the work in Healthcare AI to date, *I view generalized Healthcare AI to be somewhat unlikely. Nonetheless, a suite of nAIs operating purely in the digital domain of tasks may still have a large impact on the industry. Most healthcare events are for routine symptoms with predictable solutions and outcomes, and more than 20% of healthcare costs in the United States are related to diabetes care (or ~1% of GDP). In addition, healthcare involves many administrative functions, and increasing automation of these tasks is highly feasible. Thus, I view Healthcare AI that meets the requirements of AGI for this Work to be moderately likely.*
Contracts are essential in most industries and transactions. Although contracts operate within known rules, including those from legal systems, an industry-agnostic Contracts AI would be very difficult to approach through domain knowledge and rules. Rather, large language models today may already be capable of generating routine contract templates. Outcomes are less critical here, and as with Healthcare AI qualified by only a subset of diagnosis and treatment capabilities, Contracts AI may not need to generate all types of contracts or write complete documents to impact the industry broadly. Data is not as readily available as for other bodies of text such as cooking recipes, given the less public nature of the legal profession. However, the amount of data language models require to produce text is continually decreasing; that is, their efficiency is improving. *Contracts AI is thus highly likely.*
Company formation and operations, which may be for purely digital services or products, require acting with agency within an environment. An agent-environment example could be simply receiving feedback on a product and adjusting the product features or price. Other tasks frequently involved in operating companies include negotiation, bargaining, and strategic planning. Many of these tasks are already possible with language models that can converse with humans. These AIs are predominantly confined to 1-on-1 interactions today, which is not the case in many business interactions operating under multiple agent-principal environments. *Even so, expanding existing AI tools to engage in multi-actor interactions is feasible, and thus negotiating and bargaining subtasks for Company AI are likely.* Other tasks requiring agency or adjustment of actions against an external environment are analogous to how learning AIs play games today. These AIs "unroll" a series of actions and adjust their behaviors based on the expected outcome, confirmed by the actual outcome of the game. Transferring this approach outside a game would require mapping actions of games to real-world actions and quantizing outcomes. *I view a learning AI approach to acting with agency in business operations to be very unlikely, given most observable outcomes cannot be readily attributed to specific actions, and most actions and outcomes are not observable.* "Failures" in a game are low stakes, whereas "failure" in this case is death of the company. Death of the individual can benefit the collective, if outcomes and causal links of actions (or inactions) are defined and disseminated, none of which happens today, nor is viewed as remotely likely to occur soon. *Company AI may thus require a new category of AI which can operate with highly incomplete information, or be accomplished by a large-scale reengineering and mapping of actions and outcomes from games onto the human domain, both of which I view to be very unlikely.*
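A minimal sketch of the "unrolling" approach described above (assuming a simulator and a value function exist, which is precisely what real-world business environments lack):

```python
def choose_action(state, actions, simulate, value, n_rollouts=32):
    """Pick the action whose simulated rollouts score best on average."""
    def expected(action):
        return sum(value(simulate(state, action))
                   for _ in range(n_rollouts)) / n_rollouts
    return max(actions, key=expected)

# Toy usage: a "game" where the outcome is simply state + action.
best = choose_action(state=10, actions=[-1, 0, 1],
                     simulate=lambda s, a: s + a, value=lambda o: o)
print(best)  # -> 1
```

The objection in the paragraph above is exactly that no real-world counterpart of `simulate` or `value` is available when most actions and outcomes are unobservable.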
**III. Economies, state actors, and labor**
Technological development does not occur in a vacuum of fields, factories, or server rooms. Arguments on AGI timelines in this Section are grounded by selected observations in labor markets, economic trends, and recent private sector activities. These observations are then projected onto conceptual frameworks in labor theory and macroeconomics to propose a series of headwinds and tailwinds for the advancement of AGI and its impact on GWP and Labor as defined in Section I.
Observations are as follows:
* In 2017, $2.7 trillion out of $5.1 trillion of manufacturing labor globally was estimated to be already automatable [[ref](https://www.mckinsey.com/capabilities/operations/our-insights/human-plus-machine-a-new-era-of-automation-in-manufacturing)]
* Both Google and Meta independently developed AI to predict protein structures and play strategy games
* The annual number of electric vehicles (EVs) sold globally will surpass internal combustion engines (ICEs) in 2040, with absolute numbers of EVs and ICEs at roughly 600 million and 1.2 billion respectively [[ref](https://about.bnef.com/blog/electric-vehicles-accelerate-54-new-car-sales-2040/)]
* The number of low-tech jobs created is more than double the number of high-tech jobs in geographic areas of new growth [[ref](https://www.aeaweb.org/articles?id=10.1257/aer.100.2.373), [ref](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2671765)]
**III.A Private sector activity, incentives, and industrial turnover**
The massive value in private enterprise, and likely also in public functions, that could be automated today but is not implies that the development and deployment of AI systems are not governed solely by technological factors. Among many possible reasons, I argue investment, narrow competition, momentum of existing business practices, and incentives are important ones.
Most successful deployed nAIs were developed by technology companies with large research divisions, or by smaller foundations and non-profits with support from large technology companies. The applications of highly developed language and image recognition models are most evident in product improvement, such as content recommendation, search engines, and driver-independent vehicle control. Other projects, such as playing the game of Go and predicting protein structures, are R&D projects with large but uncertain payoffs, and required particular decisions made with excess capital in favorable macroeconomic environments. Examples of costs and timelines of these projects shed light on the requirements to advance nAIs today. AlphaGo, an AI that plays the game of Go, had an [estimated](https://www.yuzeh.com/data/agz-cost.html) computing cost of $35 million and a development time of more than two years. AlphaFold, an AI to predict protein structures, has had multiple version releases since 2018 (as an aside, this timeline is short compared to therapeutic discovery and development as a whole, and protein structure is but one of many non-obligatory steps to discovering and developing drugs). DeepMind, the company that developed these AIs, [reported](https://find-and-update.company-information.service.gov.uk/company/07386350/filing-history) expenses of $1.7 billion in 2021. Based on code release histories, Meta's protein structure project took at least three years, and their recent corollary to AlphaGo, CICERO, is built on the shoulders of large language models and learning AIs developed over the past few years. In the coming years, a real possibility of less favorable macroeconomic conditions may squeeze budgets and researcher efforts in these projects. Even less likely than sustained effort on these R&D projects would then be the development of new categories of AIs that may be required to accomplish tasks described in Section II. Few companies today possess both large R&D divisions and a willingness to invest their excess returns in fundamental AI development. Among these companies, a de-emphasis on ambitious AI R&D may already be underway, as noted by Meta's intention to focus their AI efforts on improving product recommendations and increasing advertising conversion rates. Even so, as computing costs decrease and access to nAIs increases, the investment and time to onboard and advance AI becomes increasingly favorable to smaller companies or even individuals. This tailwind to AGI timelines offsets the historical dependency on large private technology companies to develop innovative AI.
The incentives to invest and innovate in industry follow cycles of opportunity, growth, maturity, and decay. Displacement often follows decay, but the decay of technologies and their supporting service tasks occurs slowly. Just as hardware stores keep esoteric spare parts for decades-old appliances, so too may automobile mechanics long maintain inventory and repair services for the parts and ICE vehicles subject to displacement by EVs. Servicing 1.2 billion ICEs could be automated, but with a closing window of opportunity, is there sufficient incentive to automate these tasks? With simpler designs than ICEs and much greater investment, 2043 is a reasonable timeline for highly automated maintenance and repair of EVs. However, the incentive to automate other analogous tasks and industries lags behind EVs, and the vector of change in other industries is less obvious than for the shift from ICEs to EVs. Beyond its development, the deployment of AI will be tethered to industry-specific change, and the natural lifecycles of industry may create a strong headwind to AGI timelines.
Two major phenomena in modern globalized economies are the relocation of work to areas of cheaper labor and the development of comparative advantages by regions or nations to produce certain goods or services. In recent decades, these phenomena often manifest in the establishment of new supply chains, the training of labor, and the scale-up of a nascent industry in a developing nation. These steps take at least years, and sometimes entire generations, as they operate under the constraints of both individuals, who have at most a few careers in their lifetime, and industrial policy, typically only as dynamic as the rate of political or regime change. Singapore’s transformation began in the 1960s, grounding its economy in manufacturing, then achieving its high standing in finance and technology by the 1990s. Other nations, including Vietnam, are in earlier stages of this progression now, expanding beyond food and textiles production and into electronics. Many nations are even earlier in this development cycle, including African countries whose economies remain majority agricultural. The relocation of tasks to cheaper labor in developing nations is far from exhausted. Without a significant perturbation to the typical course of task relocation and ensuing industrial development in developing nations, these economic phenomena may pose a large headwind to AGI deployment for decades. Even many tasks today which cannot be outsourced, including in developed nations with expensive labor, still are not automated. For example, the United States has experienced a long lag between building advanced robotic and intelligent systems and implementing fast food preparation nAIs. One may argue globalization and outsourcing have peaked, which may force domestic industries to accelerate adoption or development of innovative technologies. These behaviors would break with the otherwise large momentum of human-oriented work spreading across a globalized economy.
**III.B Labor’s force, state intervention, and utility**
Loss of purpose, of the means to support oneself or one’s family, or of the economic power to buy and sell goods and services often has negative impacts, from individuals up to nations. These concerns garner increasing interest from governments in supporting labor markets, and laborers themselves perceive a growing threat from AGI to job security. I argue that the response by labor, protections of labor, and behavioral changes from altered pricing of goods and services and from shifting wage expectations will retard AGI development and deployment.
There is an idea that the advent of new technologies will leave nothing left for humans to do. This idea has repeatedly proven false historically, exemplified by the flow of labor from agricultural to industrial to service sectors. That this notion has been false to date does not mean it will be false with the advent of AGI, nor that AGI cannot occur even with new tasks for humans. The time to develop new AIs for these new tasks will likely shorten, and future tasks will also likely be co-developed by AIs or with AI embedded into the design. Recently in wealthier societies, however, the proliferation of businesses such as barber shops and tattoo parlors also brings services to those services, for example dedicated furniture suppliers to each of those businesses. So long as humans can create niche service jobs out of existing ones, the development of and competition by AI on these tasks will lag, in part due to lack of incentives and the fragmented nature of “cottage” industries. Another headwind to AGI derives from the observed outward expansion of pockets of skilled professions, such as lawyers specializing in cybersecurity, blockchain, autonomous vehicle liabilities, and AI property rights. These efforts may ultimately be lost to AI, but as with AI timelines tethered to the industry in which it operates, one may also predict natural cycles of education, training, and degree specialties to factor into AI timelines. A tailwind to AI upending intellectual professions could be new skilled labor with both domain knowledge of the industry and the requisite knowledge to develop and deploy new AI, which is only just underway in some high-skill, non-service sectors.
In addition to the expansion of services, another potential response by labor is that, if automation were to ripple across segments of the labor market, labor would spill into other sectors and increase labor competition. This outcome is plausible with increased automation of manufacturing jobs, possibly evidenced by increased structural unemployment despite robust growth of service jobs. Most new work humans create will likely then be more service or intellectual jobs, though the educational demands of intellectual work will be out of reach for many. As a result, there may be a race to the bottom in other jobs, producing downward wage pressures and sustained competition with AI, slowing its widespread deployment. Counterintuitively and conversely, downward wage pressure may increase investment in automation and AI, which is otherwise difficult when labor is tight and wages are high. Reduced net profits and falling investment would only follow an exhaustion of excess labor, reaching something like a Lewis Turning Point (in this context it does not necessarily mean a full absorption of agricultural labor by manufacturing, but the concept applied agnostically across industries). The possible tailwind here would be a positive reinforcement loop where AI creates “excess labor” and thus frees investment and capital to accelerate AI development.
Economic growth alters expectations of work, wages, and standards of living. These new expectations are highly sticky, and turbulence to them, at least since the Industrial Revolution, correlates with unrest and political change. Increasingly since the Great Depression and the New Deal, the threat of economic downturns, pandemics, and job offshoring to social stability prompts governments and central banks in developed nations to inject labor safeguards and interventionist policies. Most nations where AI would realistically be developed and deployed in the near future, including the Americas, Europe, and Asia, largely subscribe to some tenets of Austrian economic thinking, which espouses the value of stable money and freedom to transact as basic rights of individuals. Even the threat of AGI to labor may prompt action, possibly grounded in a novel moral equity problem: rather than something being too expensive for the poor (e.g. healthcare in the United States), there will be an erosion of individuals’ livelihoods and economic freedoms. There are a few actions governments may take in such a scenario. They may expand their role to support the un- or under-employed with opportunities for re-skilling, increased welfare, or expanded public works. These strategies may impact AGI timelines, as public spending may crowd out private sector investments in new technologies. Also likely and impactful to AGI timelines are taxation of AI-enabled profits, restrictions on sales of goods or services powered by AI, or a push for AI as a public good. Designation of public goods and taxation often reduce incentives to innovate, so while this scenario would not preclude AGI development, it may prolong its timeline. Governments have intentionally blocked development of new technologies throughout history, but many cannot or will not today, especially with ever more accessible knowledge and usage of AI. Governments may instead attempt to coerce developers and owners of AI to open their technologies to wider use or competition, or take legal actions such as anti-trust if owners of AI were to develop monopolies in their industries. However, without clear information on societal impact and property rights, these actions are unlikely to succeed. Given the potential reach of AI’s impact on labor and economies, a multi-state, global effort may be required, which is also unlikely to happen quickly. While failure would not accelerate AGI, it would dampen government-induced headwinds on AGI timelines. There could be a long delay to action while nAIs continue to be developed, altering workers’ and consumers’ expectations of the future.
Three observations and predictions relating to consumption behaviors collectively pose a headwind to AGI development. First, the permanent income hypothesis states that consumption patterns emerge from future expectations of income and economic changes, and changes in human capital, property, and assets influence these expectations. One could envision altered consumption behaviors due to job displacement or merely the threat of displacement. A downward shift in consumption expectations would have a large negative impact on the economy and delay AGI. Second, price and wage stickiness could produce an inertia of consumption: those not yet impacted by AGI may continue to support higher prices. The case for increased marginal utility may strengthen as humans increasingly consume goods and services tied to branding and personal values. In this scenario, humans may sustain demand for goods or services produced by humans, even if AI-produced alternatives are cheaper. While this behavior would not sustain current levels of employment, it may delay a qualifying event of AGI’s impact on labor. Third, without significant changes to current demands for AI, development may be continually pulled towards strengthening existing nAIs. This effect is evident today with a focus on AI as information aggregators, composers of text and images, and software developers. These narrow demands may hinder the necessary development of new AIs outlined in Section II. Of these three points, adjusted consumption behaviors and the opportunity costs of developing new nAIs pose the largest headwinds to AGI timelines. An example of the limitations of the marginal utility argument is how language models are nearly capable of writing novels today. Only small populations of consumers, perhaps the intellectual, urban, and wealthy, would weigh a purchasing decision against AI-created books. It is unlikely that an antithetical relationship to AI-produced goods and services would compete against the demands of the masses.
**IV. Examples and odds updates for Pr(AGI2043)**
Scenarios are formed from arguments throughout the text and given subjective odds of occurrence and a relative weight of importance for Pr(AGI2043). Scenarios may positively or negatively impact Pr(AGI2043). The final odds are computed from these per-scenario odds and weights as follows:
A range of overall estimates was formed by randomly sampling odds and a weight for each scenario over a uniform distribution of 0.5-2x the baseline values, bound by 0%-100% and 0-1 respectively. For instance, the 1st scenario’s odds below would be uniformly sampled between 37.5-100%, with weights between 0.3-1.0. 100,000 samples were drawn to create a distribution of Pr(AGI2043).
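A minimal sketch of this sampling procedure in Python, assuming NumPy. The per-scenario arrays are abbreviated to the first four rows of the table below, and `combine` is an illustrative placeholder (a weight-normalized signed average); the exact combination formula is the one referenced above, not this placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline odds (%), weights, and signs; abbreviated to the first four
# scenarios of the table below (+1 raises Pr(AGI2043), -1 lowers it).
odds = np.array([75.0, 15.0, 95.0, 58.3])
weights = np.array([0.6, 0.1, 0.05, 0.4])
signs = np.array([+1, +1, +1, +1])

n_samples = 100_000
# Uniformly sample 0.5x-2x each baseline, then clip to the stated bounds.
o = np.clip(rng.uniform(0.5 * odds, 2.0 * odds, (n_samples, odds.size)), 0.0, 100.0)
w = np.clip(rng.uniform(0.5 * weights, 2.0 * weights, (n_samples, weights.size)), 0.0, 1.0)

def combine(o, w, signs):
    """Placeholder combination: weight-normalized signed average of odds."""
    return (signs * o * w).sum(axis=1) / w.sum(axis=1)

pr = combine(o, w, signs)
print(np.percentile(pr, [5, 50, 95]))  # low / median / high Pr(AGI2043)
```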
| **Scenario** | **Sign** | **Odds (%)** | **Weight** |
| --- | --- | --- | --- |
| Development and practical use of new paradigms or forms of AI | + | 75 | 0.6 |
| Consensus performance metrics developed for AI, either broadly applicable to AGI or for ten developed individual nAIs | + | 15 | 0.1 |
| Digital domain of Cooking AI | + | 95 | 0.05 |
| Physical manipulation of objects for Cooking AI | + | 58.3 | 0.4 |
| Digital domain of Haircutting AI | + | 20 | 0.3 |
| Physical manipulation of objects for Haircutting AI | + | 10 | 0.1 |
| Digital domain of Auto Repair AI | + | 32.5 | 0.35 |
| Physical manipulation of objects for Auto Repair AI | + | 15 | 0.2 |
| Design for Construction AI | + | 85 | 0.2 |
| Planning for Construction AI | + | 36 | 0.3 |
| Physical manipulation of objects for Construction AI | + | 10 | 0.25 |
| Digital domain of Healthcare AI | + | 67 | 0.3 |
| Digital domain of Contracts AI | + | 95 | 0.1 |
| Forming and operating businesses | + | 10.5 | 0.2 |
| Periods of large unfavorable macroeconomic conditions or continued secular stagnation over the next 20 years | - | 36 | 0.5 |
| Diversification of fundamental AI R&D projects | + | 80 | 0.25 |
| Investment and private sector growth in automating fading technologies and services, including internal combustion engine vehicle maintenance and repair | + | 1 | 0.35 |
| Continuation of outsourcing to cheaper labor and economic growth trajectories of yet agricultural or low-skill manufacturing regions | - | 88 | 0.7 |
| More nAIs are required for AGI | + | 16.8 | 0.6 |
| High skill white-collar professions continue to expand pockets of tasks untouched by AI or created in response to AI | - | 70 | 0.15 |
| Structural unemployment due to AI creates excess returns to increase investment and capital towards more AI | + | 30 | 0.25 |
| Government interventions of expanded state, welfare, or employment mandates crowds out private sector or weakens state and broad economic conditions with excess debt | - | 18 | 0.25 |
| Government taxation on AI-enabled profits or implementing restrictions on sales of goods or services powered by AI | - | 60 | 0.2 |
| AI as a public good and reducing innovation incentives | - | 5 | 0.05 |
| Legal action on AI ownership or AI monopolies | - | 40 | 0.1 |
| Downward consumption expectations drag economy at large | - | 20 | 0.25 |
| Sustained consumption of human-derived goods which offsets demand for AI products | - | 40 | 0.1 |
| Opportunity cost in strengthening existing nAIs off current demand over developing new nAIs | - | 65 | 0.3 |
***Low, median, and high Pr(AGI2043) within this Work are 5.14%, 10.21%, and 15.23%.***
The Base Rate of 24.8% was combined with these low, median, and high estimates to produce the final Pr(AGI2043). Weights of 0.25 and 0.75 were chosen for the Base Rate and the probabilities from this Work, respectively, as the Base Rate was derived from prior works focused on technical development, while this Work considers some technical but mostly broader non-technical considerations on AGI timelines.
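If the combination is taken as a simple weighted average (an assumption on my part; the original presents the final distribution in the figure below), the median estimate works out to:

$$\Pr(\text{AGI2043})_{\text{median}} \approx 0.25 \times 24.8\% + 0.75 \times 10.21\% \approx 13.9\%$$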
**Final odds:**

Code for figures and analyses in this Work at <https://github.com/srhoades10/aiworldviews>
**Appendix**
**Scenario:** Development and practical use of new paradigms or forms of AI
**Reference section:**II.B
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:** 75%
**Weight:**0.6
**Description:** Reinforcement learning (RL) is one qualifying instance of a new AI developed within the relevant times for this work. RL greatly matured in the past 20 years, and its theoretical basis originated approximately 25 years ago. Thus, as the number of researchers and interest in this field grows, to introduce a new AI of similar significance within the next 10 years and deploy it by 2043 is given a high odd of success. Composability AI could qualify as a new paradigm of AI. This Scenario is given a moderate weight, as existing nAIs or new nAIs rooted in modeling and learning AI frameworks may be sufficient for AGI. However, any new AIs which need significantly less data for training or can learn fundamental rules of systems, e.g. laws of physics, would be highly generalizable and greatly increase odds of AGI.
**Scenario:** Consensus performance metrics developed for AI, either broadly applicable to AGI or for ten developed individual nAIs
**Reference section:**II.B
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**15%
**Weight:**0.1
**Description:** Classification performance on a standardized dataset of images is an example of one consensus metric for one nAI. Performance measures can be found as early as 2010, which improved rapidly after the implementation of deep neural networks in 2012 and later surpassed human performance in 2015. The number of parameters in these models roughly correlate with performance, but the introduction of deep learning and more complex neural network architectures achieved greater performance than simply increasing the parameters in simpler architectures. Given this history, I view performance metrics on a nAI-by-nAI basis are more likely than a generalizable, singular measure for AGI. In this Scenario, I arbitrarily choose ten nAIs as an approximation to achieve AGI, given there may debatably be five today, and these are not likely extensible to accomplishing all tasks required for AGI. If the assembly and release of a standard datasets for other nAIs takes two years, performance measures defined and agreed upon in less than one year, and success achieved in half the time of image classification (2.5 years), that leaves roughly five years per nAI. With an assumption that all ten nAIs will be developed by 2033, for all ten nAIs to achieve consensus metrics and high performance by 2043, I expect the odds to accrue requisite benchmarkable datasets by 2037 for purely digital nAIs at 95%, and for nAIs operating in the physical domain, 60%. If seven nAIs are digital, and three physical, then the total odds are 95%7 \* 60%3 = 15%. The weight for this Scenario is low, as performance metrics are not necessary to develop nAI, as seen with large language models today.
**Scenario:** Digital domain of Cooking AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**95%
**Weight:**0.05
**Description:** The digital tasks for cooking, including retrieval of recipes and forming novel recipes, with temperatures, cook times, and ingredients, are all nearly possible already today. I see potential challenges related to ingredients: which can be substituted for one another, which are available at a given moment or in a given locality, and how “effective” are combinations of ingredients, with a learning capability baked into the formation of novel recipes. This Scenario has a very low weight given its feasibility and success would not greatly alter odds of broader AI accomplishments.
**Scenario:** Physical manipulation of objects for Cooking AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**58.3%
**Weight:**0.4
**Description:** Nuanced preparation tasks such as cracking and manipulating eggs in various states of viscosity will be difficult but not essential to cooking AI. Other challenging preparative tasks include peeling carrots and fine chopping of herbs. I posit odds of these tasks by 2043 at 30%. Flipping patties, robotic food picking (off plants or shelves, e.g.), and rough chopping are already feasible for many foods, and I place odds to achieve these more generalized cooking tasks at 80%. Sortingoccurs in food processing factory settings today and could be scaled down to a small kitchen environment for dry goods separation and dispensing into cooking vessels, which I put at 65% odds. If these three odds are evenly weighed in their significance to cooking AI, and not all need to occur, then the overall odds are (80%+65%+30%)/3=58.3%. An incentivized environment to develop AI for these tasks is baked into the estimates, as there is already effort today to automate these tasks. A higher weight given than for other physical tasks such as Haircutting AI as success would be a big advancement in nuanced robotic maneuvers and applicable to many other physical tasks.
**Scenario:** Digital domain of Haircutting AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**20%
**Weight:**0.3
**Description:** There isno “corpus” of information like there is for online cooking recipes, and raw data on haircuts not only does not exist, but readily-generated data today may not be suitable for an AI model. For instance, video capture of haircuts would need to be translated to quantifiable actions performed in 3-dimensional space, including angles and forces of combs, scissors, and razors. Alternatively, a physical simulation environment could train robotic systems to perform these actions, effectively generating as much data as needed. Assuming development is required by 2037 to then deploy haircutting AI over a 5-year span, incentives notwithstanding, I view capturing videos of haircuts *en masse*or generating data from a physical simulation environment to be more feasible than not. However, I see lack of incentives, investment, and effort to be a major drag on developing Haircutting AI, digitally and physically. There, I assign low odds of success, but with a moderate weight, as success in framing a problem of physical tasks into the digital domain for AI modeling would be applicable to many other tasks as well.
**Scenario:** Physical manipulation of objects for Haircutting AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**10%
**Weight:**0.1
**Description:** The physical manipulation of objects, including haircutting equipment and human hair, is at least as difficult as the fine chopping or peeling tasks for Cooking AI, which I give odds of 30%. As with the digital domain of haircutting tasks, lack of incentives and investment further drag down these already low odds. Nonetheless, these tasks are given a low weight as the extremely fine and precise nature of these physical tasks are unnecessary for many other tasks and therefore less broadly applicable.
**Scenario:** Digital domain of Auto Repair AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**32.5%
**Weight:**0.35
**Description:** Digital subtasks include cataloging manuals, parts, and tracking inventory for most major car models, including internal combustion engines. These tasks are highly analogous to information aggregation tasks solved in Cooking AI and given 80% odds of success (lower than digital Cooking AI tasks due to less existing work and reduced incentives). Analogous to medical diagnosis, a diagnostic system which integrates domain knowledge could occur through purely visual information (e.g. a car “X-ray”) or new diagnostics hardware. The odds of diagnostic tasks are 50%. Whether through Composability AI or other new nAIs, even a rudimentary understanding of the relationship of parts would connect the domain knowledge and diagnostic information to physical tasks of automobile repair. Alternatively, this challenge could be met with an abundance of data on cars in broken and fixed states to then propose repair actions, but I view the former route as more feasible and assign odds of 25%. These three primary categories of subtasks are given a roughly equal weight, except for a lower weight to the composability challenge, as understanding “why” is not essential in other AI tasks today. I weigh the combined probabilities of the first two subtasks, viewed as essential, with the third subtask to form final odds of ((80%\*50%) + 25%)/2=32.5%. A similar weight is given to digital Haircutting AI, as these solutions can generalize to many other tasks.
**Scenario:** Physical manipulation of objects for Auto Repair AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**15%
**Weight:**0.2
**Description:** The maneuvers needed for automobile repair are more similar to haircuts than food preparation. When considering vehicles damaged from accidents (assuming significant numbers of humans will still drive vehicles for most of the next 20 years), Auto Repair AI may require comparable levels of finesse and precision as Haircutting AI, while also bearing much heavier loads and applying greater forces. However, the automobile industry may still be significantly impacted by automation of routine maintenance such as tire and brakes replacement. The odds of performing the more challenging tasks are 10%, equivalent to haircutting AI given it is more difficult but with greater incentives. I place odds of routine maintenance tasks at 30%, but in giving a slightly lower weight to these tasks, produce an overall odds of 15%. A lower weight is given here as success, while powerful, is an excessive capability of fine manipulation of heavy objects for other tasks.
**Scenario:** Design for Construction AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**85%
**Weight:**0.2
**Description:** Building design including floorplans, heating/cooling, plumbing, and electrical wirings that are already largely feasible through software, and generative AI models trained on a corpus of blueprints could propose a near-infinite number of basic designs. I assign odds of these tasks at 95%. However, placing designs in real 3-dimensional space is more complicated. The terrain, soil, and other environmental considerations are constraints on which designs could realistically be built on a given plot of land. Designs may also require matching the building materials to its function, under other constraints such as material suitability, availability, and cost. Most of these environmental factors could be estimated with either existing knowledge of the site location or imaging data. I assign the odds of real-world “aware” building design at 80%. Either of these groups of tasks may be sufficient for Construction Design AI, but in assigning a slightly higher weight to the more challenging tasks, I assign final odds of 85%. I give a modest weight here, as success would represent a broadly useful advancement in document and image generation (for example, this AI would estimate the volume and object depth in an image and photoshop new objects into a 3D space projected from the 2D image).
**Scenario:** Planning for Construction AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**36%
**Weight:**0.3
**Description:** Planning requires knowledge of permitting, building codes, zoning, and regulations, which are mostly hyperlocal to the region of construction. While technical elements of these tasks are mostly feasible today, a lack of information standards and changing codes, zoning, and regulations across towns and states over time pose a challenge of information aggregation. Document creation is feasible, and there exist numerous public documents and records related to construction planning that could be used to train generative AI models. However, these documents are highly disaggregated, along with the construction industry generally, and would require a large upfront cost of domain knowledge engineering to train a permit creation AI, factoring local codes and regulations and with awareness of changes over time. Multiplicative odds of document creation subtasks at 90% and codified domain knowledge at 40% produce a final odds of 36%. A modest weight is given due to the size of the construction industry and the utility of AI with engineered domain knowledge and abilities to update with changing rules and regulations.
**Scenario:** Physical manipulation of objects for Construction AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**10%
**Weight:**0.25
**Description:** The combination of power, finesse, and versatility required of physical tasks in Construction AI make it no likelier than Auto Repair AI. I view Construction and Auto Repair industries to contain comparable degrees of incentives and challenges to develop and adopt new technology, in part due to their fragmented nature. Relative to Auto Repair AI, I downweigh odds of physical Construction AI slightly as there have been unsuccessful efforts in robotic construction. Physical Construction AI receives a slightly higher weight than Auto Repair due to the size of the construction industry, and success would be a tremendous advance in physical manipulation of objects generally.
**Scenario:** Digital domain of Healthcare AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**67%
**Weight:**0.3
**Description:** Much of healthcare includes maintaining or altering human physiology, all grounded in some understanding of biology. Significant new discoveries of fundamental biology and their applications as novel therapeutics by AI are unlikely in the near future. However, other tasks to recommend standards of care, exploit existing domain knowledge and patient information to diagnostic or therapeutic ends, or perform administrative roles in healthcare, are all significant and qualifying events for digital Healthcare AI. Based purely on feasibility, I place odds of these sets of tasks are 90%, 40%, and 90%, respectively. Healthcare, especially in developed nations, is fettered by outside interests, and as a source of job growth and destination for very high-skill labor, will experience resistance to implementations of AI. I thus downweigh the odds, particularly for administrative tasks, to 85%, 35%, and 80%. If these three odds are evenly weighed in their significance to digital Healthcare AI, and not all need to occur, then the overall odds are (85%+35%+80%)/3=67%. A modest weight is assigned based on the significance of the industry, although success may not be as broadly applicable because of its idiosyncratic nature.
**Scenario:** Digital domain of Contracts AI
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**95%
**Weight:**0.1
**Description:** Document creation for many standard business and legal contracts is already possible. Considering the recent advent of smart contracts, and evolving disputes on AI usage and ownership, the odds here are less than 100%. Even so, digital Contracts AI is feasible and will be increasingly utilized. Automating the creation of most common contracts would be sufficient for a qualifying nAI event. A low weight is given as the impact may be a productivity boost on a small number of jobs.
**Scenario:**Forming and operating businesses
**Reference section:**II.C
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**10.5%
**Weight:**0.2
**Description:**The subtasks of negotiating, decision-making, and planning based on environmental stimuli are essential for a qualifying nAI event. AI is already capable of forming negotiating and bargaining statements, though they possess little contextual understanding of their conversations with humans. One common business scenario that requires negotiating is capacity planning and leasing agreements on property, office space, or online domains. A fabricated negotiation from a large language model is already possible, however I put odds of a real-world and human-interactive AI negotiator at 70%. Reinforcement learning (RL) is a form of AI designed to receive and respond to environmental inputs, and a likely candidate as a responder to market, competitor, and consumer behaviors. RL models require abundant information on actions and outcomes, or the ability to simulate them. For certain narrow cases of businesses operating purely digitally, both in its functions and its core product or service, RL may be able to gauge product interest, pricing power, and consumer feedback to propose changes in product lines or prices. However, incomplete information about market changes and competitor behaviors is innate to competitive economic systems. Assuming these systems persist, AI will have to rely on large engineering efforts of domain knowledge, including the actions taken by previous companies that led to their demise, or a new AI able to perform inference and propose actions with highly incomplete information. Notably, the set of possible actions that such an AI needs to consider here will be orders of magnitude greater than for chess or Go, and may itself require substantial human engineering. I view the odds of a successful domain knowledge engineering effort or a new AI capable of superhuman inference with sparse information at 10 and 20%, respectively. If these probabilities are weighted evenly, where either can occur, then the final odds for business AI is 70% \* (10%+20%)/2 = 10.5%. A modest weight is given as success would be a monumental achievement towards AGI but not essential, and it is more likely nAIs for certain tasks within a business (sales, product design, marketing) will occur and potentially qualify as AGI.
**Scenario:** Periods of large unfavorable macroeconomic conditions or continued secular stagnation over the next 20 years
**Reference section:**III.A
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**36%
**Weight:**0.5
**Description:** If the time since the 1970s is at all predictive for the next 20 years, there will be at least one mild global recession, and a small probability of a severe recession or depression. I posit a mild recession will have little impact on AGI timelines, though a larger recession will delay the development of innovative new technologies and its deployment into industry, in part through an environment of risk aversion and low capital investment. While the narrative of secular stagnation may be overplayed, as economic and productivity indicators does not capture digital services and software as accurately as physical goods, there is solid evidence for declining rates of growth in the United States, Europe, and parts of Asia. I view it more likely than not these trends will continue, and as other countries develop, the “spreading effect” of secular stagnation in a globalized economy will produce an asymptote in their rate of individual economic and technological. A transformational change such as AGI to occur in this environment is still possible, as evidenced by the advent of the internet in the 1990s, and evidence of some disruption to this stagnation during the coronavirus pandemic. The odds in this scenario are viewed as a weighted additive aggregation of possible events which decrease Pr(AGI2043). The odds of a small recession are 90% but with a small weight on AGI development relative to a large recession at 20%. I grant 50% odds of a continuation and spreading of secular stagnation that would materially impede the innovation and growth necessary to achieve AGI. Combing these scenarios produces (90%\*0.1) + (20%\*0.6) + (50%\*0.3) = 36%. A strong weight is given here as these scenarios impact AI technologies broadly.
**Scenario:** Diversification of fundamental AI R&D projects
**Reference section:**III.A
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**80%
**Weight:**0.25
**Description:** Large technology companies and research groups supported by such companies have produced most applied AI R&D. These projects are focused on classification and generation of text, speech, and visual content, as well as playing games, healthcare diagnostics, and prediction of protein structure. The applications of modeling and generating text, speech, and images will surely continue to grow, however simply strengthening and diversifying these nAIs may not be able to perform many other tasks mentioned throughout this Work. For instance, improved diagnostic capabilities in healthcare and protein structure modeling in biotech on their own will unlikely become a qualifying event for AGI. If the time to identify a new R&D project, locate or acquire requisite data, and apply it to the problem statement takes four years, and only FAANG companies perform these functions, who have at most a handful of such projects, then one may expect a few dozen total AI projects in the next 20 years. If companies compete on roughly half these projects, then there may be collectively 10-20 new and unique nAIs. However, increased accessibility and decreased cost of AI will yield diversified projects from outside large technology companies. Odds are downweighed from certainty due to the category of projects technology companies are willing to address, as they are likely to exploit purely digital AI projects more fully before exploring the physical task examples in this Work. A modest weight is granted as the broadening of AIs is more important for AGI than strengthening existing nAIs.
**Scenario:** Investment and private sector growth in automating fading technologies and services, including internal combustion engine vehicle maintenance and repair
**Reference section:**III.A
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**1%
**Weight:**0.35
**Description:** While limited, one prime example serves as a basis for this scenario analysis. Tesla Motors began operations in 2003, requiring roughly thirteen years for an announcement for a full self-driving capability in 2016 and fifteen years to reach significant market share in the car industry. Twenty years after Tesla’s founding, full self-driving appears imminent but is still not fully deployed. Self-driving cars and electric vehicles are not equivalent, however most work on self-driving is associated with Tesla (noting that other car-agnostic self-driving technologies exist and have been in development since 2016 with limited adoption). Tesla Motors has also not developed an additional nAI apropos for this work, namely automated vehicle repair or maintenance, nor is it a significant initiative at the company. In 2023, there are a dozen other EV manufacturers with a significant market size. Considering the capital allocation by investors and car companies towards EV production, and only Tesla has attempted even mechanized battery exchanges (largely unsuccessfully), one could predict the time from announcement to development of automated self-driving or vehicle maintenance technologies to take upwards of ten years, and deployment into industry and broader impact on services ten more years. Focusing narrowly on vehicle maintenance and repair, I place odds of any car company accomplishing these tasks at 10%, and place ICE repair at comparable odds. More broadly, I optimistically posit even two sets of tasks or large industries with “retroactive” investment and innovation on fading industries or technologies are sufficient to meaningfully contribute to qualifying AGI. If so, then the total odds would be 10%\*10%=1%. A moderate weight is placed on this scenario as service jobs may linger or expand for decades without active investment and technological development.
**Scenario:** Continuation of outsourcing to cheaper labor and economic growth trajectories of yet agricultural or low-skill manufacturing regions
**Reference section:**III.A
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**88%
**Weight:**0.7
**Description:** A break from the phenomena of relocation of work to areas of cheaper labor and industrial planning and development by developing nations may occur in a few ways. One could be the exhaustion of excess labor. Bangladesh, the 8th most populous country, serves as an example of how unlikely exhaustion will occur within 20 years. Bangladesh introduced an industrial capacity and economic policy initiative around 1980. Today, clothing represents [87% of Bangladesh exports](https://www.worldstopexports.com/bangladeshs-top-10-exports). Vietnam and India are examples of other populous countries with similar economic trajectories, though moving quicker to electronics, pharmaceutical, and automobile manufacturing. Even so, India has a low GDP per capita, alongside Pakistan, Bangladesh, and Nigeria, which collectively consist of over 2 billion people. The plot below depicts GDP per capita growth of select countries since 1990.
[https://noahpinion.substack.com/p/can-india-industrialize?publication\_id=35345&post\_id=101082915](https://noahpinion.substack.com/p/can-india-industrialize?publication_id=35345&post_id=101082915&isFreemail=true)
Another mechanism would be unprecedented economic hypergrowth across multiple large developing nations. A noticeable upward slope for GPD per capita in China began in 2005, after a period of economic reforms originating in the 1990s. In this light, in the 30 years from those reforms to today, China’s growth could be considered successful, Bangladesh less so, and Singapore the ”best-case” scenario. It is then extremely unlikely growth equivalent to or exceeding Singapore would occur for multiple populous and developing nations in less than twenty years. Another mechanism is decreased globalization and onshoring in developed nations. However, onshoring may only be viable with efficiency and productivity growth from new technologies or cheaper domestic labor. Whether by job automation or reduced wages, such industrial policies will be unpopular with voters and politicians. If these processes do occur, their occurrence within twenty years is unlikely. I give odds of 85%, 99% and 80% that there will be sustained excess labor, developing nations with over two billion citizens will not undergo an unprecedented hypergrowth, and industrial policies leading to job automation or wage suppression will not be palatable and thus not widely implemented in developed nations, respectively. The average odds are 88%. A large weight is given as these trends have been a significant influence in technological development, economic growth, and the global world order since the Industrial Revolution.
**Scenario:** More nAIs are required for AGI
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:**16.8%
**Weight:**0.6
**Description:** I posit five new nAIs are needed for AGI. Based on the histories of nAIs developed to date, each would cost at least billions of US dollars and take at least three years. I place the odds of development and deployment of these new nAIs within 20 years at 80%. I also posit AGI will require integration of multiple nAIs (Construction AI, e.g.), and place 80% odds of technical feasibility to integrate multiple nAIs within two years of their development. While technically feasible, these integrations may require one agency, company, or state to either develop nAIs or have existing nAIs made available or purchasable. The odds for the proper “environment” of nAI development and assembly, over a span of five years from the development of all requisite nAIs, is 25%. These collectively form an 11-year horizon with 80%\*80%\*25%=16% odds. However, this assumes that the new nAIs are the *right*ones. For instance, nAIs today do not operate in the physical world, nor possess abstract senses of real-world agency and strategy. I view the next five nAIs being the right nAIs for AGI to be low. If five nAIs are sufficient for AGI, but twenty total are needed to be developed first, I grant 60% odds that twenty will be built in 15 years. With the same odds and timeline for technical integration, and a boost in the probability of integration to 35% in five years, the 20-year odds are at 60%\*80%\*35%=16.8%. The weight is moderately high, as AGI may be possible with significant extensibility of existing nAIs today, or AGI may arise through other means than integration of multiple nAIs.
**Scenario:** High skill white-collar professions continue to expand pockets of tasks untouched by AI or created in response to AI
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**70%
**Weight:**0.15
**Description:** Considering the formation of services jobs off the innovations of the Industrial Revolution and the rise of the managerial class during the 20th century, coupled with the motives of high-skill labor for gainful employment and differentiation of their products or services, this scenario is more likely than not. However, the remaining slivers of the professions untouched by AI, for example the specialties within law and medicine, will only be able to expand so widely in the future. For medical professionals, the diagnoses, treatments, and procedures serviced by AI will leave fewer services to offer by humans. One area that may continue to expand for the foreseeable future is research and engineering, as new ideas may be readily produced by AI, but the means to research and answer new questions is less likely to be solved by existing nAIs. Thus, while this scenario is more likely than not, the increasing automation to tasks in legal, medicine, design, and other advanced-degree professions will erode their headcount and influence over time. The weight for this scenario is modest as the candidate professions represent a fraction of total labor, and most individuals cannot attain this level of work for various reasons.
**Scenario:** Structural unemployment due to AI crease excess returns to increase investment and capital towards more AI
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Increase
**Odds:** 30%
**Weight:**0.25
**Description:** Structural unemployment due to automation is obvious conceptually, but there is much uncertainty over how much has occurred over the past few decades. Given the long period over which structural unemployment manifests, I give odds of 40% that significant skill mismatches in the labor market impact a majority of the labor force within 20 years. Assuming that structural unemployment is due to AI, and that it translates to higher productivity with lower operating expenditures and thus produces excess returns, I give 75% odds of appreciable investment in the development of new technologies which hasten AGI timelines. I downweigh these odds slightly as extra cash is often used to improve balance sheets or to reward investors and shareholders. This latter action is of increasing frequency and volume over the past twenty years, and this trend is likely to continue in the near future. The collective odds are 40%\*75%=30%. This scenario is given a modest weight given the difficulty to observe and quantify structural unemployment, despite its importance.
**Scenario:** Government interventions of expanded state, welfare, or employment mandates crowds out private sector or weakens state and broad economic conditions with excess debt
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**18%
**Weight:**0.25
**Description:** Concerns of social stability due to growing structural unemployment may prompt government interventions. Employment mandates are already present in central banks of many developed nations, though these mandates often take the form of monetary policies to stimulate the economy, rather than direct assistance such as cash transfers or basic income. These actions may be instead performed through fiscal stimulus, and if recent trends in developed nations continue, the role of fiscal policy will grow stronger relative to monetary policy. Taken too far, these policies risk defaults, currency devaluations, and broad financial instability. In addition, such spending comes at the expense of the private sector, often the primary engine of economic growth. Any of these actions could impact AGI timelines. The odds in this scenario are conditional on large structural unemployment due to AI, or a perceived threat of such, to occur within ten years (see above scenario regarding structural unemployment). I grant 60% odds of expanded stimulus and employment mandates in most G7 or large Asian countries to produce debt/GDP ratios appreciably above those of the WWII or COVID eras. The odds that, conditional on these debt levels, there will be a large global recession, significantly slowed economic growth from a weakened private sector, or a series of major currency devaluations is 30%. The combined odds are thus 60%\*30%=18%. A modest weight is given because while these events would be significant, this scenario is conditional on other unlikely events, and development of new technologies may still occur, or even be accelerated, in seemingly unfavorable environments.
**Scenario:** Government taxation on AI-enabled profits or implementing restrictions on sales of goods or services powered by AI
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**60%
**Weight:**0.2
**Description:** Taxation may occur as a form of redistribution, possibly to fund basic income, re-skilling, or welfare, or opportunistically on companies who become exceedingly profitable from their AI technologies. Recent history suggests that for seemingly pressing matters, Pigouvian taxation, e.g. on cigarettes or carbon, and windfall taxes, e.g. on oil companies, can be proposed and implemented within years. I put the odds of taxation within 20 years, first requiring AI powerful and threatening enough for taxes to gain political popularity across a majority of G7 nations and developed Asian countries, at 60%. Even so, AI is likely to remain nebulous, distributed, and thus more difficult to tax than obvious monopolistic or oligopolistic industries, and Pigouvian taxation requires clear and quantifiable externalities, which is difficult for AI. However, while taxation on humans is difficult politically, taxation on AI is relatively palatable. Thus I maintain a 60% overall odds on this scenario. I grant a smaller weight as AI will become cheaper and widely accessible, and much of AI to date is open source, thereby dampening the impact of taxes on future AGI development.
**Scenario:** AI as a public good and reducing innovation incentives
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**5%
**Weight:**0.05
**Description:** The open-source model of software development creates something close to a public good already, and individuals and companies continue to find ways to develop and monetize new technologies regardless. A push to formally assign and enforce a public good status on AI could take a long time to implement, and even so the impact to AGI timelines will likely be small, much like taxation of AI probably may not hinder new development. Therefore, a very small weight is given for this low-probability scenario.
**Scenario:** Legal action on AI ownership or AI monopolies
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**40%
**Weight:**0.1
**Description:** Recent events between technology companies and governments, including exchanges over rights of developers and consumers on gaming, social media, and advertising platforms, implies technology companies will continue to encounter scrutiny over any outsized influence and market power. While governments may take non-legal, coercive actions to democratize AI or weaken technology developers, I do not grant material odds of these nebulous events as they are unlikely to impact AGI timelines. I do expect legal actions over ownership or anti-trust, possibly over consolidations of nAIs by large technology companies, as likely within 20 years. However, I downweigh these odds as the time to build cases is long and the success rate in recent lawsuits against technology companies low, and have only come to fruition under certain political landscapes. In addition, I expect a strong anti-trust case could only be made over an overwhelmingly obvious threat of monopoly from nAI consolidation, as the emergence of an actual monopoly is often not immediately obvious and less likely within 20 years. With these arguments, and that AGI development will continue irrespective of legal cases, this scenario receives a low weight.
**Scenario:** Downward consumption expectations drag economy at large
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**20%
**Weight:**0.25
**Description:** Lower consumer confidence in the economy or high expectations of inflation impact consumption. While the effect size is difficult to quantify relative to many other possible factors, negative expectations can be a self-fulfilling prophesy for economic downturns. Consumers may shift to purchasing staples and avoiding frivolous expenses, many of which are services which comprise a large part of the economy in developed nations. The advent of AI would represent a new driver of consumption pessimism, whereby consumers may expect to have less purchasing power under the threat of job loss. The threat of AI to jobs is widely discussed today but is not obviously impacting expectations of consumers or employment. Thus, I view it unlikely that downward expectations would be significant enough to impact the economy broadly. I grant a smaller weight as smaller economic downturns, if they were to happen because of this pessimism, would not impact AGI timelines significantly.
**Scenario:** Sustained consumption of human-derived goods which offsets demand for AI products
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**40%
**Weight:**0.1
**Description:** Given sustained demand for the hand-made and artisanal today, I find it likely that as AI-generated goods and services proliferate, there will be a response of demand for “made by humans”. While likely, this demand may only come from wealthier consumers, and apply to a small proportion of all goods and services. I downweigh the odds as the supply of human-made goods and services and number of human laborers could decrease in response to competition from AI, mitigating this demand in the process. A low weight is given as this demand will be washed out by the consumption of the masses, and consumption of human-made goods and services may not materially impact AGI timelines.
**Scenario:** Opportunity cost in strengthening existing nAIs off current demand over developing new nAIs
**Reference section:**III.B
**Impact to Prior Pr(AGI2043)**: Decrease
**Odds:**65%
**Weight:**0.3
**Description:** The balance of exploiting existing nAIs and exploring new ones will be a large determining factor in the overall technological growth towards AGI. I expect the nAIs that power content creation, media, marketing, and sales will continue to be more fully exploited, while AIs for other tasks, including scientific research and many of the physical tasks mentioned in this Work, will have substantially fewer researchers and dollars invested. For example, entertainment media and advertising are heavily validated large markets serviced by large technology and retail companies, particularly in the consumer-heavy economies of developed nations. Companies will be quicker to cut R&D on exploratory projects than sacrifice on the core product in periods of economic uncertainty and limited cash reserves. Existing nAIs also have much room for additional productization. For instance, the “base layer” of ChatGPT can be modified to diversify content offerings towards different ideological or political leanings, or genres of blogs, books, and other text-based content. There is much excess demand yet to be exploited from large language models, and I view it more likely than not that new product lines of incumbents and core products of startup companies will focus on these opportunities in the coming years. Most research on new AIs will likely continue to come out of R&D at larger companies or in academic research environments. While I view it nearly certain that the majority of work in AI will go towards strengthening existing nAIs over developing new ones over the next few years, I downweigh these odds as there are continually new uses of existing AIs, AI will become cheaper and accessible, and new AIs can now be pushed into the market and exploited within years. I grant a modest weight as existing nAIs may be sufficient for AGI and as more researchers and dollars spent on AI development will negate the opportunity cost of strengthening existing nAIs.
---
[[1]](#_ftnref1)For the technical reader, I consider supervised, unsupervised, shallow and deep neural networks as modeling, and reinforcement learning as learning AIs. |
RESEARCH ARTICLE | PSYCHOLOGICAL AND COGNITIVE SCIENCES | COMPUTER SCIENCES | OPEN ACCESS

Leveraging artificial intelligence to improve people’s planning strategies

Frederick Callaway, Yash Raj Jain, Bas van Opheusden, Priyam Das, Gabriela Iwama, Sayan Gul, Paul M. Krueger, Frederic Becker, Thomas L. Griffiths, and Falk Lieder

Edited by Richard Aslin, Haskins Laboratories Inc., New Haven, CT; received October 6, 2021; accepted January 28, 2022
Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

cognitive training | bounded rationality | heuristics | rationality enhancement
Planning skills are generally beneficial to the wellbeing of individuals (1) and the success of organizations (2). But many people frequently fail to plan (3, 4) and consequently make bad decisions (5). Furthermore, even when people do plan, they often use short-sighted planning strategies that prevent them from achieving the best long-term outcomes (4).

Previous research on improving human decision making has found that incentivizing or motivating people to make better decisions is not enough, because people sometimes lack effective decision strategies (6, 7). People can learn more effective strategies through practice (8–11), but experience is a good teacher only to the extent that it provides reliable, valid, and prompt feedback (12–15). This is not the case in many important real-life settings including financial investment, college admissions, and the diagnosis of mental disorders (16).

To provide effective strategies, many interventions have instructed people in the normative principles of logic, probability, and expected utility theory (17)—principles that human decisions and judgments have been found to violate (7, 18–20). This approach has had limited success, however, because trying to calculate the expected utilities of all possible courses of action is actually not a good decision strategy in complex real-life situations where it is often prohibitively difficult and time consuming (21, 22). Subsequent work has therefore sought to identify and teach simple heuristics that exploit common properties of certain types of decision problems to quickly reach a good decision most of the time (23, 24).

Despite some initial success, teaching people clever heuristics suffers from two bottlenecks that we attempt to address in this work. The first one is the identification of simple heuristics that reliably lead to good decisions, a process that can itself be error prone and time consuming. To overcome this limitation, we employ artificial intelligence to automatically discover optimal heuristics (25). Given a formal characterization of a decision maker’s environment and cognitive limitations, this method derives an optimal heuristic for decision making.
The second bottleneck is efficiently teaching clever heuristics to a large number of people. To overcome this bottleneck, we introduce a general method that can be applied to create desktop, web, and mobile applications for practicing decision making with feedback. The basic idea is to have people practice on relevant real-world tasks or simulations of those tasks while giving them feedback on how they solve those tasks. Previous research has shown that, to be effective at furthering the acquisition of expertise, feedback has to be valid, reliable, and prompt (13, 16). We introduce a general method for generating high-quality feedback for helping people learn how to make better decisions. The central idea is to give people feedback on how they decide what to do (metacognitive feedback) rather than on what they decide to do (action feedback) (26). Giving metacognitive feedback is possible when we can infer the decision operations that people perform from overt behavior such as sequentially looking at different pieces of information that inform their decision (27). As people make more and more decisions at their computers and smartphones, there is an increasingly larger range of decisions for which metacognitive feedback can be given automatically. This suggests that the idea of giving people metacognitive feedback could be applied to develop a scalable approach to improving human decision making.

As a proof of concept, we develop an internet-based cognitive tutor that helps people learn and practice optimal heuristics for solving sequential decision problems. Those heuristics are automatically derived by applying our recently developed artificial intelligence method for strategy discovery to an environment in which long-term consequences are more important than immediate rewards (25). Encouragingly, we found that practice with our cognitive tutor was more effective at promoting far-sighted decision-making than conventional approaches to improving human decision-making. Concretely, practicing with our intelligent tutor improved learning relative to practice without feedback and practice with feedback on the chosen actions rather than on the decision process itself. Critically, we also found that the benefits of training with our cognitive tutor transferred to more complex and superficially different tasks, and these benefits were retained over time. Finally, we illustrate the generality of our approach by applying it to an environment with a different structure, again finding a benefit of practicing with the tutor above practice alone. Together, these findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.
A Principled Computational Approach to Improving Human Decision Making

Our approach to cognitive training teaches people optimal decision strategies by giving them metacognitive feedback throughout the decision process leading to a single choice. Since recent findings suggest that people acquire, refine, and learn to select between their cognitive strategies at least partly through reinforcement learning (10, 15, 28, 29), we address the question of what constitutes an optimal metacognitive framework from a reinforcement learning perspective. That is, we develop a general method for computing the metacognitive feedback that results in the fastest possible learning according to the recently developed theory of metacognitive reinforcement learning (10, 15, 28, 29). Our approach is based on four building blocks that are introduced in turn: 1) making people’s planning strategies observable, 2) simulating challenging decision problems, 3) discovering optimal strategies for solving those problems, and 4) giving people feedback on their planning operations and on what the optimal planning strategy might have done differently.
1) Making People’s Planning Strategies Observable. Giving people feedback on their decision strategies is challenging when we cannot observe how they actually made those decisions. Thus, we employ a recently developed process-tracing paradigm that makes the decision-making process observable (30): The Mouselab-MDP paradigm extends the process-tracing methodology of the Mouselab paradigm (27) from risky choice to sequential decision problems that require planning, that is, Markov decision processes (MDPs) (31). The participants’ task is to navigate a spider through a web (Fig. 1A) by selecting a sequence of moves that leads from its initial location at the center of the web to one of its corners. Each location contains a reward, and the participant’s goal is to earn as much reward as possible. Critically, all of the rewards are initially concealed. To uncover a reward, the participant has to click on its location and pay a small fee. In this way, an internal planning process is externalized as a sequence of information-gathering clicks. This allows us to separate what people choose to do (i.e., where they move the spider) from how they decide to do it (i.e., by clicking to reveal a subset of the available information in a specific order). In other words, we treat the clicks as a proxy for the cognitive operations that people perform to reach a decision.
2) A Simple Task for Practicing Far-Sighted Decision Making. The key property of situations that necessitate planning is the misalignment between immediate reward and long-term value. As an illustration of this problem, consider the choice between beginning work on a manuscript versus watching a YouTube video. Staring at a blank page might make one feel anxious in the short run, but one will feel very satisfied when one submits the paper for publication many months later. By contrast, the YouTube video will give one immediate joy but one might come to regret the wasted time later. To make good decisions in situations like this, people have to look beyond the salient immediate rewards, set a goal for the future, plan how to achieve it, and execute the plan. What makes this far-sighted approach worthwhile is that the range of outcomes that can be obtained by concerted effort over an extended period of time is much larger than the range of rewards that can be attained immediately.

Fig. 1. The Mouselab-MDP paradigm. (A) Participants click to reveal the rewards at future states to construct a plan. (B) Metacognitive feedback penalizes suboptimal decision-making operations (clicks) with a delay and provides instruction on what operation(s) should have been taken instead.

To capture this aspect of many real-world situations within the Mouselab-MDP paradigm, we constructed a three-step sequential decision-making task where the range of rewards increases from the first step to the second step and is largest in the third step. In each trial, rewards are independently drawn from discrete uniform distributions; the possible values are {−4, −2, +2, +4} in the first step, {−8, −4, +4, +8} in the second step, and {−48, −24, +24, +48} in the third step. To simulate the computational cost of deliberation, which is substantially diminished in this externalized planning problem, we impose a $1 cost for each click.
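For concreteness, this reward structure is easy to write down in code. The following minimal Python sketch samples one trial of the three-step task; it is ours, not the authors’ implementation, and the branching structure of the web is an illustrative assumption rather than a detail taken from the paper:

```python
import random

# Possible rewards at each of the three steps, as described above:
# the range of rewards grows from the first step to the third.
REWARD_VALUES = {
    1: [-4, -2, +2, +4],
    2: [-8, -4, +4, +8],
    3: [-48, -24, +24, +48],
}
CLICK_COST = 1  # each click (planning operation) costs one virtual dollar

def sample_trial(branching=(3, 1, 2)):
    """Sample one trial: a reward for every node, keyed by (step, index).

    `branching` is an illustrative assumption about the web's layout
    (e.g., 3 first-step nodes, each followed by 1 second-step node,
    each followed by 2 third-step nodes).
    """
    rewards = {}
    n_nodes = 1
    for step, fan_out in enumerate(branching, start=1):
        n_nodes *= fan_out
        for index in range(n_nodes):
            rewards[(step, index)] = random.choice(REWARD_VALUES[step])
    return rewards

if __name__ == "__main__":
    print(sample_trial())
```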
3) Discovering Optimal Cognitive Strategies. Teaching clever heuristics is a promising approach to improving decision making (23, 24). But which heuristics should be taught and how can we discover such heuristics? The theory of resource rationality provides a mathematically precise definition of optimal heuristics (32). In essence, the optimal heuristic for a decision maker to use in a given environment is the one that achieves the best possible tradeoff between the expected utility of the resulting decision and the expected cost of the decision-making process.

To derive the optimal heuristic for the Mouselab-MDP environment described above, we apply the recently developed formalism of metalevel MDPs (33, 34), which models decision making itself as a sequential decision problem. The basic idea is that the decision-making process can be broken down into a series of computations that update the decision maker’s beliefs about which course of action will lead to the best outcome. Each cognitive strategy, or heuristic, corresponds to a rule for selecting computations based on the outcome of previous computations.

Formally, a metalevel MDP, $M_{\text{meta}} = (B, C, T_{\text{meta}}, r_{\text{meta}})$, has four components: the set of possible beliefs the decision maker can have, $B$; the set of computations they can perform, $C$; the transition model that specifies how computations update beliefs, $T_{\text{meta}}$; and the metalevel reward function that specifies the cost of computation and the expected utility of making a final decision in a given belief state, $r_{\text{meta}}$. A cognitive strategy can be formalized as a metalevel policy, $\pi_{\text{meta}} : B \mapsto C$, that specifies which computation should be performed in each belief state.
Having formalized decision making as a Markov decision process, we can use standard MDP-solving techniques (35) to identify optimal decision strategies. One important such tool is the state–action value function (often notated Q). In a metalevel MDP, this function gives the long-term expected value of performing a computation $c$ in belief state $b$. It is defined as

$$Q_{\text{meta}}(b, c) = \mathbb{E}\left[\, r_{\text{meta}}(b, c, b') + \max_{c'} Q_{\text{meta}}(b', c') \,\right],$$

where $b'$ is the updated belief that results from executing computation $c$ given the belief $b$. The optimal decision strategy is the policy that always executes the most valuable decision operation; i.e., $\pi^{\star}_{\text{meta}}(b) = \arg\max_{c} Q_{\text{meta}}(b, c)$. Following previous work, we compute $Q_{\text{meta}}$ for our environment by backward induction (25). This provides us with both the optimal metalevel policy and a way to quantify exactly how bad a specific deviation from the optimal policy is.
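Because the metalevel MDP for this task is finite and acyclic (each click reveals new information, so belief states never repeat), $Q_{\text{meta}}$ can be computed by a simple recursion. Here is a minimal sketch, ours rather than the paper’s code, building on the MetalevelMDP structure above and assuming finitely many hashable beliefs and computations:

```python
from functools import lru_cache

def solve(mdp):
    """Compute Q_meta for a MetalevelMDP (see sketch above) by backward
    induction, assuming the MDP is finite and acyclic so that the
    recursion bottoms out at terminal beliefs.
    """
    @lru_cache(maxsize=None)
    def Q(b, c):
        # Expected immediate metalevel reward plus the value of acting
        # optimally from the updated belief b2.
        return sum(
            p * (mdp.r_meta(b, c, b2) + V(b2))
            for b2, p in mdp.T_meta(b, c)
        )

    @lru_cache(maxsize=None)
    def V(b):
        # Terminal beliefs (no computations left) have value 0 here;
        # the utility of the final decision is folded into r_meta.
        return max((Q(b, c) for c in mdp.computations(b)), default=0.0)

    def policy(b):
        # The optimal metalevel policy: argmax_c Q_meta(b, c).
        # Only meaningful for nonterminal beliefs.
        return max(mdp.computations(b), key=lambda c: Q(b, c))

    return Q, V, policy
```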
This method revealed that the resource-rational heuristic for the environment described above is to first set a goal by evaluating potential final destinations. As soon as one uncovers the highest possible reward (+48), the optimal heuristic immediately selects the path leading to it, not even uncovering the values that will be received along the way. If all potential final destinations have been inspected and one was revealed to be better than all the others, then the optimal heuristic immediately decides to go there; otherwise, it works backward until one path is revealed to be better than the alternatives.
4) An Optimal Feedback Method for Teaching Planning Strategies. If people acquire planning skills through metacognitive reinforcement learning (10, 15, 29), then it should be possible to apply methods that have been developed to accelerate model-free reinforcement learning in robots—such as reward shaping (36)—to accelerate metacognitive learning in people. Here, we apply reward shaping to generate optimal feedback signals for accelerating metacognitive reinforcement learning as follows (a code sketch of steps 4 and 5 follows the list):

1) Model the cognitive function to be improved (i.e., planning) and the available cognitive operations (i.e., determining the outcome of taking a certain action in a certain state) and their costs as a metalevel MDP, $M_{\text{meta}}$.
2) Compute the values of the planning operations people might perform in different states [i.e., $Q_{\text{meta}}(b, c)$] by solving the metalevel MDP, $M_{\text{meta}}$.
3) Let people practice planning and infer their decision process from their clicks.
4) Evaluate each inferred decision operation, $c$, by
$$\text{loss}(b, c) = \max_{c'} Q_{\text{meta}}(b, c') - Q_{\text{meta}}(b, c). \tag{1}$$
5) Translate the loss into reinforcement and inform the tutee what operation(s) should have been selected instead, i.e., $\arg\max_{c} Q_{\text{meta}}(b, c)$.
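Steps 4 and 5 reduce to comparing the Q value of the observed operation against the best available one. A minimal sketch, with illustrative names and assuming a Q function like the one computed above:

```python
def metacognitive_feedback(Q, b, c, computations):
    """Steps 4 and 5 above: score one observed planning operation.

    Q(b, c) is the metalevel Q function (e.g., from backward induction);
    `computations` lists the operations available in belief state b.
    Returns the loss (Eq. 1) and the set of operations the optimal
    heuristic would have chosen. Names are ours, not the authors'.
    """
    best_value = max(Q(b, c2) for c2 in computations)
    loss = best_value - Q(b, c)
    optimal_ops = [c2 for c2 in computations if Q(b, c2) == best_value]
    return loss, optimal_ops
```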
We completed steps 1 and 2 in previous work (25). Step 3 is accomplished by using the Mouselab-MDP paradigm to measure people’s planning operations. Finally, the feedback signal computed in step 4 is translated into a delay penalty and the states that would have been optimal to click are highlighted (Fig. 1B). Feedback is given after each click and also when the participant first moves the spider. Moving the spider when one should have clicked or vice versa incurs a penalty as well. See SI Appendix, Fig. S8 and Materials and Methods for details.
Results

By combining the four building blocks described above, we created an intelligent cognitive tutor that employs metacognitive feedback to teach far-sighted planning. It does so by teaching people the optimal planning strategy for an environment in which distal outcomes are more important than proximal ones. We evaluated the effectiveness of this cognitive tutor in a series of six experiments, each comprising a training block in which the experimental group worked with the cognitive tutor and a test block in which all participants solved the same planning problems without feedback. We found that practice with our intelligent tutor was more effective than conventional approaches (experiment 1) and led to transferable improvements in decision making (experiment 2) that are retained over time (experiment 3).

To illustrate the generality of our approach, we created a second intelligent tutor—this time for an unstructured environment in which proximal and distal outcomes are equally important. As before, we found that practice with the new tutor was more effective than practice alone (experiment 4). In experiment 5, we found that the benefits of metacognitive feedback transfer to problems that are superficially dissimilar from the training task. Finally, experiment 6 investigated the relative contributions of the affective and informative components of the tutor’s metacognitive feedback. We found that both components are likely to contribute but that the affective component is especially critical.
Experiment 1: Metacognitive Feedback Is Most Effective. Experiment 1 evaluated the efficacy of our intelligent tutor’s metacognitive feedback against the conventional approaches of giving people feedback on their actions (e.g., “You should have gone left”) or having them practice without feedback. The training block and the test block both employed the three-step planning task shown in Fig. 1A.

To quantify participants’ task performance, we define “relative test score” as the average score each participant achieved in the test block, normalized by chance and optimal performance such that 0 points is chance and 100 points is optimal (Eq. 2). People’s performance was strongly bimodal (SI Appendix, Fig. S1A), violating the distributional assumptions of parametric hypothesis tests; we therefore employed the nonparametric Kruskal–Wallis ANOVA and permutation tests (37) to analyze our participants’ scores. All analyses are conducted at the participant level.

Fig. 2A shows the average relative test score achieved by participants in each group. A Kruskal–Wallis ANOVA confirmed that the type of feedback provided in the training trials had a significant effect on participant performance (H = 9.71, P = 0.008). Participants who received no feedback achieved an average relative test score of 77.0 points (95% CI [66.2, 86.8]). This means they achieved 77% of the possible increase in score that one could gain by planning rather than choosing a path randomly. Critically, participants receiving metacognitive feedback performed significantly better, coming close to optimal performance (94.4 points; 95% CI [89.1, 98.5], permutation test d = 0.59, Z = 2.84, P = 0.005). By contrast, giving participants conventional feedback on their actions appeared to be ineffective. That is, participants receiving action feedback did not perform better than participants in the no-feedback condition (73.5 points; 95% CI [61.8, 83.9], d = −0.09, Z = −0.45, P = 0.653) and performed significantly worse than participants who received metacognitive feedback (d = 0.68, Z = 3.22, P = 0.001).

To identify the mechanism by which metacognitive feedback improved performance, we conducted a causal mediation analysis (38). We found that the effect of metacognitive feedback on performance was fully mediated by an increase in people’s propensity to start by inspecting a potential final outcome (average causal mediation effects 21.9 points, 95% CI [12.0, 33.3], P < 0.001; average direct effects −4.2 points, 95% CI [−13.3, 4.9], P = 0.346). Metacognitive feedback—but not action feedback—significantly increased participants’ propensity to plan backward [t(148) = 4.27, P < 0.001 for metacognitive feedback; t(148) = −0.21, P = 0.831 for action feedback]. Specifically, participants who received metacognitive feedback planned backward on 97.7% of test trials compared to 66.1% in the no-feedback group and 64.5% in the action-feedback group. Backward planning, in turn, increased participants’ average performance by 67.0 points [t(149) = 15.02, P < 0.001]. As shown in Fig. 2B, people gradually learned to plan backward in all three conditions but the metacognitive feedback significantly boosted this learning process. Overall, our findings suggest that metacognitive feedback was effective because it taught participants a simple, clever heuristic that allowed them to make better decisions without having to think harder.

Fig. 2. Metacognitive feedback accelerates learning and improves performance. (A) Average score in the test block for each condition. (B) Proportion of participants who started by inspecting a potential final outcome, split by condition. Here and in all future plots, the error bars and shaded areas convey 95% confidence intervals produced by 1,000 bootstrap samples. Scores are bootstrapped over participant means. Asterisks indicate significance of the permutation test reported in the main text as follows: **P < 0.01.
Experiment 2: Transfer. Experiment 2 examined whether the benefits of the strategy training evaluated in experiment 1 transfer to a more complex task. The training block was the same as in experiment 1 but the test block used the more complex flight-planning task illustrated in Fig. 3A. In this transfer task, participants have to plan five steps ahead rather than just three, the rewards are drawn from a Gaussian distribution rather than from a discrete uniform distribution, collecting information is three times as costly, and the cover story is different. As in the training task, more distal rewards had higher variance and therefore a backward-planning strategy was still adaptive.

As shown in Fig. 3B, we found significant transfer effects from the relatively simple three-step training task to the more complex five-step transfer task (Kruskal–Wallis: H = 13.02, P = 0.001). Specifically, participants who had practiced with metacognitive feedback achieved an average relative test score of 74.6 points (95% CI [67.2, 81.4]) on the transfer task, significantly better than participants who had practiced with action feedback (57.0 points; 95% CI [49.0, 64.7], d = 0.48, Z = 3.14, P = 0.002) or no feedback (57.5 points; 95% CI [48.8, 66.0], d = 0.44, Z = 2.91, P = 0.004). This transfer effect was fully mediated by people learning to plan backward (SI Appendix, SI Results).
Fig. 3. The benefits of metacognitive feedback transfer to more difficult problems and are retained for at least 24 h. (A) The near-transfer task is a five-step sequential decision problem where the rewards are normally distributed with a variance that increases exponentially from the first step to the last step. (B) Average performance on the transfer task given immediately after training. (C) The same, but with a 24-h delay between training and test. **P < 0.01, ***P < 0.001.
Experiment 3: Retention. Experiment 3 modified experiment 2 by adding a 24-h delay between the training task and the transfer task. We found that the transfer effect observed in experiment 2 was retained over time (H = 28.66, P < 0.001; Fig. 3C). Participants who had practiced with metacognitive feedback achieved a relative test score of 81.2 points (95% CI [74.9, 86.7]) on the delayed transfer task, significantly better than participants who had practiced with action feedback (49.0 points; 95% CI [39.6, 58.3], d = 0.88, Z = 5.08, P < 0.001) or no feedback (57.6 points; 95% CI [48.1, 66.3], d = 0.69, Z = 4.04, P < 0.001). This benefit was fully mediated by an increase in backward planning (SI Appendix, SI Results).
Experiment 4: Metacognitive Feedback Is Also Effective in an Unstructured Environment. The environments used in experiments 1 to 3 shared a simple structure that affords an intuitive strategy. To show that the effectiveness of our approach does not depend on this simplicity, we applied our method to an environment without any obvious structure. Importantly, we derived optimal feedback using exactly the same method, demonstrating the generality of our approach. In this new environment the rewards at all three levels are drawn from the same discrete uniform distribution with the possible values −10, −5, +5, and +10. The optimal strategy for this environment prioritizes collecting more information about the paths that appear most promising, prefers inspecting nodes that are informative about multiple paths, and uses a complex adaptive stopping rule (25).

As illustrated in Fig. 4, metacognitive feedback was also effective in the unstructured environment (H = 9.10, P = 0.011). Participants who trained with the cognitive tutor achieved a relative test score of 77.2 points (95% CI [70.4, 83.8]), significantly better than participants who had practiced with action feedback (63.0 points; 95% CI [52.4, 73.5], d = 0.42, Z = 2.16, P = 0.030) or without feedback (59.0 points; 95% CI [50.0, 67.7], d = 0.61, Z = 3.04, P = 0.002). This improvement was accompanied by an increased probability of using a sophisticated planning strategy that is similar to the optimal one, from 0.6% in the control condition without feedback to 16.8% in the experimental condition with metacognitive feedback (U = 1,136.5, P < 0.001). This strategy searches for a branch that starts with a positive outcome and then skips ahead to check the final outcomes along that branch, repeating this procedure until a path with positive initial and final outcomes is found. A more detailed analysis of how the tutor’s feedback affected people’s planning strategies in the unstructured environment is presented in SI Appendix.

Fig. 4. Metacognitive feedback improved people’s performance in an environment where the rewards are independently and identically distributed across all locations. *P < 0.05, **P < 0.01.
Experiment 5: Transfer to New Situations. The goal of experiment 5 was to determine which benefits of training with a cognitive tutor transfer not only to larger environments within the same domain but also to other domains. To answer this question, we examined whether the benefits of practicing planning in the rather artificial Web of Cash task transfer to the more naturalistic task of planning an inexpensive road trip by using a search engine to look up hotel prices, namely the Road Trip paradigm illustrated in Fig. 5A (39). Critically, the road trip was required to end at a city with an airport, and the prices of hotels in these cities were highly variable; this makes the backward-planning strategy taught by our cognitive tutor highly adaptive for planning road trips in the transfer task.

In three conditions, participants practiced in the Web of Cash environment with metacognitive feedback, action feedback, or no feedback. To mimic a potential real-world application of our cognitive tutor, the training was followed by a series of questions that encouraged participants to reflect on what they learned and in which other situations it might be applicable. To investigate the extent to which the Web of Cash task itself produces transferable benefits, we added an additional control condition in which participants watched a video about if–then plans (40). In all four conditions, participants performed the Road Trip task immediately after completing the training phase. Because we expected the cross-domain transfer effects to be relatively small, we preregistered one-tailed tests for all our critical directional hypotheses.*

Performance on the transfer task differed significantly between the training conditions (H = 8.87, P = 0.031; Fig. 5B). Specifically, participants who trained in the Web of Cash task with metacognitive feedback performed significantly better on the transfer task (52.17 points; 95% CI [46.58, 57.67]) than participants who practiced without feedback (43.91 points; 95% CI [38.08, 49.78], d = 0.18, Z = 2.00, P = 0.023) and those who watched a video about If–Then plans (42.38 points; 95% CI [37.40, 47.43], d = 0.23, Z = 2.56, P = 0.005). They also performed slightly better than those who practiced with action feedback (48.81 points; 95% CI [43.26, 54.37]) but this difference was not statistically significant (d = 0.08, Z = 0.84, P = 0.202).

The difference in overall performance was accompanied by a difference in the propensity to use backward planning strategies (H = 37.00, P < 0.001; Fig. 5C). As before, almost all participants receiving metacognitive feedback learned to use the adaptive backward planning strategy, compared to around half of the participants in the other training conditions. In all conditions, the rate of backward planning dropped dramatically on the first transfer trial. Nevertheless, participants in the metacognitive feedback condition checked the price of an airport hotel first on 46.3% of the transfer trials, significantly more often than the participants in each other condition (action feedback 39.4%, d = 0.17, Z = 1.81, P = 0.035; no feedback 39.4%, d = 0.17, Z = 1.82, P = 0.035; video 23.5%, d = 0.61, Z = 6.45, P < 0.001). This increase in backward planning fully mediated the effect of training on test performance (average causal mediation effects 12.9 points, 95% CI [7.6, 18.1], P < 0.001; average direct effects −3.8 points, 95% CI [−8.0, 0.4], P = 0.082).
*https://aspredicted.org/as5ib.pdf.

Fig. 5. Far transfer. (A) The task environment shares the core property that rewards are more variable in distant states, but bears little resemblance to the training task. (B) Transfer performance in each condition. (C) Proportion of participants planning backward on each trial. The dashed line indicates the switch to the transfer task.

Experiment 6: Mechanisms of Metacognitive Feedback. The intelligent tutor’s metacognitive feedback has two components: a delay penalty and a message describing what the optimal heuristic would have done. The delay penalty serves as a negative reward that should drive the basic reinforcement learning mechanisms identified by recent models of metacognitive learning (10, 15, 28, 29). The message is a form of supervised learning signal that could be used by social learning mechanisms, such as imitation learning or reasoning about the tutor’s pedagogical goals (41, 42). To discern the contribution of these two components, we compared the effects of metacognitive feedback with versus without delay penalties and with versus without information about the optimal heuristic (SI Appendix, Fig. S10).

Fig. 6. Test performance with different subsets of the two components of metacognitive feedback (delay penalties and information about the optimal heuristic).
As shown in Fig. 6, participants varied significantly in their performance based on which subset of optimal feedback elements they received (H = 9.11, P = 0.028). Consistent with our previous results, metacognitive feedback with both delay penalties and information about the optimal heuristic (87.5 points; 95% CI [80.6, 93.3]) significantly improved performance in the test block compared to practice without feedback (75.8 points; 95% CI [67.6, 83.2], d = 0.32, Z = 2.26, P = 0.024). But neither delay penalties alone (77.7 points; 95% CI [69.6, 85.4], d = 0.05, Z = 0.33, P = 0.743) nor information about the optimal heuristic alone (71.8 points; 95% CI [62.9, 80.2], d = −0.09, Z = −0.67, P = 0.505) had a significant effect relative to no feedback. These results suggest that both components of our tutor’s metacognitive feedback are critical.
Discussion

Decision-making skills are fundamental to the success of people, organizations, and society as a whole. To be able to make good decisions, we need clever decision strategies that direct our limited attention to the most important factors. Unfortunately, in many real-world environments the quality of the feedback people receive about their decisions is not good enough for them to discover such strategies on their own (13, 14, 16).

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning (10, 15, 28, 29).

We found that our intelligent tutor’s metacognitive feedback enabled people to rapidly discover effective decision strategies. Our training method outperformed two conventional approaches to cognitive training and improving human decision making (i.e., practice and performance feedback) and achieved promising transfer effects that were retained over time. Our cognitive tutor for decision making in situations where distant outcomes are more important than proximal outcomes enabled people to overcome their over-reliance on immediate rewards and to instead focus on the values of potential goals they could reach in multiple steps.
Our approach to designing intelligent cognitive tutors was successful in both a structured environment that affords a simple intuitive strategy (experiments 1 to 3) and an unstructured environment with a more complex optimal strategy (experiment 4). The results of experiment 5 suggested that people can transfer the strategies they learned from our cognitive tutor to more naturalistic tasks in different domains. Together, the findings from experiments 1 to 5 suggest that our intelligent tutor can help people learn how to plan better and make more far-sighted decisions. The findings of experiment 6 suggested that both the delay penalty and information provided by the tutor are important to its success, with the delay penalty playing an especially critical role. In an additional follow-up experiment (experiment 7; SI Appendix) we showed that people continue to use the strategy taught by the cognitive tutor even when it is not especially effective (nor particularly ineffective) in the new environment. This finding further supports the interpretation that the improvements in people’s decision making observed in experiments 1 to 5 were due to people learning a concrete planning strategy from the tutor’s feedback rather than due to people gaining insights into the structure of the environment. A further experiment (experiment 8; SI Appendix) suggested that although it is necessary that the training task is complex enough to capture the essential structure of the real-world environment, there are diminishing returns for increasing the complexity of the training task further.
The main contribution of this article is to lay the scientific and computational foundations for a principled approach to improving human decision making. The basic idea of this approach is to give people optimal metacognitive feedback on how they make decisions in the real world. To make this possible, we have laid down a theoretical foundation comprising a mathematical theory of optimal decision strategies, a conceptual theory of how people learn how to decide, and an automatic method for computing optimal metacognitive feedback. We have empirically validated this theoretical framework in a series of training experiments showing that giving metacognitive feedback is more effective at improving human decision making than giving performance feedback or letting people gain practical experience without feedback. These experiments support the efficacy of our general method for computing optimal metacognitive feedback.
The results of experiment 7 suggested that people find it challenging to discern situations in which the strategy taught by our intelligent cognitive tutor is beneficial from situations in which it is not. This highlights, as an important direction for future work, developing pedagogical interventions that help people learn which kinds of situations the taught strategy is suitable for and how to recognize them. We believe that this is an important problem that should be solved before intelligent cognitive tutors of the kind introduced in this article are deployed to the real world.
The benefits of cognitive training are often limited to tasks that are very similar to the exercises that people practiced on (43, 44). This is known as the transfer problem. Indeed, in experiment 5, we found that the benefits of training with the tutor were substantially diminished in the far-transfer task. Moreover, the effect size estimate we obtained in experiment 5 is likely an upper bound on any transfer we can expect to see in the real world, because the fact that both tasks were part of the same online experiment made it easier for participants to infer that the two tasks are related than it would have been in a real-world application.
To sidestep the far-transfer problem, future work could try to leverage automatically generated metacognitive feedback to train people on tasks that are similar or identical to those real-world tasks on which their performance shall be improved. Indeed, follow-up studies are beginning to demonstrate that our general method can be applied to real-world decisions that people make at their computers or smartphones. One concrete real-world application is training people to stay focused on the task they have chosen to work on. This can be done by creating an app that gives people feedback on how well the websites and programs they view on their computer match their intentions (45); that is, when people get distracted from a self-chosen task, they receive negative feedback and when they refocus on the task, they receive positive feedback. Another future application is to develop intelligent tutors that help people unlearn cognitive biases that lead to unfair discrimination. Since people’s cognitive biases are a direct consequence of the heuristics they use, teaching them to use heuristics that are adaptive for the specific decisions they face in their everyday life is a promising approach to helping people overcome systematic errors. Given that transfer is generally difficult to achieve, this should be pursued by augmenting specific real-life decision environments with metacognitive feedback. For instance, a program in which admission officers or recruiters evaluate applicants could be equipped with metacognitive feedback on which pieces of information they inspect first (e.g., gender versus qualifications) and which information they ignore. This real-world problem is an instance of multialternative, multiattribute decision making that can be formalized as a metalevel MDP (10, 46).
The metalevel MDP models of decision making in the real world will often be significantly larger than those we solved here. Therefore, computing the optimal metacognitive feedback for such real-world applications will require a machine-learning method that is more scalable than the backward induction algorithm we used here. In recent work, we have developed such methods and successfully applied them to larger metalevel MDPs (34, 46, 47).

Modeling real-world decision problems can be difficult because the structure of real-world problems is only partly known. To address this problem, our strategy discovery methods can be combined with Bayesian inference on the structure of the decision problem in a way that makes them more robust to model misspecification (48).
One limitation of our proof of concept is that participants might perform additional unobserved planning operations for which they receive no feedback. This does not take away from the value of the feedback that our tutor gives on the planning operations that we do observe, but it does suggest that the effectiveness of our cognitive tutors can be improved even further. One simple extension that would allow our cognitive tutor to give feedback on an even larger proportion of people’s planning operations would be to reveal each piece of information only briefly. As a result, people might then click on the same piece of information multiple times when it is used by multiple planning operations. Future work can also use eye tracking to obtain an even more accurate measure of people’s planning operations. Another limitation is that we approximated the cost of planning by the fees that participants paid to collect information. Future work should obtain more realistic measures of the cost of planning by measuring the mental effort and time that it takes people to process the acquired information. Measuring and modeling people’s planning operations and their cognitive costs more accurately may lead to cognitive tutors that are even more effective. Another important direction for future work is the development of tasks that make planning operations measurable without reducing their associated computational costs.
How much a person benefits from receiving metacognitive feedback can be limited by how the person represents the decision problem. This is a concern because the way in which people represent a task determines which features they attend to, to learn which actions will be rewarded (49). Therefore, the benefit that learners can derive from metacognitive feedback is limited by their mental representation of the problem they are trying to solve. However, this does not mean that a person who does not already represent the problem in the best possible way cannot learn an optimal strategy. To the contrary, people can flexibly improve their representations (50). In particular, they can learn representations that enable them to predict which actions will be rewarded (51–53). This likely also applies to planning operations. We therefore expect that metacognitive feedback can also help people learn more adaptive problem representations that make it easier for them to discover effective decision strategies. Testing this prediction is an interesting direction for future research. While reinforcement informs representation learning, there are also many other representation learning mechanisms, such as categorization, that shape how people represent decision problems (49). Therefore, future work should investigate how our intelligent tutors can be extended to more effectively help people (learn to) represent decision problems in ways that make it easy for them to discover and apply resource-rational heuristics. One simple approach could be to highlight the aspects of the problem’s structure that the optimal strategy relies on.
To scale up our approach to improving human decision making to more complex real-world scenarios, it will be important to ensure that people can understand why the feedback makes sense and what the tutor is trying to teach them. To achieve this level of interpretability, we are currently developing automatic methods for generating human-interpretable descriptions of the optimal decision strategies taught by our intelligent tutors (54). Furthermore, the tutor’s metacognitive feedback can be enhanced with text that explains why it would have been better to perform an alternative operation. Such explanations can be given automatically by comparing the common features of optimal operations (e.g., “inspects a final outcome”) to the corresponding features of the chosen operation (e.g., “inspects an immediate outcome”). We believe that augmenting our intelligent tutors with interpretable descriptions of the strategies that are being taught and explanations of why an alternative operation would have been better will enable people to learn from the tutor’s feedback even when the problem is more complex than the simple tasks we used to provide a proof of concept.

The findings presented in this article provide a proof of concept for a general method for improving people’s decision-making abilities. Our computational framework for discovering and teaching resource-rational cognitive strategies could lead to a principled approach to improving the human mind. Future work will apply our approach to increasingly more realistic scenarios, such as planning how to reach a project milestone and deciding which company to invest in, and address the transfer problem and other challenges associated with improving decision making in the real world.
Materials and Methods

Our data, the analysis code, and a demonstration of the experimental paradigm are available at https://github.com/fredcallaway/ai-for-improving-human-planning. The experiments reported in this article were approved by the institutional review board of the University of California, Berkeley, under Institutional Review Board (IRB) Protocol 2015-05-755 (“Cognitive Research Using Amazon Mechanical Turk”); the institutional review board of Princeton University under Protocol 10859 (“Computational Cognitive Science”); and the Independent Ethics Committee of the University of Tübingen under IRB Protocol 667/2018BO2 (“Online-Experimente über das Erlernen von Entscheidungsstrategien”). All participants of all experiments gave informed consent in advance.

To quantify participants’ task performance in a way that is interpretable and comparable across all of our experiments, we measured their score relative to the expected scores of an (approximately) optimal strategy and guessing randomly. Given the raw score $s$ achieved on a given trial, the relative score is given by

$$\text{relative-score}(s) = 100 \cdot \frac{s - \bar{s}_{\text{rand}}}{\bar{s}_{\text{opt}} - \bar{s}_{\text{rand}}}, \tag{2}$$

where $\bar{s}_{\text{opt}}$ is an (approximate) upper bound on the possible average score and $\bar{s}_{\text{rand}}$ is the expected score achieved by randomly selecting paths without doing any planning. We set the upper bound to the performance of the optimal policy when this was possible to compute. For the large near-transfer environment (which is too large to solve exactly) and for the far-transfer environments (where the rewards are not independently identically distributed), we instead approximated this upper bound by the performance of a goal-setting strategy, which approximates the strategy taught by the tutor. This strategy checks terminal states until finding one with reward above a threshold and then selects a path to that state or, if no such state is found, the terminal state with maximal reward. The threshold was optimized to maximize the strategy’s performance.
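In code, Eq. 2 is a single rescaling; the following sketch (ours, not the authors’ analysis code) makes the normalization explicit:

```python
def relative_score(s, s_opt, s_rand):
    """Eq. 2: rescale a raw trial score s so that the expected score of
    random path selection (s_rand) maps to 0 and the (approximately)
    optimal strategy's expected score (s_opt) maps to 100.
    """
    return 100 * (s - s_rand) / (s_opt - s_rand)

# Illustrative usage with made-up values, not numbers from the paper:
# relative_score(s=20.0, s_opt=30.0, s_rand=-10.0) -> 75.0
```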
All permutation tests were performed using the R package “coin,” using the default asymptotic approximation method (37). The causal mediation analysis was performed using the R package “mediation” (38). The mediator “backward planning” was operationalized by whether or not the participants’ first click fell on one of the nodes they would reach after their third and final move. Both analyses were performed at the participant level; we therefore averaged the values of the mediator (backward planning) and the dependent variable (“relative score”) across the 20 trials of the test block. Confidence intervals were computed (also at the participant level) by Monte Carlo permutation with 10,000 samples.
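The paper’s permutation tests were run in R with the “coin” package; as a rough illustration of the underlying idea only (not the published analysis code), here is a generic Monte Carlo permutation test for a difference in group means:

```python
import random

def permutation_test(group_a, group_b, n_samples=10_000, seed=0):
    """Monte Carlo permutation test for a difference in means between two
    groups of participant-level scores. Returns a two-sided P value.
    """
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = sum(group_a) / n_a - sum(group_b) / len(group_b)
    extreme = 0
    for _ in range(n_samples):
        rng.shuffle(pooled)  # relabel participants at random
        perm_a, perm_b = pooled[:n_a], pooled[n_a:]
        diff = sum(perm_a) / n_a - sum(perm_b) / len(perm_b)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_samples
```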
Experiment 1. We recruited 151 participants on Amazon Mechanical Turk (average age, 34.5 y; range, 18 to 72 y; 72 females). For all experiments, balanced condition assignment and repeat-participant exclusion were performed using psiTurk (55). Each participant was assigned to receive metacognitive feedback (50 participants), action feedback (50 participants), or no feedback (51 participants) during the training block. The experiment comprised instructions, a training block, a test block, and an exit survey. The training block comprised 10 trials, and the test block comprised 20 trials. The exit survey asked participants about what they had learned, their age, and their gender identity.

Each trial presented participants with an instance of the three-step planning problem described above (Fig. 1A). The key structure of this problem is that the range of possible rewards is smallest in the first step, larger in the second step, and largest in the third step. To operationalize the cost of planning, we charged participants one virtual dollar per click. To simplify the implementation of metacognitive feedback, we required that all clicks be made before the first move (note that it is never optimal to click after moving because the state transitions are deterministic). To eliminate the time cost of engaging in planning compared to speeding through the experiment, participants who spent less than 7 s on planning (e.g., only 3 s) had to wait for the remaining time after executing their moves (e.g., for 4 s). In the test block, participants started with an endowment of 50 virtual dollars and earned a bonus of $0.01 for every $5 they made in the game.
In both feedback conditions, the feedback consisted of a delay penalty (negative reinforcement) as well as a message indicating what the best thing to do was. In the metacognitive feedback condition, feedback was given after each planning operation, including both clicks and the decision to stop planning and move the spider. The delay penalty was $2 + a \cdot \text{loss}(b, c)$ seconds (Eq. 1) if the participant made an error, or 0 seconds if the participant’s planning operation was optimal. We chose the value of the scaling factor $a$ so that the delay for acting without planning was 42 s (SI Appendix, SI Methods). If the optimal operation was to click but the participant did not make an optimal click, the optimal nodes to click were highlighted and a message was displayed: “You should have inspected one of the highlighted nodes.” If the optimal operation was to move but the participant clicked, the message read “You shouldn’t have inspected any more nodes.” In the action feedback condition, the tutor gave feedback on participants’ first moves but not on their planning operations (clicks). The delay penalty was determined using the same equation, but replacing the metalevel Q function with the “task-level” Q function; that is, the loss function compares the total reward one would receive from taking the optimal path following that initial action to the maximal total reward one could obtain by starting with the best possible move. If the participant chose the wrong direction, the message read “You should have moved left/up/right,” depending on which direction would have been optimal given full information.
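The delay penalty rule can be stated compactly. The sketch below (ours) assumes the loss from Eq. 1 has already been computed and treats the scaling factor a as a given constant:

```python
def delay_penalty_seconds(loss, scale_a, base=2.0):
    """Delay penalty in the metacognitive feedback condition:
    2 + a * loss(b, c) seconds after a suboptimal planning operation,
    and no delay after an optimal one (loss == 0). `scale_a` is the
    scaling factor a, calibrated in the paper so that acting without
    any planning incurs a 42-s delay.
    """
    if loss <= 0:
        return 0.0
    return base + scale_a * loss
```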
Experiment 2. The methods of experiment 2 were the same as in experiment 1 unless stated otherwise. We recruited 297 participants on Prolific (average age, 32.2 y; range, 18 to 71 y; 158 females). We excluded 25 participants who reported possible participation in a previous version of the experiment (e.g., on Mechanical Turk or with a different Prolific account).

Each participant was assigned to receive metacognitive feedback (99 participants), action feedback (100 participants), or no feedback (98 participants).

The transfer task illustrated in Fig. 3A was a five-step sequential decision problem. It was framed as routing an airplane across a network of airports. The rewards of nodes at step $i \in \{1, 2, 3, 4\}$ were drawn from normal distributions with mean zero and SD $\sigma_i = 2^{i-1}$, and for the rewards at the last step ($i = 5$) the SD was $\sigma_5 = 25$. This reward structure was chosen to ensure that the backward-planning heuristic identified as optimal for the smaller training environment was near optimal for the transfer environment as well. Unlike in the training task, the cost of planning was $3 per click.
Experiment 3. This experiment employed the training block and the transfer task from the transfer experiment (experiment 2), with an added 24-h delay between the training block and the transfer task.

We recruited a total of 297 participants on Prolific (average age, 31.9 y; range, 18 to 76 y; 119 females). We excluded 18 participants who reported possible participation in a previous version of the experiment. Each participant was assigned to the experimental condition that trained with the intelligent tutor (101 participants), the control condition that practiced with action feedback (99 participants), or the control condition that practiced without feedback (97 participants).

The 24-h delay was accomplished by splitting the experiment into two stages, the second of which could only be begun 24 h after beginning the first stage. The first stage comprised instructions and a training block. The second stage comprised instructions reminding participants how the game works, the transfer block where participants were posed 20 five-step planning problems (Fig. 3A), and the closing survey used in experiments 1 and 2. About 83.5% of the participants of stage 1 returned to stage 2 (i.e., 248 of 297). The proportion of participants who dropped out after the first stage did not differ substantially between the three conditions (metacognitive feedback, 15.8%; action feedback, 16.2%; no feedback, 17.5%).
Experiment 4. We recruited 179 participants on Prolific (average age, 32.4 y; range, 18 to 74 y; 87 females). We excluded 16 participants who reported possible participation in a previous version of the experiment. Sixty participants were assigned to the experimental condition with metacognitive feedback, 58 were assigned to the control condition with action feedback, and 61 participants were assigned to the control condition without feedback.

The task environment differed from the one used in experiment 1 in that the rewards at all three levels were drawn from a discrete uniform distribution over the possible rewards −10, −5, +5, and +10. In the test block the task environment was the same as in the training block and all three experimental groups solved 20 planning problems without any feedback. The penalty delays were calculated such that the metacognitive feedback for acting without planning was the same as in the cognitive tutor used in experiments 1 to 3.
Experiment5. We recruited 1,380 participants on Prolific (average age, 25.23
y; range, 18.0 to 57.0 y; 459 females). We excluded 112 participants who
reported possible participation in a previous version of the experiment.
The experiment comprised general instructions, a training block, questions
designed to promote transfer (transfer prompts), and a test block in which
participants were evaluated on a new transfer task. The four conditions of the
experiment differed only in the training block. In this training block, threegroups practiced planning in 10 trials of the Web of Cash game with optimalmetacognitive feedback (233 participants), action feedback (235 participants),
or no feedback (242 participants). An additional control group (253 participants)
watched a video about If–Then plans (40) instead of practicing in the Web of Cashenvironment.
The transfer prompts told participants that the Web of Cash game is a
metaphor for life and asked them to articulate the lesson they had learned, thinkabout a situation in which it might be applicable, and describe how it could
be applied to efficiently plan a road trip (see SI Appendix ,SI Methods for more
detail).
In the test block, all participants completed eight trials of a modified version
of the Road Trip paradigm introduced in ref. 39 (Fig. 5 A). In this task, participants
play the role of a travel agent tasked to plan an inexpensive road trip fromthe client’s current location to any city with an airport. To make an informedrecommendation,they can look up the price of each city’s most affordable hotel by
typing the city’s name into a search engine. The participant has to be economical
with time because the travel agent is working from a very expensive internet cafe,which charges them $0.25/s, and the search engine takes 4 s to find the cheapest
rate in the queried city.In each round the travel agent starts with a budget of $800
for the trip and the agent’s internet research and the participant earned a bonusof $0.01 for every $2 left from that budget. Unbeknownst to the participants, the
hotels in cities with airports have a much wider range of possible prices ($100,
$320, $350, or $380) than the hotels in other cities ($130, $135, $140, or $145).Furthermore,exactly one of the airport hotels always had the lowest price of $100,
making it possible to always find an inexpensive route.
The transfer task differed from the training task on several important dimen-
sions. First, the transfer task is much more naturalistic than the training task: It mimics the real-life challenge of planning a road trip, it captures that researching
prices is effortful and time consuming, and it captures that each destination can
be reached via multiple routes. Second, while the training task asks people to maximize profits, the transfer task asks them to minimize costs. Third, unlike the
training task, the transfer task had to be performed under time pressure. Last but
not least, the transfer task has a very different user interface than the training task (cf. Fig. 5A vs. Fig. 1A). In particular, participants obtained information by typing
text into a search box (vs. clicking) and they selected their route by clicking on the
roads between cities (vs. arrow keys).
Experiment 6. Experiment 6 disentangled the effects of reinforcement versus
information about the optimal heuristic using a 2 ×2 factorial design withthe factors delay penalties (present vs. absent) and information about what the
optimal heuristic would have done (present vs. absent). We recruited 417 partic-
ipants on Amazon Mechanical Turk (average age, 36.0 y; range, 18 to 87 y; 204
females). Participants were assigned to receive no feedback (104 participants),only information (104 participants), only delay penalties (104 participants), or
the full feedback with both information and delay penalties (105 participants).
Following the instructions, participants completed 10 training trials and 15
test trials. Finally, each participant completed the exit survey described above.During the training trials each of the four conditions received a different type
of feedback. Depending on the experimental condition, the feedback included
a delay penalty, information about what the optimal heuristic would have done,both, or neither one. During the test trials, none of the groups received feedback.
Both training and test trials used the three-step planning task from experiment 1.
In contrast to previous experiments, all clicks in all conditions were followed
by an unconditional delay of 1 s before revealing the reward at the clicked state.
This unconditional delay afforded enough time to show the information feedback
when applicable, without introducing any behavior-dependent delays or unintended differences between conditions. Participants in the conditions with delay
penalties received an additional delay penalty whose duration was proportional
to how much worse their planning operation was than the optimal one. During the delay penalty, participants were shown the message “Delay penalty for poor planning: x seconds” (where x is the duration of the delay). When the participant’s
planning operation was optimal, the message read “Good job!” and there was no
additional delay. In the conditions with information, after a suboptimal planning operation, a visual illustration of what the optimal heuristic would have done differently (SI Appendix, Fig. S10) was displayed during the entire delay period, which was at least 1 s and longer in the condition with delay penalties. Finally, to roughly match the amount of time spent in the experiment, in conditions without delay penalties, all training trials were followed by an 11-s delay during which we told participants that the next trial was being prepared. In conditions with delay penalties, this posttrial delay was 1 s. The difference of 10 s was chosen to offset
the average total delay penalty per trial in experiment 1.
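A sketch of the per-click feedback logic implied by this description; the proportionality constant `seconds_per_unit` is an assumption, since the paper states only that the penalty was proportional to the click's suboptimality.

```python
def click_feedback(suboptimality, delay_penalties, information,
                   seconds_per_unit=0.5):
    """Return (message, show_optimal_illustration, total_delay_seconds)
    for one planning operation in experiment 6's 2x2 design."""
    delay = 1.0  # unconditional 1-s delay before the clicked reward appears
    if suboptimality <= 0:
        return "Good job!", False, delay
    message = ""
    if delay_penalties:
        penalty = seconds_per_unit * suboptimality
        delay += penalty
        message = f"Delay penalty for poor planning: {penalty:g} seconds"
    # In the information conditions, the illustration of what the optimal
    # heuristic would have done is shown for the entire delay period.
    return message, information, delay
```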
Data Availability. Anonymized datasets have been deposited in GitHub
(https://github.com/fredcallaway/ai-for-improving-human-planning).
ACKNOWLEDGMENTS. This work was supported by Office of Naval Research
Grant MURI N00014-13-1-0341, Air Force Office of Scientific Research Grant
FA9550-18-1-0077, and a grant from the Templeton World Charity Foundation (to T.L.G.) as well as Grant CyVy-RF-2019-02 (to F.L.). We thank Tania Lombrozo, Peter Dayan, Thomas Hills, and Mike Mozer for helpful comments and discussions.
Author affiliations: aDepartment of Psychology, Princeton University, Princeton, NJ 08540; bRationality Enhancement Group, Max Planck Institute for Intelligent Systems, 72076 Tübingen, Germany; cDepartment of Cognitive Sciences, University of California, Irvine, CA 92697-5100; dDepartment of Psychology, University of California, Berkeley, CA 94720-1650; and eDepartment of Computer Science, Princeton University, Princeton, NJ 08540
1. A. K. MacLeod, E. Coates, J. Hetherton, Increasing well-being through teaching goal-setting and
planning skills: Results of a brief intervention. J. Happiness Stud. 9, 185–196 (2008).
2. G. A. Steiner, Strategic Planning (Simon and Schuster, 2010).
3. A. Strathman, F. Gleicher, D. S. Boninger, C. S. Edwards, The consideration of future consequences:
Weighing immediate and distant outcomes of behavior. J. Pers. Soc. Psychol. 66, 742
(1994).
4. Y. R. Jain, F. Callaway, F. Lieder, “Measuring how people learn how to plan” in CogSci 2019, A. Goel,
C. Seifert, C. Freksa, Eds. (Cognitive Science Society, Austin, TX, 2019), pp. 1956–1962.
5. W. Bruine de Bruin, A. M. Parker, B. Fischhoff, Individual differences in adult decision-making
competence. J. Pers. Soc. Psychol. 92, 938–956 (2007).
6. C. F. Camerer, R. M. Hogarth, The effects of financial incentives in experiments: A review and
capital-labor-production framework. J. Risk Uncertain. 19, 7–42 (1999).
7. R. P. Larrick, “Debiasing” in Blackwell Handbook of Judgment and Decision Making, D. J. Koehler,
N. Harvey, Eds. (Blackwell Publishing, Malden, MA, 2002), pp. 316–338.
8. D. R. Shanks, R. J. Tunney, J. D. McCarthy, A re-examination of probability matching and rational
choice. J. Behav. Decis. Making 15, 233–250 (2002).
9. J. Rieskamp, P. E. Otto, SSL: A theory of how people learn to select strategies. J. Exp. Psychol. Gen. 135,
207–236 (2006).
10. F. Lieder, T. L. Griffiths, Strategy selection as rational metareasoning. Psychol. Rev. 124, 762–794
(2017).
11. B. R. Newell, D. A. Lagnado, D. R. Shanks, Straight Choices: The Psychology of Decision Making
(Psychology Press, 2015).
12. R. M. Hogarth, B. J. Gibbs, C. R. McKenzie, M. A. Marquis, Learning from feedback: Exactingness and
incentives. J. Exp. Psychol. Learn. Mem. Cogn. 17, 734–752 (1991).
13. R. M. Hogarth, Educating Intuition (University of Chicago Press, 2001).
14. D. Kahneman, G. Klein, Conditions for intuitive expertise: A failure to disagree. Am. Psychol. 64,
515–526 (2009).
15. P. M. Krueger, F. Lieder, T. L. Griffiths, “Enhancing metacognitive reinforcement learning using reward
structures and feedback” in CogSci 2017 , G. Gunzelmann, A. Howes, T. Tenbrink, E. J. Davaelaar, Eds.
(Cognitive Science Society, 2017), pp. 2469–2474.
16. J. Shanteau et al., Competence in experts: The role of task characteristics. Organ. Behav. Hum. Decis.
Process. 53, 252–266 (1992).
17. J. von Neumann, O. Morgenstern, The Theory of Games and Economic Behavior (Princeton University
Press, Princeton, NJ, 1944).
18. R. P. Larrick, J. N. Morgan, R. E. Nisbett, Teaching the use of cost-benefit reasoning in everyday life.
Psychol. Sci. 1, 362–370 (1990).
19. G. T. Fong, R. E. Nisbett, Immediate and delayed transfer of training effects in statistical reasoning.
J. Exp. Psychol. Gen. 120, 34–45 (1991).
20. P. W. Cheng, K. J. Holyoak, R. E. Nisbett, L. M. Oliver, Pragmatic versus syntactic approaches to training
deductive reasoning. Cognit. Psychol. 18, 293–328 (1986).
21. G. Gigerenzer, Rationality for Mortals: How People Cope with Uncertainty (Oxford University Press,
2008).
22. I. Van Rooij, The tractable cognition thesis. Cogn. Sci. 32, 939–984 (2008).
23. G. Gigerenzer, P. M. Todd, Simple Heuristics That Make Us Smart (Oxford University Press, 1999).
24. R. Hertwig, T. Grüne-Yanoff, Nudging and boosting: Steering or empowering good decisions. Perspect.
Psychol. Sci. 12, 973–986 (2017).
25. F. Callaway et al., “A resource-rational analysis of human planning” in CogSci 2018, C. Kalish, M. Rau,
J. Zhu, T. Rogers, Eds. (Cognitive Science Society, 2018), pp. 178–183.
26. A. Jarvstad, Beyond nudging: Generalisable and transferable learning in human decision-making
(2021). PsyArXiv. https://doi.org/10.31234/osf.io/9q6xk.
27. J. W. Payne, J. R. Bettman, E. J. Johnson, The Adaptive Decision Maker (Cambridge University Press,
Cambridge, England, 1993).
28. F. Lieder, A. Shenhav, S. Musslick, T. L. Griffiths, Rational metareasoning and the plasticity of cognitive
control. PLOS Comput. Biol. 14, e1006043 (2018).
29. Y. R. Jain et al., “How do people learn how to plan?” in Conference on Cognitive Computational
Neuroscience , D. Sussilo, Ed. (CCN, Berlin, Germany, 2019), pp. 826–829.
30. F. Callaway, F. Lieder, P. M. Krueger, T. L. Griffiths, “Mouselab-MDP: A new paradigm for tracing how
people plan” in The 3rd Multidisciplinary Conference on Reinforcement Learning and Decision
Making, Ann Arbor, MI (2017).
31. R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction (MIT Press, 2018).
32. F. Lieder, T. L. Griffiths, Resource-rational analysis: Understanding human cognition as the optimal use
of limited computational resources. Behav. Brain Sci. 43, e1 (2019).
33. N. Hay, S. Russell, D. Tolpin, S. Shimony, “Selecting computations: Theory and applications” in
Proceedings of the Twenty-Eighth Conference of Uncertainty in Artificial Intelligence, N. de Freitas,
K. Murphy, Eds. (AUAI Press, Corvallis, OR, 2012), pp. 346–355.
34. F. Callaway, S. Gul, P. Krueger, T. L. Griffiths, F. Lieder, “Learning to select computations” in Proceedings
of the Thirty-Fourth Conference of Uncertainty in Artificial Intelligence, A. Globerson, R. Silva, Eds. (AUAI
Press, Monterey, CA, 2018), pp. 776–785.
35. M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming (John Wiley &
Sons, 2014).
36. A. Y. Ng, D. Harada, S. Russell, “Policy invariance under reward transformations: Theory and application
to reward shaping” in Proceedings of the 16th Annual International Conference on Machine Learning ,
I. Bratko, S. Dzeroski, Eds. (Morgan Kaufmann, San Francisco, CA, 1999), pp. 278–287.
37. T. Hothorn, K. Hornik, M. A. van de Wiel, A. Zeileis, Implementing a class of permutation tests: The coin
package. J. Stat. Softw. 28, 1–23 (2008).
38. D. Tingley, T. Yamamoto, K. Hirose, L. Keele, K. Imai, Mediation: R package for causal mediation
analysis (2014). https://cran.r-project.org/web/packages/mediation/mediation.pdf. Accessed 30
November 2021.
39. F. Callaway et al., Rational use of cognitive resources in human planning. Nature Human Behavior,
in press.
40. P. M. Gollwitzer, Implementation intentions: Strong effects of simple plans. Am. Psychol. 54, 493
(1999).
41. P. Shafto, N. D. Goodman, T. L. Griffiths, A rational account of pedagogical reasoning: Teaching by, and
learning from, examples. Cognit. Psychol. 71, 55–89 (2014).
42. M. K. Ho, J. MacGlashan, M. L. Littman, F. Cushman, Social is special: A normative framework for
teaching with and learning from evaluative feedback. Cognition 167, 91–106 (2017).
43. G. Sala, F. Gobet, Does far transfer exist? Negative evidence from chess, music, and working memory
training. Curr. Dir. Psychol. Sci. 26, 515–520 (2017).
44. G. Sala, F. Gobet, Cognitive training does not enhance general cognition. Trends Cogn. Sci. 23, 9–20
(2019).
45. M. Wirzberger et al., “How to navigate everyday distractions: Leveraging optimal feedback to train
attention control” in CogSci 2020, S. Denison, M. Mack, Y. Zu, B. C. Armstrong, Eds. (Cognitive Science
Society, Austin, TX, 2020), p. 1736.
46. P. Krueger, F. Callaway, S. Gul, T. Griffiths, F. Lieder, Discovering Rational Heuristics for Risky Choice.
PsyArXiv. https://doi.org/10.31234/osf.io/mg7dn.
47. S. Consul, L. Heindrich, J. Stojcheski, F. Lieder, Improving human decision-making by discovering
efficient strategies for hierarchical planning. https://arxiv.org/abs/2102.00521. Accessed 31 January 2021.
48. A. Kemtur et al., “Leveraging machine learning to automatically derive robust planning strategies
from biased models of the environment” in CogSci 2020, S. Denison, M. Mack, Y. Zu, B. C. Armstrong, Eds.
(Cognitive Science Society, Austin, TX, 2020), pp. 2405–2411.
49. A. Radulescu, Y. Niv, I. Ballard, Holistic reinforcement learning: The role of structure and attention.
Trends Cogn. Sci. 23, 278–292 (2019).
50. A. Szollosi, B. R. Newell, People as intuitive scientists: Reconsidering statistical explanations of
decision making. Trends Cogn. Sci. 24, 1008–1018 (2020).
51. Y. Niv et al., Reinforcement learning in multidimensional environments relies on attention
mechanisms. J. Neurosci. 35, 8145–8157 (2015).
52. Y. C. Leong, A. Radulescu, R. Daniel, V. DeWoskin, Y. Niv, Dynamic interaction between reinforcement
learning and attention in multidimensional environments. Neuron 93, 451–463 (2017).
53. M. Jones, F. Canas, “Integrating reinforcement learning with models of representation learning” in
Proceedings of the Annual Meeting of the Cognitive Science Society, S. Ohlsson, R. Catrambone, Eds.
(Cognitive Science Society, Austin, TX, 2010), vol. 32, pp. 1258–1263.
54. J. Skirzynski, F. Becker, F. Lieder, Automatic discovery of interpretable planning strategies. Mach.
Learn. 110, 2641–2683 (2021).
55. T. M. Gureckis et al., psiTurk: An open-source framework for conducting replicable behavioral
experiments online. Behav. Res. Methods 48, 829–842 (2016).
|
26f24082-18e1-4e07-a1a0-e299555f17bd | trentmkelly/LessWrong-43k | LessWrong | The grass is always greener in the environment that shaped your values
Sharing from my personal blog: https://spiralprogress.com/2024/11/16/the-grass-is-always-greener-in-the-environment-that-shaped-your-values/
As a college student, I spent a few wonderful months living in New York. It was a remarkable summer, the first time I had lived on my own, and the first summer I had a bit of my own money to live like an adult and get out into the world. My existence felt marked by serendipity, delirium, and opportunity.
After graduating, I moved to San Francisco for work and felt, for that first year, endlessly disappointed. BART is worse than the MTA. Golden Gate Park is worse than Central Park. SF MoMA is much worse than the original MoMA.
During this time, I incessantly fantasized about packing up and moving back to New York the moment my equity vested.
This feeling persisted until, speaking to a friend who had made the same move, and was lamenting it in the same way, I took the benefit of distance to reflect on my thoughts and realize how deluded they were. It was only then that I understood San Francisco was not a worse version of New York. It was a different city altogether.
Though San Francisco could not compete with New York on the things that made New York special, it had its own charms and appeals. It was the best place in the world to meet interesting people who were earnestly and diligently working to build new and exciting things. It had incredible access to California’s national parks, which I still feel are amongst the greatest natural wonders anywhere in the world. While New York seemed to go on forever, such that every place in the five boroughs felt like a different but still very New York New York, San Francisco offered easy access to the calm suburbs and garage startups of South Bay, the dynamic intellectual and culinary scene in Berkeley, and the opulent wineries of Napa.
I refocused on my surroundings, learned to appreciate what I had, and stayed in California for another decade. It wasn’t until years later thinking |
877e8959-0090-434f-8f7a-d5c5b914587e | trentmkelly/LessWrong-43k | LessWrong | Is a Purely Rational World a Technologically Advanced World?
What would our world be today if humans had started off with a purely rational intelligence?
It seems as though a dominant aspect of rationality deals with risk management. For example, an irrational person might feel that the thrill of riding a zip line for a few seconds is well worth the risk of injuring themselves, contracting a flesh-eating bug, and losing a leg along with both hands (sorry, but that story has been freaking me out the past few days; I in no way mean to trivialize the woman’s situation). A purely rational person would (I’m making an assumption here because I am certainly not a rational person) recognize the high probability of something going wrong and determine that the risks were too steep when compared with the minimal gain of a short-lived thrill.
But how does a purely rational intelligence—even an intelligence at the current human level with a limited ability to analyze probabilities—impact the advancement of technology? As an example, would humanity have moved forward with the combustion engine and motor vehicles as purely rational beings? History shows us that humans tend to leap headlong into technological advancements with very little thought regarding the potential damage they may cause. Every technological advancement of note has had negative impacts that may have been deemed too steep as probability equations from a purely rational perspective.
Would pure rationality have severely limited the advancement of technology?
Taken further, would a purely rational intelligence far beyond human levels be so burdened by risk probabilities as to render it paralyzed… suspended in a state of infinite stagnation? OR, would a purely rational mind simply ensure that more cautious advancement take place (which would certainly have slowed things down)?
Many of humanity’s great success stories begin as highly irrational ventures that had extremely low chances for positive results. Humans, being irrational and not all that intelligen |
21dadc6e-2d04-4edb-96f1-2df310f69e4d | trentmkelly/LessWrong-43k | LessWrong | Iterated Prisoner's Dilemma in software patents
This post contains some thoughts around software-patent strategies for large tech companies, in particular how the ability to block others' applications seems to set up an Iterated Prisoner's Dilemma and may change the strategic landscape for patents entirely.
Joel Spolsky writes of recent successes in blocking bad patent applications:
Micah showed me a document from the USPTO confirming that they had rejected the patent application, and the rejection relied very heavily on the document I found. This was, in fact, the first “confirmed kill” of Ask Patents, and it was really surprisingly easy.
and suggests that this may lead to a "Mexican Standoff" among major software companies:
My dream is that when big companies hear about how friggin’ easy it is to block a patent application, they’ll use Ask Patents to start messing with their competitors. How cool would it be if Apple, Samsung, Oracle and Google got into a Mexican Standoff on Ask Patents? If each of those companies had three or four engineers dedicating a few hours every day to picking off their competitors’ applications, the number of granted patents to those companies would grind to a halt. Wouldn’t that be something!
It seems to me that this would be something of a Prisoner's Dilemma situation for the companies: Presumably, each of them is best off if it is the only one that can get any software patents (it defects by blocking the others, they cooperate by not setting up a patent-blocking team), better off if everyone can get patents (everyone cooperates by not having a blocking team), and worst off if nobody can get patents (everyone has a blocking team which they have to pay for). It is Iterated because the decision to block or not block can be made anew every month, or quarter, or whatever. So the question is, will these companies filled with smart people be able to recognise an IPD, and will they cooperate?
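For readers who want to play with this, here is a minimal sketch of the game as described. The payoff numbers are invented; only the standard PD ordering T > R > P > S is assumed, consistent with the post's ranking of sole blocker > mutual patenting > mutual blocking, with "defect" meaning "run a patent-blocking team".

```python
# Payoffs in arbitrary "competitive advantage" units; only the ordering
# T > R > P > S matters.  C = no blocking team, D = run a blocking team.
T, R, P, S = 5, 3, 1, 0

PAYOFF = {("C", "C"): (R, R),   # everyone gets patents, nobody pays a team
          ("C", "D"): (S, T),   # you get blocked while your rival patents freely
          ("D", "C"): (T, S),
          ("D", "D"): (P, P)}   # nobody gets patents, everyone pays a team

def iterate(strat_a, strat_b, rounds=12):
    """Play the iterated game; each strategy sees the opponent's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a, score_b = score_a + pa, score_b + pb
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_block = lambda opp: "D"

print(iterate(tit_for_tat, tit_for_tat))   # (36, 36): mutual restraint pays
print(iterate(tit_for_tat, always_block))  # (11, 16): a blocking war hurts both
```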
Some factors to consider: Setting up a patent-blocking team requires some small amount o |
574206d7-0081-4340-9884-a0eaba7f887f | trentmkelly/LessWrong-43k | LessWrong | When the Wannabe Rambo Comedian Cried
I wanted to share some thoughts and see if they would help me be less wrong. Instead, I ended up as a war correspondent for Gwern, reporting unofficially. He had asked me for stories from my military firefighter days.
Well, since I'm here, I might as well take advantage of this opportunity, even if later I won't have any more stories to tell my grandchildren ("Oh grandpa! You already posted this on Lesswrong.") and will have to continue in the metaverse of the future.
----------------------------------------
Imagine you grew up in a relatively well-educated family and were lucky enough to have some resources in life. But you also suffered some violence. Imagine feeling fragile, dyslexic, with attention deficit disorder, and asthmatic... Then, at six years old, you watch a movie: Rambo. And you think, I want that power for my life.
At the same time, you push yourself to improve both physically and mentally. But no matter how much you train, you still can't fight back against one of the bullies at school. Then, you realize that humor works better—it’s like a martial art: Comedy-jitsu. So, you shape your life around jokes, not always good ones. But you love seeing people experience the pleasure of laughter—it’s almost like an orgy of joy.
With this weapon, you start training early. You push yourself to the limit—earn a degree in physical education to improve your physique and a degree in English to overcome your dyslexia. You join the military, passing the physical and intellectual tests, determined to be strong.
By the time you’re 22, you've achieved your goal: a military firefighter, a role that combines everything you've worked for. You had to transform yourself completely, to educate yourself relentlessly. And because of that, you stand out—not just for your physical strength but for your ability to teach. You win awards as an educator, get invited to TV shows and interviews for your social projects, and use humor to educate about life. You dedicate far more than the r |
ffbdfc23-a0df-4638-8a15-32b04197461c | trentmkelly/LessWrong-43k | LessWrong | Failure By Analogy
Previously in series: Logical or Connectionist AI?
Followup to: Surface Analogies and Deep Causes
> "One of [the Middle Ages'] characteristics was that 'reasoning by analogy' was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
> -- Edsger W. Dijkstra
>
> <geoff> neural nets are over-rated
> <starglider> Their potential is overrated.
> <geoff> their potential is us
> -- #sl4
Wasn't it in some sense reasonable to have high hopes of neural networks? After all, they're just like the human brain, which is also massively parallel, distributed, asynchronous, and -
Hold on. Why not analogize to an earthworm's brain, instead of a human's?
A backprop network with sigmoid units... actually doesn't much resemble biology at all. Around as much as a voodoo doll resembles its victim. The surface shape may look vaguely similar in extremely superficial aspects at a first glance. But the interiors and behaviors, and basically the whole thing apart from the surface, are nothing at all alike. All that biological neurons have in common with gradient-optimization ANNs is... the spiderwebby look.
And who says that the spiderwebby look is the important fact about biology? Maybe the performance of biological brains has nothing to do with being made out of neurons, and everything to do with the cumulative selection pressure put into the design. Just like how the performance of biological brains has little to do with proteins being held together by van der Waals forces, instead of the much stronger covalent bonds that hold together silicon. Sometimes evolution gets stuck with poor building material, and it can't refactor because it can't execute simultaneous changes to migrate the design. If biology does some neat tricks with chemistry, it's be |
bd3319b1-aede-4ec7-ab04-d1034fcad7e4 | trentmkelly/LessWrong-43k | LessWrong | Artificial general intelligence is here, and it's useless
*Disclaimer: I originally wrote this for my own blog and as an editorial for skynet today, a few weeks ago I started reading LW and I thought people here might enjoy it, I think it's in the spirit of the conversation here. So I made a few edits and the text bellow is the result.*
One of the most misunderstood ideas that's polluting the minds of popular "intellectuals", many of them seemingly accustomed with statistics and machine learning, is the potential or threat that developing an artificial general intelligence (AGI) would present to our civilization.
This myth stems from two misunderstanding of reality.
One such misunderstanding is related to the refinement and extensibility of current ML algorithms and FPA hardware, however discussing that always leads of people arguing "what ifs" (e.g What if cheap quantum computers with very efficient I/O become a thing ?), thus, I won't pursue that line of thought here.
A second, more easy to debunk misunderstanding, is related to the practicality of an AGI. Assuming that our wildest dreams of hardware were to come true… would we be able to create and AGI and would this AGI actually have any effect upon the world other than being a fun curiosity ?
1. We already have AGIs
AGI is here, as of the time of writing there are an estimate of 7,714,576,923 AGI algorithms residing upon our planet. You can use the vast majority of them for less than 2$/hour. They can accomplish the vast majority intellectual task that can be well-defined by humans, not to mention they can invent new tasks themselves.
They are capable of creating new modified version of themselves, updating their own algorithms, sharing their algorithms with other AGIs and learning new complex skills. To add to that, they are energy efficient, you can keep one running for optimally for 5 to 50$/day depending on location, much less than your average server farm used to train a complex ML model.
This is a rather obvious observation, but one that needs to be note |
db007bdf-bd3a-46ba-b8c6-04e1dc4a0c18 | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Roadmap towards Machine Intelligence
1 Introduction
---------------
A machine capable of performing complex tasks without requiring
laborious programming would be tremendously useful in almost any human
endeavor, from performing menial jobs for us to helping the
advancement of basic and applied research. Given the current availability of powerful hardware and large amounts
of machine-readable data, as well as the widespread interest in
sophisticated machine learning methods, the times should be ripe for
the development of intelligent machines.
Still, since “solving
AI” seems too complex a task to be pursued all at once, in the last decades the
computational community has preferred to focus
on solving relatively narrow empirical problems that are important for
specific applications, but do not address the overarching goal of
developing general-purpose intelligent machines. In this article, we
propose an alternative approach: we first define the general
characteristics we think intelligent machines should possess, and then
we present a concrete roadmap to develop them in realistic, small
steps, that are however incrementally structured in such a way that,
jointly, they should lead us close to the ultimate goal of
implementing a powerful AI.
The article is organized as follows. In Section 2
we specify the two fundamental characteristics that we consider
crucial for developing intelligence–at least the sort of intelligence
we are interested in–namely *communication* and
*learning*. Our goal is to build a machine that can learn new
concepts through communication at a similar rate as a human with
similar prior knowledge. That is, if one can easily learn how
subtraction works after mastering addition, the intelligent machine,
after grasping the concept of addition, should not find it difficult
to learn subtraction as well. Since, as we said, achieving the long-term goal of building an
intelligent machine equipped with the desired features at once seems
too difficult, we need to define intermediate targets that can lead us
in the right direction. We specify such targets in terms of simplified but self-contained
versions of the final machine we want to develop. At any time during its
“education”, the target machine should act like a stand-alone
intelligent system, albeit one that will be initially very limited in
what it can do. The bulk of our proposal (Section 3)
thus consists in the plan for an interactive learning environment
fostering the incremental development of progressively more
intelligent behavior. Section 4 briefly discusses some of the
algorithmic capabilities we think a machine should possess in order to
profitably exploit the learning environment. Finally, Section
5 situates our proposal in the broader context of past
and current attempts to develop intelligent machines. As that review
should make clear, our plan encompasses many ideas that have already
appeared in different research strands. What we believe to be novel in our approach is the way in which we are combining such ideas into a coherent program.
2 Desiderata for an intelligent machine
----------------------------------------
Rather than attempting to formally characterize intelligence, we
propose here a set of desiderata we believe to be crucial for a
machine to be able to autonomously make itself helpful to humans in
their endeavors. The guiding principles we implicitly considered in
formulating the desiderata are to minimize the complexity of the
machine, and to maximize interpretability of its behavior by humans.
### 2.1 Ability to communicate
Any practical realization of an intelligent machine will have to
*communicate* with us. It would be senseless to build a machine
that is supposed to perform complex operations if there is no way for
us to specify the aims of these operations, or to understand the
output of the machine. While other communication means could be
entertained, natural language is by far the easiest and most powerful
communication device we possess, so it is reasonable to require an
intelligent machine to be able to communicate through
language. Indeed, the intelligent machine we aim for could be seen as
a computer that can be programmed through natural language, or as the
interface between natural language and a traditional programming
language. Importantly, humans have encoded a very large portion of
their knowledge into natural language (ranging from mathematics
treatises to cooking books), so a system mastering natural language
will have access to most of the knowledge humans have assembled over
the course of their history.
Communication is, by its very nature, *interactive*: the
possibility to hold a conversation is crucial both to gather new
information (asking for explanation, clarification, instructions,
feedback, etc.) and to optimize its transmission (compare a good
lecture or studying with a group of peers to reading a book
alone). Our learning environment will thus emphasize the interactive
nature of communication.
Natural language can also channel, to a certain extent, non-linguistic
information, because much of the latter can be conveyed through
linguistic means. For example, we can use language to talk about what
we perceive with our senses, or to give instructions on how to operate
in the world (see Louwerse, 2011, among others, for evidence that language encodes
many perceptual aspects of our knowledge). Analogously, in the simulation
we discuss below, a Teacher uses natural language to teach the Learner
(the intelligent machine being trained) a more limited and explicit
language (not unlike a simple programming language) in which the
Learner can issue instructions to its environment through the same
communication channels it uses to interact with the
Teacher. The intelligent machine can later be instructed to browse the
Internet by issuing commands in the appropriate code through its usual
communication channels, mastering in this way a powerful tool to
interact with the world at large. Language can also serve as an
interface to perceptual components, and thus update the machine about
its physical surroundings. For example, an object recognition system
could transform raw pixel data into object labels, allowing the
machine to “see” its real-life environment through a
controlled-language modality.
Still, we realize that our focus on the language-mediated side of
intelligence may limit the learning machine in the development of
skills that we naturally gain by observing the world around us. There
seems to be a fundamental difference between the symbolic
representations of language and the continuous nature of the world as we perceive it. If this will turn out to be an issue, we
can extend the training phase of the machine (its development in a
simulated environment such as the one we will sketch below) with tasks
that are more perception-oriented. While in the tasks we will describe
here the machine will be taught how to use its I/O channels to
receive and transmit linguistic symbols, the machine could also be exposed,
through the same interface,
to simple encodings (bit streams) of continuous input
signals, such as images. The machine could thus be trained, first, to
understand the basic properties of continuous variables, and then to
perform more complex operations in a continuous space, such as
identifying shapes in 2D images. Note that including such tasks would
not require us to change the design of our learning framework, only to
introduce novel scripts.
One big advantage of the single-interface approach we are currently
pursuing is that the machine only needs to be equipped with bit-based
I/O channels, thus being maximally simple in its interface. The
machine can learn an unlimited number of new codes enabling it to
interface, through the same channels, with all sorts of interlocutors
(people, other machines, perceptual data encoded as described above,
etc.). By equipping the machine with only a minimalistic I/O bit-stream
interface, we ensure moreover that no prior knowledge about the
challenges the machine will encounter is encoded into the structure of
the input and output representations, harming the generality of the
strategies the machine will learn (compare the difficulty of
processing an image when it’s already encoded into pixels vs. as raw
bits).
Finally, while we propose language as the general *interface* to
the machine, we are agnostic about the nature of the internal
representations the machine must posit to deal with the challenges it
faces. In particular, we are not making claims about the internal
representations of the machine being based on an interpretable
“language of thought” (Fodor, 1975). In other words, we are not
claiming that the machine should carry out its internal reasoning in a
linguistic form: only that its input and output are linguistic in
nature.
To give a few examples of how a communication-based intelligent
machine can be useful, consider a machine helping a
scientist with research. First of all, the communication-endowed
machine does not need to pre-encode a large static database of facts,
since it can retrieve the relevant information from the Internet. If
the scientist asks a simple question such as: *What is the
density of gold?*, the machine can search the Web to answer:
*19.3 g/cm3*.
Most questions will however require the machine to put together
multiple sources of information. For example, one may ask: *What
is a good starting point to study reinforcement learning?*. The
machine might visit multiple Web sites to search for materials and get
an idea of their relative popularity. Moreover, interaction can make
even a relatively simple query such as the latter more successful. For
example, the machine can ask the user if she prefers videos or
articles, what is the mathematical background to be assumed, etc.
However, what we are really interested in is a machine that can
significantly speed up research progress by being able to address
questions such as: *What is the most promising direction to cure
cancer, and where should I start to meaningfully contribute?* This
question may be answered after the machine reads a significant number
of research articles online, while keeping in mind the perspective of
the person asking the question. Interaction will again play a central
role, as the best course of action for the intelligent machine might
involve entering a conversation with the requester, to understand her
motivation, skills, the time she is willing to spend on the topic,
etc. Going further, in order to fulfill the request above, the machine
might even conduct some independent research by exploiting information
available online, possibly consult with experts, and direct the
budding researcher, through multiple interactive sessions, towards
accomplishing her goal.
### 2.2 Ability to learn
Arguably, the main flaw of “good old” symbolic AI research
(Haugeland, 1985) lay in the assumption that it would be
possible to program an intelligent machine largely by hand. We believe
it is uncontroversial that a machine supposed to be helping us in a
variety of scenarios, many unforeseen by its developers, should be
endowed with the capability of *learning*. A machine that does
not learn cannot adapt or modify itself based on experience, as it
will react in the same way to a given situation for its whole
lifetime. However, if the machine makes a mistake that we want to
correct, it is necessary for it to change its behavior–thus,
learning is a mandatory component.
Together with learning comes *motivation*. Learning allows the
machine to adapt itself to the external environment, helping it to
produce outputs that maximize the function defined by its
motivation. Since we want to develop machines that make themselves
useful to humans, the motivation component should be directly
controlled by users through the communication channel. By specifying
positive and negative rewards, one may shape the behavior of the
machine so that it can become useful for concrete tasks (this is very
much in the spirit of reinforcement learning, see,
e.g., Sutton and Barto, 1998, and discussion in Section 5 below).
Note that we will often refer to human learning as a source of insight
and an ideal benchmark to strive for. This is natural, since we would
like our machines to develop human-like intelligence. At the same
time, children obviously grow in a very different environment from the
one in which we tutor our machines, they soon develop a sophisticated
sensorimotor system to interact with the world, and they are innately
endowed with many other cognitive capabilities. An intelligent
machine, on the other hand, has no senses, and it will start its life
as a *tabula rasa*, so that it will have to catch up not only on
human ontogeny, but also on their phylogeny (the history of AI
indicates that letting a machine learn from data is a more effective
strategy than manually pre-encoding “innate” knowledge into it). On
the positive side, the machine is not subject to the same biological
constraints of children, and we can, for example, expose it to
explicit tutoring at a rate that would not be tolerable for
children. Thus, while human learning can provide useful inspiration,
we are by no means trying to let our machines develop in human-like
ways, and we claim no psychological plausibility for the methods
we propose.
3 A simulated ecosystem to educate communication-based intelligent machines
----------------------------------------------------------------------------
In this section, we describe a simulated environment designed to teach
the basics of linguistic interaction to an intelligent machine, and how to use it to learn to
operate in the world. The simulated ecosystem should be seen as a
“kindergarten” providing basic education to intelligent
machines. The machines are trained in this controlled environment to later
be connected to the real world in order to learn how to help humans with
their various needs.
The ecosystem I/O channels are controlled by an automatic mechanism,
avoiding the complications that would arise from letting the machine
interact with the “real world” from the very beginning, and allowing
us to focus on challenges that should directly probe the effectiveness
of new machine learning techniques.
The environment must be challenging enough to force the
machine to develop sophisticated learning strategies (essentially, it
should need to “learn how to learn”). At the same time, complexity
should be manageable, i.e., a human put into a similar environment
should not find it unreasonably difficult to learn to communicate and
act within it, even if the communication takes place in a language
the human is not yet familiar with. After mastering the basic
language and concepts of the simulated environment, the machine should
be able to interact with and learn from human teachers. This puts several restrictions on the kind of learning the machine
must come to be able to perform: most importantly, it will need to be
capable to extract the correct generalizations from just a few
examples, at a rate comparable to human learners.
Our ecosystem idea goes against received wisdom from the last decades
of AI research. This received wisdom suggests that systems should be
immediately exposed to real-world problems, so that they don’t get
stuck into artificial “blocks worlds” (Winograd, 1971), whose
experimenter-designed properties might differ markedly from those
characterizing realistic setups. Our strategy is based on the
observation, that we will discuss in Section
4, that current machine learning
techniques cannot handle the sort of genuinely incremental learning of
algorithms that is necessary for the development of intelligent
machines, because they lack the ability to store learned skills in
long-term memory and compose them. To bring about an advance in such
techniques, we have of course many choices. It seems sensible to pick
the simplest one. The environment we propose is sufficient to
demonstrate the deficiencies of current techniques, yet it is simple
enough that we can fully control the structure and nature of the tasks
we propose to the machines, make sure they have a solution, and use
them to encourage the development of novel techniques. Suppose we were
instead to work in a more natural environment from the very beginning,
for example from video input. This would impose large infrastructure
requirements on the developers, it would make data pre-processing a
big challenge in itself, and training even the simplest models would
be very time-consuming. Moreover, it would be much more difficult to
formulate interrelated tasks in a controlled way, and define the
success criterion. Once we have used our ecosystem to develop a system
capable of learning compositional skills from extremely sparse reward,
it should be simple to plug in more natural signals, e.g., through
communication with real humans and Internet access, so that the system
would learn how to accomplish the tasks that people really want it to
perform.
The fundamental difference between our approach and classic AI
blocks worlds is that we do not intend to use our ecosystem to script an
exhaustive set of functionalities, but to teach the machine the
fundamental ability to *learn how to efficiently learn* by creatively
combining already acquired skills. Once such machine gets connected
with the real world, it should quickly learn to perform any new task its Teacher
will choose. Our environment can be seen as analogous to
explicit schooling. Pupils are taught math in primary school through
rather artificial problems. However, once they have interiorized basic
math skills in this setup, they can quickly adapt them to
the problems they encounter in their real life, and rely on them to rapidly
acquire more sophisticated mathematical techniques.
### 3.1 High-level description of the ecosystem
#### Agents
To develop an artificial system that is able to
incrementally acquire new skills through linguistic interaction, we
should not look at the training data as a static set of labeled
examples, as in common machine learning setups. We propose instead a
dynamic ecosystem akin to that of a computer game. The Learner (the
system to be trained) is an actor in this ecosystem.
The second fundamental agent in the ecosystem is the Teacher. The
Teacher assigns tasks and rewards the Learner for desirable behaviour,
and it also provides helpful information, both spontaneously and in
response to Learner’s requests. The Teacher’s behaviour is entirely
scripted by the experimenters. Again, this might be worryingly
reminiscent of entirely hand-coded good-old AIs. However, the Teacher
need not be a very sophisticated program. In particular, for each task
it presents to the learner, it will store a small set of expected
responses, and only reward the Learner if its behaviour exactly
matches one response. Similarly, when responding to Learner’s
requests, the Teacher is limited to a fixed list of expressions it
knows how to respond to. The reason why this suffices is that the aim
of our ecosystem is to kickstart the Learner’s efficient learning
capabilities, and not to provide enough direct knowledge for it to be
self-sufficient in the world. For example, given the limitations of the
scripted Teacher, the Learner will only be able to acquire a very
impoverished version of natural language in the ecosystem. At the same
time, the Learner should acquire powerful learning and generalization
strategies. Using the minimal linguistic skills and strong learning
abilities it acquired, the Learner should then be able to extend its
knowledge of language fast, once it is put in touch with actual human
users.
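As a minimal sketch of how little machinery such a Teacher needs (the prompts, prefixes, and Learner interface below are invented for illustration; the proposal fixes only the exact-match reward rule):

```python
class ScriptedTeacher:
    """Stores a few accepted responses per task and rewards the Learner
    only when its reply exactly matches one of them."""

    TASKS = [
        ("T: say apple.", {"apple", "apple."}),
        ("T: repeat twice: ni.", {"ni ni", "ni ni."}),
    ]

    def run_session(self, learner):
        for prompt, accepted in self.TASKS:
            reply = learner.respond(prompt)     # read from the Learner's output channel
            reward = 1 if reply in accepted else -1
            learner.observe(f"R: {reward}")     # reward arrives on the input channel
```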
Like in classic text-based adventure games (Wikipedia, 2015b), the Environment is
entirely linguistically defined, and it is explored by the Learner by
giving orders, asking questions and receiving feedback (although
graphics does not play an active role in our simulation, it is
straightforward to visualize the 2D world in order to better track
the Learner’s behaviour, as we show through some examples below). The
Environment is best seen as the third fundamental agent in the
ecosystem. The Environment behaviour is also scripted. However, since
interacting with the Environment serves the purpose of observation and
navigation of the Learner surroundings (“sensorimotor experience”),
the Environment uses a controlled language that, compared to that of
the Teacher, is more restricted, more explicit and less ambiguous. One
can thus think of the Learner as a higher-level programming
language, that accepts instructions from the programmer (the Teacher)
in a simple form of natural language, and converts them into the machine code
understood by the Environment.
In the examples to follow, we assume the world defined by the Environment to be split into discrete
cells that the Learner can traverse horizontally and vertically. The
world includes barriers, such as walls and water, and a number of
objects the Learner can interact with (a pear, a mug, etc).
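A toy rendering of such an Environment, to fix ideas; the map, symbols, and command strings are all hypothetical, since the proposal specifies only discrete cells, barriers, and textual interaction.

```python
WORLD = ["##########",
         "#........#",
         "#..~~..p.#",   # '~' water, 'p' a pear
         "#..~~....#",
         "#.m....L.#",   # 'm' a mug, 'L' the Learner
         "##########"]

MOVES = {"move north": (-1, 0), "move south": (1, 0),
         "move west": (0, -1), "move east": (0, 1)}

def step(world, pos, command):
    """Apply one textual command; return (new_position, message)."""
    if command not in MOVES:
        return pos, "E: I do not understand."
    dr, dc = MOVES[command]
    r, c = pos[0] + dr, pos[1] + dc
    if world[r][c] in "#~":          # walls and water are barriers
        return pos, "E: you are blocked."
    return (r, c), "E: you moved."
```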
Note that, while we do not explore this possibility here, it might be
useful to add other actors to the simulation: for example, training
multiple Learners in parallel, encouraging them to teach/communicate
with each other, while also interacting with the scripted Teacher.
#### Interface channels
The Learner experience is entirely
defined by generic *input* and *output* channels. The
Teacher, the Environment and any other language-endowed agent write to
the input stream. Reward (a scalar value, as discussed next) is also
written to the input stream (we assume, however, that the Learner does
not need to discover which bits encode reward, as it will need this
information to update its objective function). Ambiguities are avoided
by prefixing a unique string to the messages produced by each actor
(e.g., messages from the Teacher might be prefixed by the string
T:, as in our examples below). The Learner writes to its
output channel, and it is similarly taught to use unambiguous prefixes
to address the Teacher, the Environment and any other agent or service
it needs to communicate with. Having only generic input and output
communication channels should facilitate the seamless addition of new
interactive entities, as long as the Learner is able to learn the
language they communicate in.
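A sketch of this framing convention follows; the specific prefixes are assumptions, as the proposal requires only that each actor's messages carry a unique prefix on the shared streams.

```python
PREFIXES = {"T:": "teacher", "E:": "environment", "R:": "reward"}

def parse_incoming(line):
    """Split one line of the Learner's single input stream into
    (speaker, payload)."""
    for prefix, speaker in PREFIXES.items():
        if line.startswith(prefix):
            return speaker, line[len(prefix):].strip()
    return "unknown", line

def compose_outgoing(addressee, payload):
    """Address a message on the Learner's single output stream."""
    prefix = {"teacher": "T:", "environment": "E:"}[addressee]
    return f"{prefix} {payload}"
```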
#### Reward
Reward can be positive or negative (1/-1), the
latter to be used to speed up instruction by steering away the Learner
from dead ends, or even damaging behaviours. The Teacher, and later
human users, control reward in order to train the Learner. We might
also let the Environment provide feedback through hard-coded rewards,
simulating natural events such as eating or getting hurt. Like in
realistic biological scenarios, reward is sparse, mostly being
awarded after the Learner has accomplished some task. As
intelligence grows, we expect the reward to become *very* sparse,
with the Learner able to elaborate complex plans that are only
rewarded on successful completion, and even displaying some degree of
self-motivation. Indeed, the Learner should be taught that short-term
positive reward might lead to loss at a later stage (e.g., hoarding on
food with poor nutrition value instead of seeking further away for
better food), and that sometimes reward can be maximized by engaging
in activities that in the short term provide no benefit (learning to
read might be boring and time-consuming, but it can enormously speed
up problem solving–and the consequent reward accrual– by making the
Learner autonomous in seeking useful information on the
Internet). Going even further, during the Learner “adulthood”
explicit external reward could stop completely. The Learner will no
longer be directly motivated to learn in new ways, but ideally the
policies it has already acquired will include strategies such as
curiosity (see below) that would lead it to continue to acquire new
skills for its own sake. Note that, when we say that reward could
stop completely, we mean that users do not need to provide explicit
reward, in the form of a scalar value, to the Learner. However, from a
human perspective, we can look at this as the stage in which the
Learner has interiorized its own sources of reward, and no longer
needs external stimuli.
We assume binary reward so that human users need not worry about relative *amounts* of reward to give to the Learner (if they do want to control the amount of reward, they can
simply reward the Learner multiple times). The Learner objective
should however maximize *average reward over time*, naturally
leading to different degrees of cumulative reward for different
courses of action (this is analogous to the notion of expected
cumulative reward in reinforcement learning, which is a possible way
to formalize the concept). Even if two solutions to a task are
rewarded equally on its completion, the faster strategy will be
favored, as it leaves the Learner more time to accumulate further
reward. This automatically ensures that efficient solutions are
preferred over wasteful ones. Moreover, by measuring time
independently from the number of simulation steps, e.g., using simple
wall-clock time, one should penalize inefficient learners spending a
long time performing offline computations.
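The objective itself can be stated in a few lines; here is a sketch, with timestamps in wall-clock seconds so that slow offline computation is penalized as suggested.

```python
def average_reward_rate(events):
    """Average reward per second; `events` is a chronological list of
    (wall_clock_seconds, reward) pairs."""
    if len(events) < 2:
        return 0.0
    total = sum(reward for _, reward in events)
    return total / (events[-1][0] - events[0][0])

# Two equally rewarded solutions: the faster one scores higher, because
# it leaves more time to accumulate further reward.
slow = [(0.0, 0), (100.0, 1)]
fast = [(0.0, 0), (10.0, 1)]
assert average_reward_rate(fast) > average_reward_rate(slow)
```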
As already mentioned, our approach to reward-based learning shares
many properties with reinforcement learning. Indeed, our setup fits
into the general formulation of the reinforcement learning problem
(Kaelbling et al., 1996; Sutton and Barto, 1998)–see Section 5 for further discussion of this point.
#### Incremental structure
In keeping with the game idea, it is
useful to think of the Learner as progressing through a series of
levels, where skills from earlier levels are required to succeed in
later ones. Within a level, there is no need to impose a strict
ordering of tasks (even when our intuition suggests a natural
incremental progression across them), and we might let
the Learner discover its own optimal learning path
by cycling multiple times through blocks of them.
At the beginning, the Teacher trains the Learner to perform very
simple tasks in order to kick-start linguistic communication and the
discovery of very simple algorithms. The Teacher first rewards the
Learner when the latter repeats single characters, then words,
delimiters and other control strings. The Learner is moreover taught
how to repeat and manipulate longer sequences. In a subsequent block of tasks,
the Teacher leads the Learner to develop a semantics for linguistic
symbols, by encouraging it to associate linguistic expressions with
actions. This is achieved through practice sessions in which the
Learner is trained to repeat strings that function as Environment
commands, and it is rewarded only when it takes notice of the effect
the commands have on its state (we present concrete examples
below). At this stage, the Learner should become able to associate
linguistic strings to primitive moves and actions (*turn left*). Next, the Teacher will assign tasks involving action sequences
(*find an apple*), and the Learner should convert them into
sets of primitive commands (simple “programs”). The Teacher will,
increasingly, limit itself to specify an abstract end goal
(*bring back food*), but not recipes to
accomplish it, in order to spur creative thinking on behalf of the
Learner (e.g., if the Learner gets trapped somewhere while looking for
food, it may develop a strategy to go around obstacles).
In the process of learning to parse and
execute higher-level commands, the Learner should also be trained to
ask clarification questions to the Teacher (e.g., by initially
granting reward when it spontaneously addresses the Teacher, and by
the repetition-based strategy we illustrate in the examples
below). With the orders becoming more general and complex, the
language of the Teacher will also become (within the limits of what
can be reasonably scripted) richer and more ambiguous, challenging the
Learner capability to handle restricted specimens of common natural
language phenomena such as polysemy, vagueness, anaphora and
quantification.
To support user scenarios such as the ones we envisaged in Section
[2](#S2 "2 Desiderata for an intelligent machine ‣ A Roadmap towards Machine Intelligence") above and those we will discuss at the end of
this section, the Teacher should eventually teach the Learner how to
“read” natural text, so that the Learner, given access to the
Internet, can autonomously seek for information online. Incidentally,
notice that once the machine can read text, it can also exploit distributional
learning from large amounts of text (Erk, 2012; Mikolov et al., 2013; Turney and Pantel, 2010)
to induce word and phrase representations addressing some of the
challenging natural language phenomena we just mentioned, such as
polysemy and vagueness.
The Learner must take its baby
steps first, in which it is carefully trained to accomplish simple tasks
such as learning to compose basic commands. However, for the Learner
to have any hope to develop into a fully-functional intelligent
machine, we need to aim for a “snow-balling” effect to soon take place, such
that later tasks, despite being inherently more complex, will require
a lot less explicit coaching, thanks to a combinatorial explosion in
the background abilities the Learner can creatively compose (like for
humans, learning how to surf the Web should take less time than
learning how to spell).
#### Time off
Throughout the simulation, we
foresee phases in which the Learner is free to interact with the
Environment and the Teacher without a defined task. Systems should
learn to exploit this time off for undirected exploration, that should
in turn lead to better performance in active training stages, just like,
in the dead phases of a video-game, a player is more likely to try out
her options than to just sit waiting for something to happen, or just as,
when arriving in a new city, we would rather go sightseeing than stay in the
hotel. Since curiosity is beneficial in many situations, such
behaviour should naturally lead to higher later rewards, and thus be
learnable. Time off can also be used to “think” or “take a nap”, in
which the Learner can replay recent experiences and possibly update its
inner structure based on a more global view of the knowledge it has accumulated,
given the extra computational resources that the free time policy offers.
#### Evaluation
Learners can be quantitatively evaluated and
compared in terms of the number of new tasks they accomplish successfully in a fixed amount of time,
a measure in line with the reward-maximization-over-time objective we
are proposing. Since the interactive, multi-task environment setup does not
naturally support a distinction between a training and a test phase,
the machine must carefully choose reward-maximizing actions from the
very beginning. In contrast, evaluating the machine only on its final behavior would
overlook the number of attempts it took to reach the solution. Such
alternative evaluation would favor models which are simply able to
memorize patterns observed in large amounts of training data. In many
practical domains, this approach is fine, but we are interested in
machines capable of learning truly general problem-solving
strategies. As the tasks become incrementally more difficult, the
amount of required computational resources for naive
memorization-based approaches scales exponentially, so only a machine
that can efficiently generalize can succeed in our environment. We
will discuss the limitations of machines that rely on memorization
instead of algorithmic learning further in Section
[4.3](#S4.SS3 "4.3 Computational properties of intelligent machines ‣ 4 Towards the development of intelligent machines ‣ A Roadmap towards Machine Intelligence") below.
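As a minimal illustration of this evaluation scheme, the Python sketch below scores a Learner by the number of distinct tasks it solves within a fixed budget of steps; the Learner/Environment interfaces and all numeric constants are invented stubs, not part of the proposal.

```python
from dataclasses import dataclass
import random

# Minimal stubs so the sketch runs end to end; a real Learner/Environment
# would be far richer. All names here are illustrative assumptions.
@dataclass(frozen=True)
class Task:
    name: str

class RandomLearner:
    def act(self, observation):
        return random.choice(["move", "look", "pick"])

class ToyEnvironment:
    def observation(self):
        return "grass"
    def step(self, task, action):
        # Reward 1 with small probability, standing in for task completion.
        return 1 if random.random() < 0.01 else 0

def evaluate(learner, env, task_stream, max_steps=10_000):
    """Score = number of distinct tasks solved within a fixed step budget.
    Every step counts from the start: there is no held-out test phase."""
    solved, steps = set(), 0
    for task in task_stream:
        while steps < max_steps:
            reward = env.step(task, learner.act(env.observation()))
            steps += 1
            if reward > 0:
                solved.add(task.name)
                break
        if steps >= max_steps:
            break
    return len(solved)

print(evaluate(RandomLearner(), ToyEnvironment(),
               [Task(f"task-{i}") for i in range(50)]))
```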
We would like to foster the development of intelligent machines by
employing our ecosystem in a public competition. Given what we just
said, the competition would not involve distributing a static set of
training/development data similar in nature to the final test
set. We foresee instead a setup in which developers have access to the
full pre-programmed environment for a fixed amount of time. The
Learners are then evaluated on a set of new tasks that are
considerably different from the ones exposed in the development phase.
Examples of how test tasks might differ from those encountered during
development include the Teacher speaking a new language, a different
Environment topography, new obstacles and objects with new
affordances, and novel domains of endeavor (e.g., test tasks might
require selling and buying things, when the Learner was not previously
introduced to the rules of commerce).
### 3.2 Early stages of the simulation
#### Preliminaries
At the very beginning, the Learner has to
learn to pay attention to the Teacher, to identify the basic units of
language (find regularity in bit patterns, learn characters, then
words and so on). It must moreover acquire basic sequence repetition and
manipulation skills, and develop skills to form memory and learn efficiently.
These very initial stages of learning are extremely important, as we believe they
constitute the building blocks of intelligence.
However, as bit sequences do not make for easy readability, we focus here on an
immediately following phase, in which the Learner has already learned
how to pay attention to the Teacher and manipulate character
strings. We show how the Teacher guides the Learner from these basic skills to
being able to solve relatively sophisticated Environment navigation
problems by exploiting interactive communication. Because of the
“fractal-like” structure we envisage in the acquisition of
increasingly higher-level skills, these steps will illustrate many of
the same points we could have demonstrated through the lower-level
initial routines. The tasks we describe are also incrementally
structured, starting with the Learner learning to issue Environment
commands, then being led to take notice of the effect these commands
have, then understanding command structure, in order to generalize
across categories of actions and objects, leading it in turn to
being able to process higher-level orders. At this point, the Learner
is introduced to interactive communication.
Note that we only illustrate here “polite” turn-taking, in which
messages do not overlap, and agents start writing to the communication
channels only after the end-of-message symbol has been issued. We do
not however assume that interaction must be constrained in this
way. On the contrary, there are advantages in letting entities write
to the communication channels whenever they want: for example, the
Teacher might interrupt the Learner to prevent it from completing a
command that would have disastrous consequences, or the Learner may
interrupt the Teacher as soon as it figured out what to do, in order
to speed up reward (a simple priority list can be defined to solve
conflicts, e.g., Teacher’s voice is “louder” than that of
Environment, etc.).
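One possible realization of such a priority list is sketched below; the three-level ordering and the channel names are illustrative assumptions of ours.

```python
# Hedged sketch of a priority rule for overlapping messages: the Teacher's
# "voice" outranks the Environment's, which outranks the Learner's own
# output echo. The three-level ordering is an illustrative assumption.
CHANNEL_PRIORITY = {"teacher": 0, "environment": 1, "learner": 2}

def arbitrate(pending_messages):
    """Given (channel, message) pairs written in the same time step,
    deliver them in priority order so the loudest voice is heard first."""
    return sorted(pending_messages, key=lambda cm: CHANNEL_PRIORITY[cm[0]])

print(arbitrate([("environment", "you moved."),
                 ("teacher", "stop!")]))
# [('teacher', 'stop!'), ('environment', 'you moved.')]
```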
Note also that our examples are meant to illustrate specific instances
from a larger set of trials following similar templates, that should
involve a variety of objects, obstacles and possible
actions. Moreover, the presented examples do not aim to exhaustively
cover all learning-fostering strategies that might be implemented in
the ecosystem. Finally, we stress again that we are not thinking of a
strict ordering of tasks (not least because it would be difficult to
fix, *a priori*, an ordering that is based on some objective,
learning-model-independent difficulty criterion), but rather about
clusters of related tasks organized into levels, such that the Learner
could cycle through them and discover its own optimal progression to
solve them.
#### Notation
We present the input to the
Learner (messages from Teacher and Environment, as well as Reward,
prefixed by T:, E: and R:, respectively) in
the left column, and the Learner output on the right (messages
addressed to the Teacher are prefixed by @T:, those for the
Environment are prefixed by @E:). We assume different I/O
identification prefixes to disambiguate meta-linguistic statements
(e.g., “T: give order @E: I move.” in the
first exchange below, where it’s only the Teacher speaking all the
way). The full stop serves as end-of-message delimiter (line breaks
are used for formatting purposes, and should be treated as white
space; similarly, hyphens are inserted for readability and should be
ignored). We use an ellipsis (three dots) to mark a sequence of messages on any channel we do not report (e.g., the Learner
exploring a number of solutions before zeroing in on the right
one). Finally, we interleave the conversations with comments, shown in
grey.
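For readers who prefer code to prose, the sketch below shows how a Learner-side interface might split the raw input stream into channel-tagged messages using exactly the prefixes and the full-stop delimiter of our notation; the function itself is of course only an illustration, not part of the ecosystem specification.

```python
# Hedged sketch of routing input messages by the prefixes of our notation:
# "T:" (Teacher), "E:" (Environment), "R:" (Reward). The full stop ends a
# message; the Learner's own "@T:"/"@E:" outputs are not handled here.
def parse_stream(stream):
    """Split a raw character stream into (channel, text) pairs."""
    messages = []
    for chunk in stream.split("."):
        chunk = chunk.strip()
        if not chunk:
            continue
        for prefix, channel in (("T:", "teacher"), ("E:", "environment"),
                                ("R:", "reward")):
            if chunk.startswith(prefix):
                messages.append((channel, chunk[len(prefix):].strip()))
                break
    return messages

print(parse_stream("T: move and look. E: you moved. R: 1."))
# [('teacher', 'move and look'), ('environment', 'you moved'), ('reward', '1')]
```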
#### The Learner learns to issue Environment commands
We start
with the Teacher teaching the Learner to produce Environment commands,
first by rote repetition channeled to the right message recipient
(@E), then by encouraging it to translate from the Teacher’s
own natural language to the controlled Environment language. At this stage, reward is not tied to understanding what happened (the
“semantics” of the commands). Still, a good Learner might take note
of the actions triggered by the commands. The following are example
conversations that may happen during the learning process:
| *Input:* | *Output:* |
| --- | --- |
| T: give order @E: I move. | |
| | fdakjf13ktj53u9gs5 5 3#. |
| character soup without addressee, no reward |
| … |
| | @T: dsfjkasjfk;aljsf. |
| character soup with wrong addressee, no reward |
| … |
| | @E: fglk4$3wfgg. |
| right addressee, character soup, no reward |
| … |
| | give order @E: I move. |
| E: you moved. | |
| R: 1. | |
| Learner moves and is rewarded, although the solution is inefficient |
| T: give order @E: I move. | |
| … |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
| Learner figured out minimal string leading to reward |
Further skills the Learner will need to develop include segmentation
of Teacher’s input, and production of separate commands for the
Environment. At first, this may be rather trivial, as shown in this
example:
| *Input:* | *Output:* |
| --- | --- |
| T: give orders @E: I move and @E: I look. | |
| | @E: I move and @E: I look. |
| no reward, because Environment does not parse complex commands |
| … |
| T: give orders @E: I move and @E: I look. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I look. |
| E: there is an apple. | |
| R: 1. | |
Eventually, the Learner should learn to switch fluently
between Teacher and Environment communication codes, translating the
Teacher’s linguistic messages into motor commands to the Environment
(an example of such exchange is visualized in Figure
[1](#S3.F1 "Figure 1 ‣ The Learner learns to issue Environment commands ‣ 3.2 Early stages of the simulation ‣ 3 A simulated ecosystem to educate communication-based intelligent machines ‣ A Roadmap towards Machine Intelligence")).
| *Input:* | *Output:* | *Input:* | *Output:* |
| --- | --- | --- | --- |
| T: move and look. | | E: you moved. | |
| | @E: I move. | | @E: I look. |
| | | E: there is an apple. | |
| | | R: 1. | |
Figure 1: Example of how the simulation might be visualized to help
developers track Learner’s behaviour. The left panel represents the
Environment at the stage in which Learner issues the move command,
the right panel depicts the Environment after this command is
executed. A lighter shade of green marks the cell the Learner
occupies, and the one it is turned towards. These cells are directly observable.
(Best viewed in color.)



*Input:*
*Output:*
*Input:*
*Output:*
*Input:*
*Output:*
T: move, turn right and move.
E: you moved.
E: you turned right.
@E: I move.
@E: I turn right.
@E: I move.
E: you can’t move.
Figure 2: The Teacher gives the order *move, turn right and move*. The
Learner is however blocked by a barrier after it
turns right, and reward is delayed, because the order is not executed.
(Best viewed in color.)
#### Associating language to actions
Next, the Learner is
encouraged to take notice of what happens in the Environment and
associate the Teacher’s language to states of the world, by being
rewarded only when its orders result in the corresponding actions
being effectively carried out in the world. An example task is
illustrated in Figure [2](#S3.F2 "Figure 2 ‣ The Learner learns to issue Environment commands ‣ 3.2 Early stages of the simulation ‣ 3 A simulated ecosystem to educate communication-based intelligent machines ‣ A Roadmap towards Machine Intelligence"), where the Learner will
not get reward because it is no longer sufficient for it to correctly translate
instructions into Environment commands: the commands must also have
the intended effect on the world. In this specific case, order
execution fails because the Learner encountered a barrier.
#### Learning to generalize
By adding sufficient variety to the
trials above, the Learner should start to get hold of the
compositional nature of commands (*turn right* and *turn
left* share systematic characteristics, e.g., they both cause a
change in what the Learner sees; the Learner can pick X only when X is
in the cell in front of it, etc.). The next trials help the Learner further move
away from literal command string memorization, and generalize across
categories of actions and objects (*turn left/right* are
instances of *turning*, *apples* and *pears* are
*objects*, etc.). This is demonstrated in the following example:
| *Input:* | *Output:* |
| --- | --- |
| T: turn and move. | |
| | @E: I turn. |
| E: (no action) | |
| … |
| | @E: I turn left. |
| E: you turned left. | |
| | @E: I move. |
| E: you can’t move. | |
| there is a barrier in front of Learner |
| … |
| | @E: I turn right. |
| E: you turned right. | |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
In the next example, the Learner is asked to pick some object that is
in front of it, without specifying what the object is (we assume that
the simple, unambiguous language of the Environment has no words for
general categories such as that of object, so the Learner must
eventually find out that it must name specific objects instead):
| *Input:* | *Output:* |
| --- | --- |
| T: pick an object. | |
| | @E: I pick an object. |
| E: (no response) | |
| | @E: I pick the apple. |
| E: (no response) | |
| | @E: I pick the pear. |
| E: you picked the pear. | |
| R: 1. | |
As we just illustrated, initially the Learner will apply an exhaustive
search strategy, listing all objects it knows of to find one it can
pick. Later, the Teacher should teach the *look* command, and
the Learner should be able to discover a faster strategy than
exhaustive search:
| *Input:* | *Output:* |
| --- | --- |
| T: pick an object. | |
| | @E: I look. |
| E: you see a pear. | |
| | @E: I pick the pear. |
| E: you picked the pear. | |
| R: 1. | |
Learning new strategies of course does not imply that the Learner can safely
forget the ones it has previously acquired. For example, in some tasks
the *look* command might not work (because, say, it is too dark
and the Learner cannot see what is in front of it). In such a case, an
efficient Learner should find it easy to re-use previously learned
skills, such as exhaustive search.
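The contrast between the two strategies, and the value of retaining the older one, can be rendered in a few lines of Python; the environment interface (look/pick), the object inventory, and the "dark cell" failure mode are invented stand-ins for the scripted Environment.

```python
# Illustrative contrast between the two picking strategies. The environment
# interface (look/pick) and the object inventory are our own assumptions.
KNOWN_OBJECTS = ["apple", "banana", "pear", "stone"]

def pick_exhaustive(env):
    """Older strategy: try to pick every known object until one succeeds."""
    for obj in KNOWN_OBJECTS:
        if env.pick(obj):
            return obj
    return None

def pick_with_look(env):
    """Faster strategy once 'look' is available: ask first, then pick."""
    obj = env.look()
    return obj if obj and env.pick(obj) else None

class ToyCell:
    def __init__(self, obj, dark=False):
        self.obj, self.dark = obj, dark
    def look(self):
        return None if self.dark else self.obj   # 'look' fails in the dark
    def pick(self, obj):
        return obj == self.obj

print(pick_with_look(ToyCell("pear")))               # 'pear', one look + one pick
print(pick_exhaustive(ToyCell("pear", dark=True)))   # old skill still works
```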
#### Understanding higher-level orders
The next batch of trials
aims at developing the ability to decompose higher-level commands into
a sequence of basic actions, first fixed (*move twice*), then
flexible (*find an apple*). The general teaching strategy is to
provide sets of paired trials: In the first trial, the Teacher
describes the task explicitly as a sequence of primitive actions, and
gives the high-level name of the order. In the second trial, the Teacher
issues an equivalent high-level order, and the Learner is rewarded if
it goes through the same sequence as in the first trial (we are aware
that many complex linguistic expressions are not amenable to a
straightforward decompositional analysis, see, e.g.,
Boleda and Erk, [2015](#bib.bib4): we will use some that are in order to teach
the Learner about how to decompose complex concepts into step-by-step
plans). The Teacher could start by teaching iteration counters:
| *Input:* | *Output:* |
| --- | --- |
| T: move and move. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
| T: this is called move two times. | |
| … |
| T: move two times. | |
| … |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
| … |
| T: move and move and move. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
| T: this is called move three times. | |
| … |
| T: move three times. | |
| … |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| R: 1. | |
The Learner should eventually be able to apply modifiers productively
to different actions without much further training (e.g., apply
*three times* to *turn left*).
Next, the Teacher can show how to segment high-level
tasks, such as *finding something*, into atomic action
sequences. For example, to find an apple, one can develop a simple
strategy of going forward until an apple is found. The Teacher might introduce the Learner to this new skill as follows:
| *Input:* | *Output:* |
| --- | --- |
| T: move and look. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I look. |
| E: you see grass. | |
| T: move and look. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I look. |
| E: you see an apple. | |
| R: 1. | |
| T: this is called find an apple. | |
Note that one such example is not sufficient for the Learner to
understand what exactly we call *finding an apple*, as multiple
interpretations are valid: maybe we just wanted it to execute the
given commands twice. Thus, there should be multiple training
sequences of variable length, to clarify that the task is really to
apply an until loop, i.e., to iterate *move* and
*look* commands until the object the Learner is searching for is
found.
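The skill the Learner must induce here is, in effect, the following until loop; the toy one-dimensional environment in the sketch is our own illustrative stub.

```python
# Hedged sketch of the "find an apple" skill as an until loop over the
# primitives 'move' and 'look'. The env interface is an illustrative stub.
def find(env, target, max_steps=1000):
    """Iterate move-and-look until the target object is seen."""
    for _ in range(max_steps):
        if env.look() == target:
            return True
        env.move()
    return False

class Corridor:
    """Toy 1-D world: the Learner sees whatever is in the current cell."""
    def __init__(self, cells):
        self.cells, self.pos = cells, 0
    def look(self):
        return self.cells[self.pos]
    def move(self):
        self.pos = min(self.pos + 1, len(self.cells) - 1)

print(find(Corridor(["grass", "grass", "apple"]), "apple"))  # True
```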
Further tasks can define composite skills, such as *getting an
apple*, which would consist of first finding an apple, and then
picking it up. Another generalization would be to include multiple
objects the Learner should be searching for. Adding obstacles to the
Environment would further complicate the challenge. The previous
strategy of simply going forward until an object is found will not
work anymore, as the Learner would stop at the first obstacle. We can
either expect the Learner to independently develop better search
strategies involving turning, or we can add further examples where the
Teacher shows the Learner how to improve through more direct
supervision.
#### Interactive communication
Tasks such as finding an
apple without directions might involve a long random search. Thus, we want to
kick-start interactive communication, so that the Learner can be
efficiently directed by the Teacher (and eventually by humans) to the
right cell. In a first set of trials, the Learner is rewarded for
repeating a *how to* request uttered by the Teacher (addressing
it back to the teacher via the @T: prefix), and following the
precise instructions produced by the Teacher in response to the
request:
| *Input:* | *Output:* |
| --- | --- |
| T: ask me how to find an apple. | |
| | @E: ask me how to find an apple. |
| wrong addressee, wrong request prefix |
| … |
| | @T: how to find an apple. |
| T: turn right and move and move and pick the apple. | |
| | @E: I turn right. |
| E: you turned right. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I move. |
| E: you moved. | |
| | @E: I pick the apple. |
| E: you picked the apple. | |
| R: 1. | |
Trials such as this one are later interspersed with trials where the
Learner is assigned a task it can in principle accomplish by random
search, but taking the initiative by issuing a *how to* request
and then following the precise directions provided by the Teacher will
considerably speed up reward.
#### Algorithmic knowledge
Some tasks illustrated above require
understanding basic control flow structures. For example, parsing action
modifiers implies a simple form of counting, and in order to find
things the Learner must implement an until (equivalently,
while not) loop. Similarly, the command *get out of the
grass* calls for a while loop. Efficient completion of more
advanced tasks, e.g., *return home*,
implies development of more complex algorithms, such as
path-finding. After acquiring a bank of such algorithms (which might be encoded in the internal machine representation by something very different from programming language constructs), the Learner
should be able, in advanced stages of the simulation, to productively
combine them in order to succeed in full-fledged novel missions that
involve accomplishing a large number of hierarchically-structured
sub-goals (*find somebody who will trade two apples for a banana*).
As we discussed in Section [3.1](#S3.SS1 "3.1 High-level description of the ecosystem ‣ 3 A simulated ecosystem to educate communication-based intelligent machines ‣ A Roadmap towards Machine Intelligence"), the Learner’s
functionality could essentially be interpreted as learning how to compose programs based on the
descriptions given in natural language by the Teacher. The programs
produce very simple instructions that are understood by the
Environment, which can be seen as a sort of CPU. From this point of
view, the intelligent system we aim to train is a bridge between the
Teacher (later to be replaced by a human operator) and a traditional
computer that understands only a limited set of basic commands and
needs to be manually programmed for each single task. Thus, we believe
that successful construction of intelligent machines could automate
computer programming, which will likely be done in the future simply
through communication in natural language.
### 3.3 Interacting with the trained intelligent machine
To conclude the illustration of our plan, we provide a motivating example of how an intelligent machine
schooled in our ecosystem could later make itself useful in the real
world. We consider a scenario in which the machine works as an
assistant to Alice, an elderly person living alone. Bob is Alice’s
son, and he also interacts with the machine.
We assume that, as part of its training, the machine has been taught
how to issue Internet commands and process their outcomes. In the example dialogue, we give a general idea
of how the machine would interface to the Internet, without attempting
to precisely define the syntax of this interaction. Most importantly,
the Internet queries in the example are meant to illustrate how the
machine does not need to store all the knowledge it needs to
accomplish its duties, as it can retrieve useful information from the
Web on demand, and reason about it.
| *Input:* | *Output:* |
| --- | --- |
| Bob: I just spoke to the doctor, who said my mother needs to move for at least one hour per day, please make sure she does get enough exercise. | |
| … |
| following conversation takes place the next day: |
| | @Alice: Are you in the mood for some light physical exercise today? |
| Alice: Yes, but no more than 15 minutes, please. | |
| | @INTERNET: [query search engine for keywords *elderly, light activity, 15 minutes*] |
| shortly afterwards… |
| | @Alice: I have downloaded a YouTube video with a 15-minute yoga routine, please watch it whenever you can. |
| a few hours later… |
| Alice: I’m going out to buy groceries. | |
| | @INTERNET: [query search engine with keywords *average walking speed, elderly person*] |
| | @INTERNET: [search maps app for distance to grocery stores in Alice’s neighborhood] |
| | @Alice: Hey, why don’t you walk to the Yummy Food Market today? It should take you about 45 minutes to and from, so you’ll get the rest of your daily exercise. |
| Alice: Thanks for the suggestion. | |
The communication-based intelligent machine should adapt to a
whole range of tasks it was not explicitly programmed for.
If necessary, the user can give it further
explicit positive and negative reward to motivate it to change its
behavior. This may be needed only rarely, as the machine should be
shipped to the end users after it already mastered good communication
abilities, and further development should mostly occur through
language. For example, when the user says *No, don’t do this
again*, the machine will understand that repeating the same type of
behavior might lead to negative reward, and it will change its course
of action even when no explicit reward signal is given (again, another way to put this is that the machine should associate similar linguistic strings to an “interiorized” negative reward).
The range of tasks for intelligent machines can be very diverse:
besides the everyday-life assistant we just considered, it could
explain to students how to accomplish homework assignments, gather
statistical information from the Internet to help medical researchers (see also the examples in Section [2.1](#S2.SS1 "2.1 Ability to communicate ‣ 2 Desiderata for an intelligent machine ‣ A Roadmap towards Machine Intelligence") above), find bugs in computer programs, or even write programs on its own. Intelligent machines should extend our intellectual abilities in the same way current computers already function as an extension to our memory. This should enable us to perform intellectual tasks beyond what is possible today.
We realize the intelligent machines we aim to construct could become powerful tools that
may be possibly used for dubious purposes (the same could be said about any advanced technology,
including airplanes, space rockets and computers). We believe the perception of AI is skewed by popular
science fiction movies. Instead of thinking of computers that take over the world for their own
reasons, we think AI will be realized as a tool: A machine that will extend our capability to reason and solve
complex problems. Further, given the current state of the technology, we believe any discussion on
“friendliness” of the AI is at this moment premature. We expect it will take years, if not decades,
to scale basic intelligent machines to become competitive with humans, giving us enough time to discuss
any possible existential threats.
4 Towards the development of intelligent machines
--------------------------------------------------
In this section, we will outline some of our ideas about
how to build intelligent machines that would benefit from the learning
environment we described. While we do not have a concrete proposal yet
about how exactly such machines should be implemented, we will discuss
some of the properties and components we think are needed to support
the desired functionalities. We have no pretense of completeness; we
simply want to provide some food for thought. As in the previous
sections, we try to keep the complexity of the machine at the minimum,
and only consider the properties that seem essential.
### 4.1 Types of learning
There are many types of behavior that we collectively call learning,
and it is useful to discuss some of them first.
Suppose our goal is to build an intelligent machine
working as a translator between two languages (we take here a simplified word-based view of the translation task). First, we will teach
the machine basic communication skills in our simulated environment so
that it can react to requests given by the user. Then, we will start
teaching it, by example, how various words are translated.
There are different kinds of learning happening here. To master
basic communication skills, the machine will have to understand the
concept of positive and negative reward, and develop complex
strategies to deal with novel linguistic inputs. This requires
discovery of algorithms,
and the ability
to remember facts, skills and even learning strategies.
Next, in order to translate, the machine needs to store pairs of
words. The number of pairs is unknown and a flexible growing mechanism
may be required. However,
once the machine understands how to populate the dictionary with
examples, the learning left to do is of a very simple nature: the
machine does not have to update its learning strategy, but only to
store and organize the incoming information into long-term memory
using previously acquired skills. Finally, once the vocabulary
memorization process is finished and the machine starts working as a
translator, no further learning might be required, and the functionality of the machine can be fixed.
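The sketch below caricatures this second, simpler kind of learning: once the populate-the-dictionary skill is in place, each new word pair is a plain write into a structure that can grow without bound (the word-based view of translation is, as noted, deliberately simplified).

```python
# Hedged sketch of the "simple" kind of learning described above: once the
# populate-the-dictionary skill exists, learning a new word pair is just a
# memory write into a structure that can grow as needed.
class WordTranslator:
    def __init__(self):
        self.lexicon = {}          # grows with each taught pair

    def teach(self, source, target):
        self.lexicon[source] = target   # no change to the learning strategy

    def translate(self, sentence):
        return " ".join(self.lexicon.get(w, f"<?{w}?>")
                        for w in sentence.split())

t = WordTranslator()
t.teach("dog", "chien")
t.teach("eats", "mange")
print(t.translate("dog eats"))   # 'chien mange'
```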
The more specialized and narrow the functionality of the machine is,
the less learning is required. For
very specialized forms of behavior, it should be possible to program the
solution manually. However, as we move from roles such as a simple
translator of words, a calculator, a chess player, etc., to machines
with open-ended goals, we need to rely more on general learning from a
limited number of examples.
One can see the current state of the art in machine learning as being
somewhere in the middle of this hierarchy. Tasks such as automatic
speech recognition, classification of objects in images or machine
translation are already too hard to be solved purely through manual
programming, and the best systems rely on some form of statistical
learning, where parameters of hand-coded models are estimated from
large datasets of examples. However, the capabilities of
state-of-the-art machine learning systems are severely limited, and
only allow a small degree of adaptability of the machine’s
functionality. For example, a speech recognition system will never be
able to perform speech translation by simply being instructed to do so–a human programmer is required to implement additional modules
manually.
### 4.2 Long-term memory and compositional learning skills
We see a special kind of long-term memory as the key component of the
intelligent machine. This long-term memory should be able to store
facts and algorithms corresponding to learned skills, making them accessible
on demand. In fact, even the ability to learn should be seen as a set
of skills that are stored in the memory. When the learning skills are
triggered by the current situation, they should compose new persistent
structures in the memory from the existing ones. Thus, the machine
should have the capacity to extend itself.
Without being able to store previously learned facts and skills, the
machine could not deal with rather trivial assignments, such as
recalling the solution to a task that has been encountered
before. Moreover, it is often the case that the solution to
a new task is related to that of earlier tasks. Consider for example
the following sequence of tasks in our simulated environment:
* find and pick an apple;
* bring the apple back home;
* find two apples;
* find one apple and two bananas and bring them home.
Skills required to solve these tasks include:
* the ability to search around the current location;
* the ability to pick things;
* the ability to remember the location of home and return to it;
* the ability to understand what *one* and *two* mean;
* the ability to combine the previous skills (and more) to deal
with different requests.
The first four abilities correspond to simple facts or skills to be
stored in memory: a sequence of symbols denoting something, the steps
needed to perform a certain action, etc. The last ability is an
example of a compositional *learning skill*, with the capability
of producing new structures by composing together known facts and
skills. Thanks to such learning skills, the machine will be able to
combine several existing abilities to create a new one, often on the
fly. In this way, a well-functioning intelligent machine will not need
a myriad of training examples whenever it faces a slightly new
request, but it could succeed given a single example of the new
functionality. For example, when the Teacher asks the Learner to find one
apple and two bananas and bring them home, if the Learner already
understands all the individual abilities involved, it can retrieve the
relevant compositional learning skill to put together a plan and
execute it step by step. The Teacher may even call the new skill
generated in this way *prepare breakfast*, and refer to it later
as such. Understanding this new concept should not require any further
training of the Learner, and the latter should simply store the new skill
together with its label in its long-term memory.
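A toy rendering of such a compositional long-term memory is given below; the real machine would presumably represent skills very differently, so the names and the closure-based encoding are illustrative assumptions of ours.

```python
# Toy sketch of a long-term memory holding both primitive skills and
# compositions of them. The 'prepare breakfast' example follows the text;
# everything else (names, closures) is an illustrative assumption.
class SkillMemory:
    def __init__(self):
        self.skills = {}

    def define(self, name, fn):
        self.skills[name] = fn

    def compose(self, name, *step_names):
        """Store a new skill that runs existing skills in sequence.
        One example of the composition suffices; no retraining needed."""
        steps = [self.skills[s] for s in step_names]
        self.skills[name] = lambda: [step() for step in steps]

    def run(self, name):
        return self.skills[name]()

m = SkillMemory()
m.define("find one apple", lambda: "apple")
m.define("find two bananas", lambda: "banana, banana")
m.define("bring home", lambda: "home")
m.compose("prepare breakfast",
          "find one apple", "find two bananas", "bring home")
print(m.run("prepare breakfast"))   # ['apple', 'banana, banana', 'home']
```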
As we have seen in the previous examples, the Learner can continue
extending its knowledge of words, commands and skills in a completely unsupervised
way once it manages to acquire skills that allow it to compose structures
in its long-term memory. It may be that discovering the basic
learning skills, something we usually take for granted, is much
more intricate than it seems to us. But once we are able to build a machine which
can effectively construct itself based on the incoming signals –even when
no explicit supervision in the form of rewards is given, as discussed above– we
should be much closer to the development of intelligent machines.
### 4.3 Computational properties of intelligent machines
Another aspect of the intelligent machine that deserves discussion
is the computational model that the machine will be based on. We are
convinced that such model should be unrestricted, that is, able to
represent any pattern in the data. Humans can think of and talk about
algorithms without obvious limitations (although, to apply them, they
might need to rely on external supports, such as paper and
pencil). A useful intelligent machine should be able to handle such
algorithms as well.
A more precise formulation of our claim in the context of the theory of computation
is that the intelligent machine needs to be based on a Turing-complete
computational model. That is, it has to be able to represent any
algorithm in fixed length, just like the Turing machine (the very fact
that humans can describe Turing-complete systems shows that they
are, in practical terms, Turing-complete: it is irrelevant, for our
purposes, whether human online processing capabilities are
strictly Turing-complete–what matters is that their reasoning skills,
at least when aided by external supports, are). Note that there
are many Turing-complete computational systems, and Turing machines in
particular are a lot less efficient than some alternatives, e.g.,
Random Access Machines. Thus, we are not interested in building the
intelligent machine around the concept of the Turing machine; we just
aim to use a computational model that does not have obvious
limitations in ability to represent patterns.
A system that is weaker than Turing-complete cannot represent certain
patterns in the data efficiently, which in turn means it cannot truly
learn them in a general sense. However, it is possible to memorize
such complex patterns up to some finite level of complexity. Thus,
even a computationally restricted system may appear to work as
intended up to some level of accuracy, given that a sufficient number
of training examples is provided.
For example, we may consider a sequence repetition problem. The
machine is supposed to remember a sequence of symbols and reproduce it
later. Further, let’s assume the machine is based on a model with the
representational power of finite state machines. Such system is not
capable to represent the concept of storing and reproducing a
sequence. However, it may appear to do so if we design our experiment
imperfectly. Assume there is a significant overlap between what the
machine sees as training data, and the test data we use to evaluate
performance of the machine. A trivial machine that can function as a
look-up table may appear to work, simply by storing and recalling the
training examples. With an infinite number of training examples, a
look-up-table-based machine would appear to learn any regularity. It
will work indistinguishably from a machine that can truly represent
the concept of repetition; however, it will need to have infinite
size. Clearly, such a memorization-based system will not perform well in
our setting, as we aim to test the Learner’s ability to generalize
from a few examples.
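The argument can be compressed into a few lines of code: a look-up-table learner only "repeats" sequences it has memorized, whereas a learner that represents the repetition algorithm itself handles any sequence in fixed size. The sketch is a deliberate caricature.

```python
# Illustrative caricature of the argument above: a look-up table can only
# "repeat" sequences it has memorized, while a learner representing the
# repetition *algorithm* handles any sequence, including unseen ones.
class LookupTableLearner:
    def __init__(self):
        self.table = {}
    def train(self, seq):
        self.table[seq] = seq          # memorize input -> output verbatim
    def repeat(self, seq):
        return self.table.get(seq)     # fails on anything unseen

class AlgorithmicLearner:
    def repeat(self, seq):
        return seq                     # the concept itself, in fixed size

lut = LookupTableLearner()
lut.train("abc")
print(lut.repeat("abc"))      # 'abc'  (seen in training)
print(lut.repeat("abcd"))     # None   (unseen: memorization breaks down)
print(AlgorithmicLearner().repeat("abcd"))   # 'abcd' (true generalization)
```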
Since there are many Turing-complete computational systems, one may
wonder which one should be preferred as the basis for machine
intelligence. We cannot answer this question yet; however, we
hypothesize that the most natural choice would be a system that
performs computation in a parallel way, using elementary units that
can grow in number based on the task at hand. The growing property is
necessary to support the long-term memory, if we assume that the basic
units themselves are finite. An example of an existing computational
system with many of the desired properties is the cellular automaton
of Von Neumann et al. ([1966](#bib.bib51)). We might also be inspired by
string rewriting systems, for example some versions
of the L-systems (Prusinkiewicz and Lindenmayer, [2012](#bib.bib32)).
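As a concrete taste of string rewriting, the sketch below implements Lindenmayer's classic algae L-system, showing how a two-rule grammar produces unbounded, structured growth; the connection to an actual growing learner is of course only suggestive.

```python
# Minimal L-system rewriter, illustrating how a tiny rule set yields
# unbounded, structured growth. The rules are Lindenmayer's classic
# algae example; any link to an actual learner is only suggestive.
RULES = {"A": "AB", "B": "A"}

def rewrite(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

for n in range(5):
    print(n, rewrite("A", n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
```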
An apparent alternative would be to use a
non-growing model with immensely large capacity. There is however an
important difference. In a growing model, the new cells can be
connected to those that spawned them, so that the model is naturally
able to develop a meaningful topological structure based on functional
connectivity. We conjecture that such structure would in itself
contribute to learning in a crucial way. On the other hand, it is not
clear if such topological structure can arise in a large-capacity
unstructured model. Interestingly, some of the more effective machine-learning models
available today, such as recurrent and convolutional neural networks,
are characterized by (manually constrained) network topologies that
are well-suited to the domains they are applied
to.
5 Related ideas
----------------
We owe, of course, a large debt to the seminal work of
Turing ([1950](#bib.bib49)). Note that, while Turing’s paper is most often
cited for the “imitation game”, there are other very
interesting ideas in it, worthy of more attention from curious
readers, especially in the last section on learning machines. Turing
thought that a good way to construct a machine capable of passing his
famous test would be to develop a *child machine*, and teach it
further skills through various communication channels. These would
include sparse rewards shaping the behavior of the child machine, and
other information-rich channels such as language input from a teacher
and sensory information.
We share Turing’s goal of developing a child machine capable of
independent communication through natural language, and we also stress
the importance of sparse rewards. The main distinction between his and
our vision is that Turing assumed that the child machine would be
largely programmed (he gives an estimate of sixty programmers working
on it for fifty years). We rather think of starting with a machine
only endowed with very elementary skills, and focus on the capability
to learn as the fundamental ability that needs to be developed. This
further assumes educating the machine at first in
a simulated environment where an artificial teacher will train it, as we outlined
in our roadmap. We also diverge with respect to the imitation game,
since the purpose of our intelligent machine is not to fool human
judges into believing it is actually a real person. Instead, we aim to
develop a machine that can perform a similar set of tasks to those a human
can do by using a computer, an Internet connection and the ability to
communicate.
There has been a recent revival of interest in tasks measuring
computational intelligence, spurred by the empirical advances of
powerful machine-learning architectures such as multi-layered neural
networks (LeCun et al., [2015](#bib.bib20)), and by the patent inadequacy of the
classic version of the Turing test (Wikipedia, [2015c](#bib.bib55)). For
example, Levesque et al. ([2012](#bib.bib21)) propose to test systems on their
ability to resolve coreferential ambiguities (*The trophy would
not fit in the brown suitcase because it was too big…What was
too big?*). Geman et al. ([2015](#bib.bib13)) propose a “visual” Turing test
in which a computational system is asked to answer a set of
increasingly specific questions about objects, attributes and
relations in a picture (*Is there a person in the blue region? Is
the person carrying something? Is the person interacting with any
other object?*). Similar initiatives differ from ours in that they
focus on a specific set of skills (coreference, image parsing) rather
than testing if an agent can learn new skills. Moreover, these are
traditional evaluation benchmarks, unlike the hybrid
learning/evaluation ecosystem we are proposing.
The idea of developing an AI living in a controlled synthetic
environment and interacting with other agents through natural language
is quite old. The Blocks World of Winograd ([1971](#bib.bib56)) is probably the
most important example of early research in this vein. The approach
was later abandoned, when it became clear that the agents developed
within this framework did not scale up to real-world challenges
(see, e.g., Morelli et al., [1992](#bib.bib28)). The knowledge encoded in the
systems tested by these early simulations was manually programmed by
their creators, since they had very limited learning
capabilities. Consequently, scaling up to the real world implied
manual coding of all the knowledge necessary to cope with it, and this
proved infeasible. Our simulation is instead aiming at systems that
encode very little prior knowledge and have strong capabilities to
learn from data. Importantly, our plan is not to try to manually
program all possible scripts our system might encounter later, as in
some of the classic AI systems. We plan to program only the initial
environment, in order to kickstart the machine’s ability to learn and
adapt to different problems and scenarios. After the simulated
environment is mastered, scaling up the functionality of our Learner
will not require further manual work on scripting new situations, but
will rather focus on integrating real world inputs, such as those
coming from human users. The toy world itself is already designed to
feature novel tasks of increasing complexity, explicitly testing the
abilities of systems to autonomously scale up.
Still, we should not underestimate the drawbacks of synthetic
simulations. The tasks in our environment might directly address
some challenging points in the development of AI, such as learning with very weak supervision,
being able to form a structured long-term memory, and the ability of the
child machine to grow in size and complexity when encountering new problems.
However, simulating the real world can only bring us so far, and we might end up
overestimating the importance of some arbitrary phenomena at the
expense of others that might turn out to be more common in natural
settings. It may be important to bring reality into the
picture relatively soon. Our toy world should let the intelligent machine
develop to the point at which it is able to learn from and cooperate with
actual humans. Interaction with real-life humans will then naturally
lead the machine to deal with real-world problems. The issue of when
exactly a machine trained in our controlled synthetic environment is
ready to go out in the human world is open, and it should be explored
empirically. However, at the same time, we believe that having
the machine interact with humans before it can deal with basic problems
in the controlled environment would be pointless, and possibly even
strongly misleading.
Our intelligent machine shares some of its desired functionalities
with the current generation of automated personal assistants such as
Apple’s Siri and Microsoft’s Cortana. However, these are heavily
engineered systems that aim to provide a natural language interface
for human users to perform a varied but fixed set of tasks (similar
considerations also apply to artificial human companions and digital
pets such as Tamagotchi, see
Wikipedia, [2015a](#bib.bib53)). Such systems can
be developed by defining the most frequent use cases, choosing those
that can be solved with the current technology (e.g., book an air
ticket, look at the weather forecast and set the alarm clock for
tomorrow’s morning), and implementing specific solutions for each such
use case. Our intelligent machine is not intended to handle just a
fixed set of tasks. As illustrated by the example in Section
[3.3](#S3.SS3 "3.3 Interacting with the trained intelligent machine ‣ 3 A simulated ecosystem to educate communication-based intelligent machines ‣ A Roadmap towards Machine Intelligence"), the machine should be capable of learning
efficiently how to perform tasks such as those currently handled by
personal assistants, and more, just from interaction with the human
user (without a programmer or machine learning expert in the loop).
Architectures for software agents, and more specifically
*intelligent* agents, are widely studied in AI and related fields
(Nwana, [1996](#bib.bib30); Russell and Norvig, [2009](#bib.bib34)). We cannot review this ample
literature here in the detail needed to position our proposal precisely with
respect to it. We simply remark that we are not aware of other
architectures that are as centered on learning and communication
as ours. Interaction plays a central role in the study of
multiagent systems (Shoham and Leyton-Brown, [2009](#bib.bib37)). However, the
emphasis in this research tradition is on how conflict resolution and
distributed problem solving evolve in typically large groups of
simple, mostly scripted agents. For example, traffic modeling is a
classic application scenario for multiagent systems. This is very
different from our emphasis on linguistic interaction for the purposes
of training a single agent that should become independently capable of
very complex behaviours.
Tenenbaum ([2015](#bib.bib46)), like us, emphasizes the need to focus on basic
abilities that form the core of intelligence. However, he takes naive
physics problems as the starting point, and discusses specific classes
of probabilistic models, rather than proposing a general learning
scenario. There are also some similarities between our proposal and
the research program of Luc Steels
(e.g., Steels, [2003](#bib.bib42), [2005](#bib.bib43)), who lets robots evolve
vocabularies and grammatical constructions through interaction in a
situated environment. However, on the one hand his agents are actual
robots subject to the practical hardware limitations imposed by the
need to navigate a complex natural environment from the start; on the
other, the focus of the simulations is narrowly on language
acquisition, with no further aim to develop broadly intelligent
agents.
We have several points of contact with the semantic parsing
literature, such as navigation tasks in an artificial world
(MacMahon et al., [2006](#bib.bib23)) and reward-based learning from natural
language instructions
(Chen and Mooney, [2011](#bib.bib8); Artzi and Zettlemoyer, [2013](#bib.bib2)). The agents developed
in this area can perform tasks, such as learning to execute
instructions in natural environments by interacting with humans
(Thomason et al., [2015](#bib.bib47)), or improving performance on real-life
video-games by consulting the instruction manual
(Branavan et al., [2012](#bib.bib6)), that we would want our intelligent
machines to also be able to carry out. However, current
semantic-parsing-based systems achieve these impressive feats by
exploiting architectures tuned to the specific tasks at hand, and they
rely on a fair amount of hard-wired expert knowledge, in particular
about language structures (although recent work is moving towards a
more knowledge-lean direction, see for example
Narasimhan et al., [2015](#bib.bib29), who train a neural network to play
text-based adventure games using only text descriptions as input and
game reward as signal). Our framework is meant to encourage the
development of systems that should eventually be able to perform
similar tasks, but getting there incrementally, starting with almost
no prior knowledge and first learning from their environment a set of
simpler skills, and how to creatively merge them to tackle more
ambitious goals.
The last twenty years have witnessed several related proposals on
learning to learn (Thrun and Pratt, [1997](#bib.bib48)), lifelong learning
(Silver et al., [2013](#bib.bib38)) and continual learning
(Ring, [1997](#bib.bib33)). Much of this work is theoretical in nature and
focuses on algorithms rather than on empirical challenges for the
proposed models. Still, the general ideas being pursued are in line
with our program. Ring ([1997](#bib.bib33)), in particular, defines a
continual-learning agent whose experiences “occur sequentially, and
what it learns at one time step while solving one task, it can use
later, perhaps to solve a completely different task.” Ring’s
desiderata for the continual learner are remarkably in line with
ours. It is “an autonomous agent. It senses, takes actions, and
responds to the rewards in its environment. It learns behaviors and
skills while solving its tasks. It learns incrementally. There is no
fixed training set; learning occurs at every time step; and the skills
the agent learns now can be used later. It learns
hierarchically. Skills it learns now can be built upon and modified
later. It is a black box. The internals of the agent need not be
understood or manipulated. All of the agent’s behaviors are developed
through training, not through direct manipulation. Its only interface
to the world is through its senses, actions, and rewards. It has no
ultimate, final task. What the agent learns now may or may not be
useful later, depending on what tasks come next.” Our program is
definitely in the same spirit, with an extra emphasis on
interaction.
Mitchell et al. ([2015](#bib.bib26)) discuss NELL, the most fully realized
concrete implementation of a lifelong learning architecture. NELL is
an agent that has been “reading the Web” for several years to
extract a large knowledge base. Emphasis is on the never-ending nature
of the involved tasks, on their incremental refinement based on what
NELL has learned, and on sharing information across tasks. In this
latter respect, this project is close to multi-task learning
(Ando and Zhang, [2005](#bib.bib1); Caruana, [1997](#bib.bib7); Collobert et al., [2011](#bib.bib9)), that focuses
on the idea of parameter sharing across tasks. It is likely that a
successful learner in our framework will exploit similar strategies,
but our current focus lies on defining the tasks, rather than on how
to pursue them.
Bengio et al. ([2009](#bib.bib3)) propose the related idea of curriculum
learning, whereby training data for a single task are ordered
according to a difficulty criterion, in the hope that this will lead
to better learning. This is motivated by the observation that humans
learn incrementally when developing complex skills, an idea that has
also previously been studied in the context of recurrent neural
network training by Elman ([1993](#bib.bib10)). The principle of incremental learning is
also central to our proposal. However, the fundamental aspect for us
is not a strict ordering of the training data for a specific task, but
incrementality in the *skills* that the intelligent machine
should develop. This sort of incrementality should in turn be boosted
by designing separate tasks with a compositional structure, such that
the skills acquired from the simpler tasks will help to solve the more
advanced ones more efficiently.
The idea of incremental learning, motivated by the same considerations
as in the papers we just mentioned, also appears in
Solomonoff ([2002](#bib.bib41)), a work which has much earlier roots in
research on program induction
(Solomonoff, [1964](#bib.bib39), [1997](#bib.bib40); Schmidhuber, [2004](#bib.bib35)). Within
this tradition, Schmidhuber ([2015](#bib.bib36)) reviews a large literature and
presents some general ideas on learning that might inspire our search
for novel algorithms. Genetic programming (Poli et al., [2008](#bib.bib31)) also
focuses on the reuse of previously found sub-solutions, speeding up
the search procedure in this way. Our proposal is also related to that
of Bottou ([2014](#bib.bib5)), in its vision of compositional machine
learning, although he only considers composition in limited domains,
such as sentence and image processing.
We share many ideas with the reinforcement learning
framework (Sutton and Barto, [1998](#bib.bib45)). In reinforcement learning, the
agent chooses actions in an environment in order to maximize some
cumulative reward over time. Reinforcement learning is particularly
popular for problems where the agent can collect information only by
interacting with the environment. Given how broad this definition is,
our framework could be considered as a particular instance of it. Our
proposal is however markedly different from standard reinforcement
learning work (Kaelbling et al., [1996](#bib.bib19)) in several
respects. Specifically, we emphasize language-mediated, interactive
communication, we focus on incremental strategies that encourage agents to solve
tasks by reusing previously learned knowledge and we aim to limit the
number of trials an agent gets in order to accomplish a certain goal.
Mnih et al. ([2015](#bib.bib27)) recently presented a single neural network
architecture capable of learning a set of classic Atari games using
only pixels and game scores as input (see also the related idea of
“general game playing”, e.g., Genesereth et al., [2005](#bib.bib14)). We
pursue a similar goal of learning from a low-level input stream and
reward. However, unlike these authors, we do not aim for a single
architecture that can, disjointly, learn an array of separate tasks,
but for one that can incrementally build on skills learned on previous
tasks to perform more complex ones. Moreover, together with reward, we
emphasize linguistic interaction as a fundamental means to foster skill
extension. Sukhbaatar et al. ([2015](#bib.bib44)) introduce a sandbox to design
games with the explicit purpose to train computational agents in
planning and reasoning tasks. Moreover, they stress a curriculum
strategy to foster learning (making the agent progress through
increasingly more difficult versions of the game). Their general
program is aligned with ours, and the sandbox might be useful to
develop our environment. However, they do not share our emphasis on
communication and interaction, and their approach to incremental
learning is based on increasingly more difficult versions of the same
task (e.g., increasing the number of obstacles), rather than on
defining progressively more complex tasks, such that solving
the later ones requires composing solutions to earlier ones, as we are
proposing. Furthermore, the tasks currently considered within the
sandbox do not seem to be challenging enough to require new learning approaches, and may be
solvable with current techniques or minor modifications thereof.
Mikolov ([2013](#bib.bib24)) originally discussed a preliminary
version of the incremental task-based approach we are more fully
outlining here. In a similar spirit, Weston et al. ([2015](#bib.bib52)) present a set of
question answering tasks based on synthetically generated
stories. They also want to foster non-incremental progress in AI, but
their approach differs from ours in several crucial aspects. Again, there is
no notion of interactive, language-mediated learning, a classic
train/test split is enforced, and the tasks are not designed to
encourage compositional skill learning (although Weston and
colleagues do emphasize that the same system should be used for all
tasks). Finally, the evaluation metric is notably different from ours: while
we aim to minimize the number of trials it takes for the machine to master
the tasks, their goal is good performance on held-out data. This can
be a serious drawback for work involving artificial tasks, since in
our view the goal should be to develop a machine that learns as fast as possible,
if it is to have any hope of scaling up and generalizing in more complex
scenarios.
One could think of solving sequence-manipulation problems such as
those constituting the basis of our learning routine with
relatively small extensions of established machine learning techniques
(Graves et al., [2014](#bib.bib15); Grefenstette et al., [2015](#bib.bib16); Joulin and Mikolov, [2015](#bib.bib18)). As discussed in the previous section, for simple tasks that involve
only a small, finite number of configurations, one could be apparently
successful even just by using a look-up table storing all possible
combinations of inputs and outputs. The above-mentioned works, which
aim to learn algorithms from data, also add a long-term memory (e.g.,
a set of stacks), but they use it to store the data only, not the
learned algorithms. Thus, such approaches fail to generalize in
environments where solutions to new tasks are composed of already
learned algorithms.
Similar criticism holds for approaches that try to learn certain
algorithms by using an architecture with a strong prior towards their
discovery, but not general enough to represent even small
modifications. To give an example from our own work: a recurrent
neural network augmented with a stack structure can form a simple kind
of long-term memory and learn to memorize and repeat sequences in the
reversed order, but not in the original one
(Joulin and Mikolov, [2015](#bib.bib18)). We expect a valid solution to the
algorithmic learning challenge to utilize a small number of training
examples, and to learn tasks that are closely related at an increasing
speed, i.e., to require fewer and fewer examples to master new skills
that are related to what is already known. We are not aware of any
current technique addressing these issues, which were the very reason
why algorithmic tasks were originally proposed by
Mikolov ([2013](#bib.bib24)). We hope that this paper will motivate
the design of the genuinely novel methods we need in order to develop
intelligent machines.
6 Conclusion
-------------
We defined basic desiderata for an intelligent machine, stressing
learning and communication as its fundamental abilities. Contrary to
common practice in current machine learning, where the focus is on
modeling single skills in isolation, we believe that all aspects of
intelligence should be holistically addressed within a single system.
We proposed a simulated environment that requires the intelligent
machine to acquire new facts and skills through communication.
In this environment, the machine must learn to
perform increasingly more ambitious tasks, being naturally induced to
develop complex linguistic and reasoning abilities.
We also presented some conjectures on the properties of the
computational system that the intelligent machine may be based on.
These include learning of algorithmic patterns from a few examples without strong
supervision, and development
of a long-term memory to store both data and learned skills. We
contrasted these properties with currently accepted paradigms in
machine learning, to show that current methods are far from adequate
and that we must strive to develop genuinely novel, non-incremental
techniques.
This roadmap constitutes only the beginning of a long journey towards
AI, and we hope other researchers will join us in pursuing the
goals it outlines.
Acknowledgments
---------------
We thank Léon Bottou, Yann LeCun, Gabriel Synnaeve, Arthur Szlam,
Nicolas Usunier, Laurens van der Maaten, Wojciech Zaremba and others
from the Facebook AI Research team, as well as Gemma Boleda, Katrin
Erk, Germán Kruszewski, Angeliki Lazaridou, Louise McNally,
Hinrich Schütze and Roberto Zamparelli for many stimulating
discussions. An early version of this proposal has been discussed in
several research groups since 2013 under the name *Incremental
learning of algorithms* (Mikolov, [2013](#bib.bib24)). |
83911ab4-aa2e-4a3b-b598-fd40dccf760c | trentmkelly/LessWrong-43k | LessWrong | All Debates Are Bravery Debates
“I don’t practice what I preach because I’m not the kind of person I’m preaching to.”
— Bob Dobbs
I.
I read Atlas Shrugged probably about a decade ago, and felt turned off by its promotion of selfishness as a moral ideal. I thought that was basically just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.
Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters).
II.
The religious and the irreligious alike enjoy making fun of Reddit’s r/atheism, which combines an extreme strawmanning of religious positions with childish insults and distasteful triumphalism. Recently the moderators themselves have become a bit embarrassed by it and instituted some rules intended to tone things down, leading to some of the most impressive Internet drama I have ever seen. In its midst, some people started talking about what the old strawmanning triumphalist r/atheism meant to them (see for example here).
A lot of them were raised in religious families where they would have been disowned if they had admitted to their atheism. Some of them were disowned for admitting to atheism, or lost boyfriends/girlfriends, or were terrified they might go to Hell. And then they found r/atheism, and saw people making fun of religion, and insulting it, in really REALLY offensive ways. And no one was striking them down with lightning. No one was shouting them down. No one was doing much of anyth |
f3c82591-df9a-4c82-a4b0-9796345a4050 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | EU AI Act passed Plenary vote, and X-risk was a main topic
(mildly rewritten version of my [EA forum](https://forum.effectivealtruism.org/posts/ctPrrzFnXGyWrmK3w/eu-ai-act-passed-vote-in-plenary-meeting) post)
The EU AI Act was originally proposed in 2020 as a very "EU regulates stuff first" kind of legislation, trying to make sure EU values are upheld (fairness, transparency, democracy, etc). Several revisions (and some [lobbying](https://time.com/6288245/openai-eu-lobbying-ai-act/)) later, GPAI (general purpose AI) and foundation model language was added, and it started looking a little more X-risk friendly.
After some [recent political uncertainty](https://www.euractiv.com/section/artificial-intelligence/news/ai-acts-plenary-vote-cast-with-uncertainty-as-political-deal-crumbles/), it passed with a strong majority at the EU Plenary meeting.
I found it fascinating to watch the live session (from June 13th, the vote was on the 14th), where the Act was discussed by various EU parties. A few things that stood out to me:
* I was surprised that many EU country representatives mentioned the Open Letters and Existential Risk as a real concern, even though the EU AI Act was not originally intended to address it (though, it now has GPAI/foundation model bits added). Transparency and Fairness took a back seat, to some extent.
* Real-time biometric monitoring was a big debate topic - whether or not to give law enforcement an exemption for national security. Currently it looks like it will not be allowed, other than post-incident with special approval. This may be a useful lever to keep in mind for policy work
Others who watched the stream, feel free to mention insights in the comments.
> Linked [here](https://multimedia.europarl.europa.eu/en/webstreaming/event_20230613-0900-PLENARY?start=230613070143&end=230613123309&) (relevant timestamp 12:39 - 14:33)
With the recent appointment of [Ian Hogarth to the UK Foundation Model taskforce](https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce), and US talks of regulation getting stronger, I think we are in for interesting times. But it also seems like AI X-risk is a lot more mainstream, which I did not expect to be able to say. |
06a480fd-739c-4032-abc8-2d8326a47d58 | trentmkelly/LessWrong-43k | LessWrong | [Link] The Coming Plague
Related to: Forty Days , Low Hanging Poop
From professor Gregory Cochran's blog West Hunters.
> Laurie Garrett has an article out in the Washington Post. She says that there's no point in trying to block the spread of Ebola by travel bans.
>
> The problem is, she’s full of crap. Look, there are two possible scenarios. In both of them, r, the number of new cases generated by each case, is greater than 1 in parts of West Africa – which is why you get exponential growth, why you have an epidemic. If r < 1.0, the series converges – a case generates a few extra cases before dying out.
>
> Everything we know so far suggests that even though it is greater than 1.0, r in West Africa is not all that big (maybe around 2), mostly because of unfortunate local burial customs and incompetent medical personnel.
>
> It seems highly likely that r in US conditions is well under 1.0, which means you can't get an epidemic. However, r is probably not zero. It doesn't mean that you can't get a few cases per imported case, from immediate contact and hospital mistakes. As an example, suppose that on average each case imported to the US generated a total of two other cases before dying out (counting secondary, tertiary, etc. infections). Then, on average, the number of US citizens infected would be twice the number of infected visitors.
>
> Now suppose that a travel ban blocked 80% of sick people trying to fly here from Liberia. We’d have 80% fewer cases in US citizens: and that would be a good thing. Really it would. Does Laurie Garrett understand this? Obviously not. She is a senior fellow for global health at the Council on Foreign Relations, but she is incompetent. Totally useless, like virtually everyone else in public life.
>
> We hear people from the CDC saying that any travel restrictions would backfire, but that’s nonsense too. One might wonder why they say such goofy things: I would guess that a major reason is that they were taught in school that quarantines are u |
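The quoted arithmetic can be made explicit (a minimal sketch; r = 2/3 is chosen to reproduce the quote's "two other cases" per imported case):

```python
# With r < 1, each imported case generates a convergent geometric series of
# descendants: total extra cases per import = r + r^2 + ... = r / (1 - r).
r = 2 / 3
total_per_import = r / (1 - r)   # = 2.0, matching the quote's example

imports = 100
print(imports * total_per_import)              # 200.0 domestic cases
print(imports * (1 - 0.8) * total_per_import)  # 40.0 if a ban blocks 80% of imports
```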
175c9dab-d675-4700-ac47-9b52d5b8d105 | trentmkelly/LessWrong-43k | LessWrong | Open Thread, September, 2010-- part 2
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
|
0777fa96-8cd8-4b1d-9ece-c58b9361d894 | trentmkelly/LessWrong-43k | LessWrong | Is requires ought
The thesis of this post is: "Each 'is' claim relies implicitly or explicitly on at least one 'ought' claim."
I will walk through a series of arguments that suggest that this claim is true, and then flesh out the picture towards the end.
(note: I discovered after writing this post that my argument is similar to Cuneo's argument for moral realism; I present it anyway in the hope that it is additionally insightful)
Epistemic virtue
There are epistemic virtues, such as:
* Try to have correct beliefs.
* When you're not sure about something, see if there's a cheap way to test it.
* Learn to distinguish between cases where you (or someone else) is rationalizing, versus when you/they are offering actual reasons for belief.
* Notice logical inconsistencies in your beliefs and reflect on them.
* Try to make your high-level beliefs accurately summarize low-level facts.
These are all phrased as commands, which are a type of ought claim. Yet, they all assist one following such commands to have more accurate beliefs.
Indeed, it is hard to imagine how someone who does not (explicitly or implicitly) follow rules like these could come to have accurate beliefs. There are many ways to end up in lala land, and guidelines are essential for staying on the path.
So, "is" claims that rely on the speaker of the claim having epistemic virtue to be taken seriously, rely on the "ought" claims of epistemic virtue itself.
Functionalist theory of mind
The functionalist theory of mind is "the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part." For example, according to functionalism, for myself to have a world-representing mind, part of my brain must be performing the function of representing the world.
I will not here argue for the functionalist theory of mind, and instead will assume it to be true.
Consider the following |
30768cfe-ccc4-4605-a83f-e0e5beb4b020 | trentmkelly/LessWrong-43k | LessWrong | Topological truth predicates: Towards a model of perfect Bayesian agents
In this post, I'll introduce a new kind of self-referential "truth predicate" (of sorts), which avoids diagonalization by placing a certain topological condition on the formulas it can be applied to. In future posts, I'll show how this can be used to model perfect Bayesian agents that are able to reason about a world containing other, equally powerful agents, and how, in particular, this yields a variant of AIXI that can reason about a world containing other instances of the same kind of AIXI.
This is inspired by the way that classical game theory avoids diagonalization by its use of mixed strategies, and in fact I'll show that if they have enough common knowledge, these agents will play Nash equilibria against each other; but this framework doesn't require players to be special kinds of objects, or at least not to the degree that classical game theory does. (Besides classical game theory, Paul Christiano's reflection principle for probabilistic logic was the other main inspiration for this work.)
Perfect Bayesian agents
The fact that AIXI can't model environments containing other AIXIs isn't as specific to AIXI as it may seem; the way I see it, it's really a problem with the decision-theoretic ideal of a logically omniscient, perfect Bayesian agent. A perfect Bayesian may have uncertainty about which possible world it's living in, but given any possible world in its hypothesis space, it's supposed to know everything that happens in that world (so that it can calculate expected utility). For example, if the world is some sort of deterministic cellular automaton, and our agent has uncertainty about the initial state of that automaton, then for every particular initial state, the agent needs to figure out what will happen later---despite the fact that it itself lives inside that automaton.
Of course, in reality, that isn't literally possible. But having models that idealize some aspects of reality is good, because it allows us to ignore complications that are irrel |
c6c91609-fe11-424f-8a10-a67cdb3c0bba | trentmkelly/LessWrong-43k | LessWrong | Celiacs don't need to live in fear
TL;DR: Less sensitive celiacs can probably have trace amounts of gluten cross contamination with low risk of health problems, since celiacs who deliberately eat non-trivial amounts of gluten are mostly ok.
Celiac disease is an autoimmune condition where your immune system becomes triggered by gluten. This leads to a variety of strange effects on your body. Gastrointestinal distress symptoms (diarrhea, flatulence, irritable bowels) are the most obviously connected, but a large proportion (my guess is 30-50%[1]) of celiac sufferers don't get these. Other common symptoms are weight loss, iron deficiency, weak bones (density loss), psoriasis (itchy skin patches) and damage to your small intestine (villous atrophy). 1-2% of people globally have celiac, though most are not diagnosed.
The universal treatment ordered is a strict gluten free diet. This makes sense, and for me personally was relatively easy to do given the range of options these days.
The killer is that to do it strictly you must avoid cross contamination at all costs. While cooking this is not too hard as long as you use separate utensils. But it means you practically cannot eat out, since you don't control the preparation.
It's difficult to classify restaurants as 'safe' or not, which severely restricts your ability to eat out. Many restaurants that are actually very careful will still tell you they can't guarantee it's gluten free. How do you know whether they are saying that because they throw spaghetti at each other, or if they just understand how hard it is to clean all contamination traces and don't want to be sued?
On the other hand, many restaurants insist they are safe, but do not know the implications of catering for celiacs. So they fry the fries in oil that has fried glutenful things, which transfers a surprising amount of gluten.
The personal cost of this is high. Eating out is important for convenience options and for social inclusion. One paper notes that "GFD is very restrictive, result |
9171b4b1-dfbf-4763-8800-5a678da03a9e | trentmkelly/LessWrong-43k | LessWrong | Incentives from a causal perspective
Post 4 of Towards Causal Foundations of Safe AGI, preceded by Post 1: Introduction, Post 2: Causality, and Post 3: Agency.
By Tom Everitt, James Fox, Ryan Carey, Matt MacDermott, Sebastian Benthall, and Jon Richens, representing the Causal Incentives Working Group. Thanks also to Toby Shevlane and Aliya Ahmad.
“Show me the incentive, and I’ll show you the outcome” – Charlie Munger
Predicting behaviour is an important question when designing and deploying agentic AI systems. Incentives capture some key forces that shape agent behaviour,[1] which don’t require us to fully understand the internal workings of a system.
This post shows how a causal model of an agent and its environment can reveal what the agent wants to know and what it wants to control, as well as how it will respond to commands and influence its environment. A complementary result shows that some incentives can only be inferred from a causal model, so a causal model of the agent’s environment is strictly necessary for a full incentive analysis.
Value of information
What information would an agent like to learn? Consider, for example, Mr Jones deciding whether to water his lawn, based on the weather report, and whether the newspaper arrived in the morning. Knowing the weather means that he can water more when it will be sunny than when it will be raining, which saves water and improves the greenness of the grass. The weather forecast therefore has information value for the sprinkler decision, and so does the weather itself, but the newspaper arrival does not.
We can quantify how useful observing the weather is for Mr Jones, by comparing his expected utility in a world in which he does observe the weather, to a world in which he doesn’t. (This measure only makes sense if we can assume that Mr Jones adapts appropriately to the different worlds, i.e. he needs to be agentic in this sense.)
The causal structure of the environment reveals which variables provide useful information. In particular, the |
201326a5-f7ee-48b3-b04d-840ed03e6678 | trentmkelly/LessWrong-43k | LessWrong | AI Safety reading group
I am hosting a weekly AI Safety reading group, and perhaps someone here would be interested in joining.
Here is what the reading group has covered so far:
http://airca.dk/reading_group.htm
Next week, on Wednesday the 1st of February 19:45 UTC, we will discuss "How Feasible is the Rapid Development of Artificial Superintelligence?" by Kaj Sotala. I publish some slides before each meeting, and present the article, so you can also join if you have not read the article.
To join, add me on Skype ("soeren.elverlin"). General coordination happens on a Facebook group, at
https://www.facebook.com/groups/AISafetyAarhus/
You can see the time in your local timezone here:
https://www.timeanddate.com/worldclock/fixedtime.html?msg=AI+Safety+Reading+Group&iso=20170201T1945&p1=1440&ah=1 |
7bdab0c1-d8b6-495c-822b-fe920beeac78 | trentmkelly/LessWrong-43k | LessWrong | Willpower Thermodynamics
Edit 2019: By the spark, was everything I wrote before mid-2018 idiotic? Apparently.
CONTENT WARNING: A COUPLE LWERS APPARENTLY THINK THAT THE CONCEPT OF EGO DEPLETION—ALSO KNOWN AS WILLPOWER DEPLETION—IS A MEMETIC HAZARD, THOUGH I FIND IT HELPFUL. ALSO, THE MATERIAL PRESENTED HERE WON'T FIT EVERYONE'S EXPERIENCES.
What happens if we assume that the idea of ego depletion is basically correct, and try to draw an analogy between thermodynamics and willpower?
Figure 1. Thermodynamics Picture
You probably remember seeing something like the above diagram in a chemistry class. The diagram shows how unstable, or how high in energy, the states that a material can pass through in a chemical reaction are. Here's what the abbreviations mean:
* SM is the starting material.
* TS1 and TS2 are the two transition states, which must be passed through to go from SM to EM1 or EM2.
* EM1 and EM2 are the two possible end materials.
The valleys of both curves represent configurations a material may occupy at the start or end of a chemical reaction. Lower energy valleys are more stable. However, higher peaks can only be reliably crossed if energy is available from e.g. the temperature being sufficiently high.
The main takeaway from Figure 1 is that reactions which produce the most stable end materials, like ending material 2, from a given set of starting materials aren't always the reactions which are easiest to make happen.
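The chemistry half of the analogy implicitly leans on the standard Arrhenius relation (textbook background, not from the original post): the rate $k$ of crossing a barrier falls off exponentially with the barrier height $E_a$,

$$k = A \, e^{-E_a / (RT)}$$

so the lower-barrier path dominates even when its product (like EM1) is less stable than the alternative (EM2).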
Figure 2. Willpower Picture
We can draw a similar diagram to illustrate how much stress we lose while completing a relaxing activity. Here's what the abbreviations used in Figure 2 mean:
* SM is your starting mood.
* TS is your state of topmost stress, which depends on which activity you choose.
* EM1 and EM2 are your two possible ending moods.
Above, the valley on the left represents how stressed you are before starting one of two possible relaxing activities. The peak in the middle represents how stressed you'll be when attempting to get the act |
7b0cb6e6-1ec2-48a2-b2c6-a3d88261f7c2 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Me, myself and AI - DeepMind: The Podcast (S2, Ep7)
hello and welcome back to deepmind the
podcast
this episode is all about how ai is
already having an impact on the world
around us
shall we begin
[Music]
uh excuse me what are you doing starting
without me
i'm the real hannah fry
i'm only trying to help i heard you were
unavailable to present this episode so i
offered to step in
unavailable i'll take it from here thank
you very much
that voice you just heard there it was
generated using wavenet
voice synthesis technology trained on
audio recordings of my voice
in this episode we're going to be taking
a look at some of the ways that
deepmind's technology is already being
used out in the real world
including how wavenet can recreate the
voices of people with vocal impairments
it was really touching to see his family
and him listen to the voice his family
cried
because it's something that's so
personal
how neural networks can help anticipate
natural disasters
it is important to know if there's going
to be a buildup of a catastrophic storm
that's going to create flooding
and how ai could even transform the game
of football
so a coach might say to the system what
will happen if i move fabinho from
defense to midfield
welcome to episode 7 of the deepmind
podcast
me myself and ai
let's go back to that snippet of audio
from the beginning of this episode
generated by wavenet the deepmind
podcast aficionados among you may
remember that wavenet doesn't just
generate speech
it can also compose music
and we used a little bit of it in our
first series
but when it comes to creating human
sounding voices wavenet has improved
considerably over the past few years the
motivation however has stayed the same
everything from reading documents out
loud for the visually impaired to making
your smart speaker sound more natural
here's how zachary gleicher a product
manager on deepmind's applied team put
it
text-to-speech research has been
happening for decades and everyone knows
that texas speech voices have
historically sounded pretty robotic a
classic texas speech voice is the
stephen hawking voice british people
describe its accent as american here he
is speaking on bbc radio 4's desert
island discs in 1999 but the americans
say it is scandinavian
it's not because
people want robotic voices it's because
it's an extremely challenging problem
humans have evolved to be able to
understand very subtle nuances in how
things are set and if there's one little
thing that sounds off then people are
like oh that sounds robotic like if we
were to create a
dog barking generator
people would be like oh my god that
sounds just like a dog and you wouldn't
be able to perceive
any of the differences because our
brains not trained to know what good dog
barking sounds like meanwhile your dog's
in the corner being like it's so fake
exactly
[Music]
before wavenet the general method for
generating speech was called
concatenative text-to-speech
you'd get someone in a recording studio
and you'd record hours and hours
trying to capture all the phonemes in
the alphabet so that you have a real
diverse recording set
in production you stitch together the
voice recordings so imagine you wanted
to say the cat sat on the mat
and you had a recording of someone
saying the word the and you had a
recording of someone saying cat you
could stitch those two words together
but the problem there is that the voice
is going to sound like that
instead of stitching different bits of
pre-recorded words and syllables
together
wavenet directly models the raw waveform
of the voice building up less than a
millisecond of audio at a time
first it will scan the text you give it
for abbreviations and convert them to
something that can be fed into the
speech generator
like changing
hwy 101 to highway 101
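a rough illustration of that normalization step (invented toy rules, not wavenet's actual front end):

```python
# Toy text normalization: expand abbreviations and spell out digits so the
# downstream speech model only ever sees pronounceable word tokens.
import re

ABBREVIATIONS = {r"\bhwy\b": "highway", r"\bdr\b": "doctor"}
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def normalize(text: str) -> str:
    text = text.lower()
    for pattern, expansion in ABBREVIATIONS.items():
        text = re.sub(pattern, expansion, text)
    text = re.sub(r"\d", lambda m: " " + DIGITS[m.group()] + " ", text)
    return " ".join(text.split())  # collapse the extra whitespace

print(normalize("Hwy 101"))  # "highway one zero one"
# (a production system would say "one oh one" -- context-dependent rules
#  like that are exactly what makes real normalization hard)
```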
[Music]
the second step is to try and predict
the intonation of how something should
be said based on the text around it
the word "the" can be read as "thee" or "thuh"
depending on where in a sentence it
falls
each would sound wrong if it was used in
the wrong context
now the third and final part is the
acoustic modeling
acoustic modeling focuses on who it
sounds like
if i pretend to sound like my brother on
the phone it still sounds like me my
friend will be able to tell it's me if i
say a sentence with a different tone of
voice you still know it's my voice
back when deepmind launched wavenet in
2016 you needed about four hours worth
of audio samples from a person to model
how their voice sounds
but now you can do it with just a few
minutes worth of audio
one of the big breakthroughs was a
process called fine tuning which makes
it possible to co-train voices together
google has built an enormous data set
with professional voice actors reading
out the same text
the model learns from all of these
samples how particular words are
pronounced
each new voice that is added to the
database results in an improvement to
all of the other voices and all that's
then needed is a small sample of a new
voice to provide the finishing touches
if you like that make the voice unique
to that person
that's why we call it fine tuning
because it's a way to just kind of
fine-tune the model based off that one
additional speaker because the
difference between your voice and my
voice for instance even though you're
speaking a different accent as a male
voice actually the way that we roll from
one word to another will have lots of
similarities yeah of course
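a minimal sketch of that pre-train-then-fine-tune recipe (pytorch-style pseudocode; the model class and datasets are hypothetical placeholders):

```python
import torch

model = MultiSpeakerTTS()                    # hypothetical multi-speaker model
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stage 1: co-train on the large multi-speaker corpus, so pronunciation
# knowledge is learned jointly from every voice in the database.
for text, audio, speaker_id in multi_speaker_corpus:   # hypothetical dataset
    loss = model.loss(text, audio, speaker_id)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune on ~10 minutes of one new voice -- only the
# "finishing touches" for that speaker still need to be learned.
opt = torch.optim.Adam(model.parameters(), lr=1e-5)    # gentler updates
for text, audio in new_voice_samples:                  # hypothetical dataset
    loss = model.loss(text, audio, speaker_id=NEW_SPEAKER)
    opt.zero_grad(); loss.backward(); opt.step()
```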
after getting the all-important consent
from the person whose voice you're
creating it's as simple as recording
around 10 minutes of high quality audio
and matching those up with written
transcripts for the model to be trained
on
given that you had high fidelity good
recordings of yourself because you are a
podcast creator we were able to do that
without having to send you back into the
recording studio i had a hannah fry
voice bot where i could type anything
the power the power
hi there i'm a mathematician author and
podcaster who's fascinated by artificial
intelligence and i'm the real doctor
hannah fry
it's really good i know it's good what's
really awkward is that it's picked up on
a couple of the indonesians that i know
i must make but i've never really
noticed that i'd make yeah like did you
hear it went fascinating
[Laughter]
i love it see it knows you better than
you know yourself oh it's so cringe god
do i sound like that there were two
words that i thought sounded a bit off
one was mathematician
there was like a very hard bit in the
middle which i think isn't how i would
say the word i'm a mathematician author
and podcaster and i'm the real doctor
hannah fry
the second thing was how i said my name
it's a bit like you know when someone
reads your phone number back to you but
rather than saying like oh seven eight
one three they're like oh seven eight
one three video and then you're like
yeah there's something's gone wrong in
my mind yeah yeah if you synthesize long
sentences you'll notice there are some
things where you'll be like oh that
sounded a little weird texas speech
isn't solved yet we've reached the point
where voices sound perfectly natural in
many instances
and the challenge now is largely about
how natural it is given a certain
context
so for example if i wanted a
text-to-speech system that would
say
oh hannah i really like your sweater
now you say say it sarcastically hannah
wait oh
that's where we're really lacking is
like how do you capture everything
when i'm not trying to crack one of the
unsolved millennium prize problems you
can find me chillaxing with a cup of tea
and that quintessentially british tea
time snack a scone
am i that breathy bloody hell
sound like i'm on a sex line
the other thing that's worth saying
actually about that the way i'm having a
conversation with you now where i'm a
little bit more up and down and a bit
more energetic say
is a different sort of voice the audio
that this was trained on was the script
that i read out for series one so
inevitably then it will end up being in
that style exactly people will be like
make the voice sound happier make the
voice sound sadder and that's really
hard if you don't have examples because
the model has to learn what happy hannah
sounds like eventually though could you
have a system where the ai understands
how a happy voice differs from one
that's reading a podcast script
and can make those changes appropriately
yeah you can make hannah sound generally
happier but for people who know you
really well it's just like wait it
sounds slightly off because you might
have certain quirks about your voice
that can only be learned if you hear how
you say something you know if you always
elongate a certain word when you're
happy
as people do just so happy yeah yeah i
don't know if that's how you say things
when you're excited fabulous
i don't know everyone has their quirks
this next recording hints at how this
could be used dangerously you could
use the text-to-speech synthesizer to
say anything
hello i'm dr hannah fry and i'm here to
tell you that ufos are real when i went
into my garden yesterday i noticed these
strange dark circles on my lawn
how can you make sure that it's not used
for nefarious purposes
we thought a lot about this technology
on how it could be abused i think like
the thing that we care most about
is that people's voices are not created
without their consent
that's why we have not open sourced the
models we haven't made the data sets
available to mitigate a lot of those
risks but also there's a lot of cool
mitigations i think one that excites me
is that you need a script to be able to
create a voice
and
you could have that script be
you saying that i give consent for my
voice to be created
there is some research that's being done
that watermarks audio
is the idea that in creating this
artificial voice
you deliberately imprint tiny audio
signatures that you could see with a
certain piece of software perhaps but
that are
inaudible to the human ear so then you
can go in and say ah look this one is
fake exactly but here's the thing
watermarks could be removed people
might not consider that it's fake and
there are a lot of companies who are
releasing this technology it's not like
deepmind has the secret sauce there's no
surefire silver bullet way to stop this
technology being used by harm there's
ways to mitigate it but the same way
that we don't trust photos today
everyone sees a photo and be like is
that photoshopped i think it's going to
be the same with audio for better for
worst i think people are going to just
not trust what people are saying within
an audio recording and it's unfortunate
to see people
using it irresponsibly
because it might spoil a lot of use
cases that are really helpful for
society
zachary told me about a partnership
between deepmind and google called
project euphonia in which wavenet
technology was used to recreate the
voice of tim shaw
an american footballer who is diagnosed
in 2013 with als a progressive
neurological disease that causes speech
impairment
tim was a particularly good candidate
for wavenet
a lot of people who get diagnosed with
als are asked to do some voice banking
where they'll record themselves so that
they can replay their voice in the
future like if there's a song that they
love to sing but not everyone does that
and with tim shaw he had a lot of
recordings of himself because he was
interviewed on tv
it's that amazing pre-game electricity
the butterflies are there and i'm ready
to hit somebody so you might want to
look out
researchers use 30 minutes of recordings
to create tim shaw's synthesized voice
unfortunately when tim sat down with his
family to hear his own voice for the
first time in years
he struggled to recognize it i know you
remember that
it had
[Music]
if you hear an old recording of yourself
when you were a kid
you're like did i sound like that but
other people do remember how your voice
sounded his family did i want to explain
to you why it's so difficult for me to
speak the diagnosis all of it it's his
voice that i've forgotten
his family cried because it's something
that's so personal it's such a key part
of your identity
there is still more work to be done to
make this technology more widely
accessible to als patients it's tricky
at the moment because the augmentative
communication devices that people like
professor stephen hawking used to speak
are generally not connected to the
internet and unfortunately these models
are far too large to be run locally on a
device so you must be able to quickly
send data through to a server to get
them to work that's a work stream that
definitely is being invested in and i
think in time these people will be able
to have these voices on their device so
that they can use to communicate every
day
[Music]
while speech synthesis is a very
personal way in which ai is beginning to
touch some people's lives there are
other projects concerned with something
that affects all of us
whether we like it or not
the weather
deepmind has recently teamed up with
researchers at the met office the uk's
national forecasting service perhaps
when people think of the uk met office
they think of michael fish this
meteorologist who famously predicted no
storm back before i was born this is dr
niall robinson from the met office
he is talking about an infamous case of
a weather forecaster who in 1987 assured
viewers that there was no hurricane on
the way good afternoon to earlier on
today apparently a woman rang the bbc
and said she heard that there was a
hurricane on the way well if you're
watching don't worry there isn't but
having said that actually
the great storm as it came to be known
turned out to be the worst storm to hit
southeast england in three centuries
nowadays of course weather forecasting
is based on phenomenally sophisticated
mathematical models that churn through
eye-watering amounts of data
we have one of the world's most powerful
supercomputers for analyzing the physics
of what's going on in the atmosphere to
make our weather forecasts
the halls where those supercomputers
exist their football pitch size
but these models do have their
limitations
so traditional weather forecasting
approaches have a real sweet spot about
a couple of hours in the future to maybe
a few days in the future
but a lot of decisions need to be made
on a shorter time scale than that
this shorter term weather forecasting is
known
as now casting
now casting is the problem of predicting
where is it going to rain and how much
just a short window into the future
this is the voice of deepmind's raia hadsell
so we're talking just is it going to
rain over my house in the next 30
minutes up to a couple of hours into the
future and predicting at a pretty high
resolution where is it going to rain
what are the real benefits of being able
to know what's going to happen in the
next hour the dream here is to be able
to warn people before really extreme
flooding events so that they can take
action like evacuation
there's been a few notable examples over
the last few years of these really
extreme rain events in the uk the
flooding at boss castle and coverage in
the south west it is these pictures now
with the vehicles bobbing around in them
just just floating along like corks
which
actually under climate change one of the
things that we're reasonably confident
is going to happen more in the future is
the rainfall is going to become more
extreme
the problem here is that the traditional
physics-based forecasting models involve
so much number crunching inside that
football stadium-sized supercomputer
that by the time their forecast is ready
it's already out of date
so researchers use other statistical
methods for their short-term forecasts
including a technique called optical
flow a computer vision method developed
in the 1940s which tracks the movement
of air over a two-dimensional image
it looks at the current state of clouds
and precipitation and then it tries to
follow those streamlines to kind of
extrapolate where it thinks those clouds
are going to go in the future it's not
an unreasonable place to start but it's
quite a sort of first order
approximation of the problem
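a rough sketch of that optical-flow extrapolation (a standard opencv recipe, not the met office's production code):

```python
import cv2
import numpy as np

def nowcast(prev_frame, curr_frame, steps=6):
    """Advect the latest radar frame along the estimated motion field.

    Both inputs are single-channel 8-bit radar images; each step plays the
    "rainfall video" one frame further forward.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # semi-Lagrangian step: each pixel samples from where the flow came from
    map_x, map_y = xs - flow[..., 0], ys - flow[..., 1]
    frame = curr_frame
    for _ in range(steps):
        frame = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    return frame  # purely advective: storms move, but never grow or decay
```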
and then one day raia hadsell was at a
chance meeting in exeter chatting to
some people from the met office
when she realized that this description
of clouds moving in a particular
direction across a screen
rang a bell
it was startlingly similar to a well
trodden problem in deep learning
video prediction is an area of research
where you take a video and then you just
try to predict what the next few frames
in that video are going to be
so if i see somebody
swinging a cricket bat
and then you stop that for a moment i
can sort of say ah what's going to
happen next is that that cricket bat is
going to continue to swing through
and you can think about rainfall as
being a video that's playing over time
where the radar provides this
information layer over a map of say the
uk as the rain moves along maybe a storm
comes up or a storm dissipates and so we
thought that doing that short-term
prediction into the future could be
solved by using video prediction neural
networks
but before the neural network could be
used to predict precipitation it needed
to be trained
for that the met office had their
rainfall radar a set of instruments
which use electromagnetic pulses to
measure the location and intensity of
rainfall
we got about a year's worth of radar
data across the uk
and turned this into something that
looked like a movie like a video playing
and we started training different types
of architectures to just predict the
next few frames of video
and this worked all right but what
tended to happen is that the neural
network just predicted a blurred out
future so we started looking at other
methods to solve this
and the method that has worked extremely
well
is to use a generative adversarial
network
this is usually talked about more in the
context of deep fakes because this is a
method that can be used to produce
extremely realistic fake videos
and this has been a really worrisome
actually use of ai technology
and so it was actually really nice to
see that this was an application of gans
a gan or generative adversarial network
is a clever way of having two neural
networks compete with each other to
produce the most realistic images
it's as though you have a pairing of a
counterfeiter and a police officer
the counterfeiter tries to produce an
image that will fool the police officer
and if it's not good enough they'll get
caught and have to try again
over time that competition gradually
increases the accuracy of those images
in this case those images are
predictions of weather in the near
future
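in code, the counterfeiter/police-officer loop is the standard gan recipe (a generic pytorch-style sketch, not deepmind's actual nowcasting model):

```python
import torch

G, D = Generator(), Discriminator()        # hypothetical network classes
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = torch.nn.BCEWithLogitsLoss()

for past_radar, real_future in radar_loader:   # hypothetical data loader
    fake_future = G(past_radar)
    real_label = torch.ones(real_future.size(0), 1)
    fake_label = torch.zeros(real_future.size(0), 1)

    # police officer: learn to separate real futures from forged ones
    d_loss = (bce(D(real_future), real_label) +
              bce(D(fake_future.detach()), fake_label))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # counterfeiter: learn to produce futures the officer calls real
    g_loss = bce(D(fake_future), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```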
and using this technique the results
were startling
instead of producing blurred out fields
of rain it produced very crisp lines of
rain and realistic movements of storms
across the uk
to test out exactly how good this ai now
casting was compared to the optical flow
method
researchers fed in a radar image of
precipitation patterns over scotland and
asked the neural network to generate
predictions of what the rain pattern
would look like
over the next 90 minutes
they compared these predictions to
observations of how the actual weather
turned out
when i first saw these images it was
unclear to me which ones were the
observations and which ones were the
predictions
i said
are these the same images they were so
close it was remarkable what you thought
someone had got mixed up and just given
you the same picture twice they looked
very similar
it wasn't perfect but it was very
realistic
the structure of these clouds ends up
being an important predictor of exactly
how heavy rainfall will be
where and when
and once that precipitation hits the
ground a different type of model takes
over
working out how water will run downhills
and collect in valleys potentially
causing flooding
one of the advantages of an outcasting
system like this is that it could mean
that the output is more useful for those
flooding models because
the actual predictions that make have
this more accurate fine structure which
means that when it goes into a flooding
model it hopefully could lead on to more
accurate flooding predictions
we're not necessarily quite there yet
with this system but it certainly has
moved us another step along
the gan model doesn't just provide one
prediction it can provide many different
estimates of what's going to happen in
the future
[Music]
and by inspecting those different
possibilities we can get an
understanding of what the different
extremes of the scenarios are which is
really valuable when we're trying to
help people make balanced decisions
about what they're going to do
the people who need to make the
decisions are met office meteorologists
they are the ones who assess all of the
information available and construct the
final forecast
neil surveyed them to find out whether
they preferred using the ai tool to
traditional methods
they really regularly chose this new
deep learning methodology over the
traditional methodology which is a
really good sign
the now casting project represents a
first step in how ai could be used in
weather forecasting
but there are still important challenges
to iron out for instance because these
machine learning models are based on
what has gone before
they're not good at forecasting really
unusual extreme weather events
as forecasters the more rare an event is
the more interested we are in
forecasting it and that's one of the
great things about the traditional way
we do weather forecasting i think it's
also why in the view of meteorologist
deep learning is never going to replace
the physics-based models
i actually think the future is really
for a hybrid approach where we're able
to take the physical knowledge and
combine that with the
power of deep learning methodologies
because of these limitations and the
fact that neural networks cannot explain
all of their predictions in detail they
aren't yet being incorporated into the
met office's official forecasts but the
collaboration with deepmind has provided
a glimpse of a future in which
artificial intelligence technologies
augment the capabilities of trained
meteorologists
[Music]
of course being able to make predictions
is useful for all kinds of real world
problems
but how does ai fare in a game that has
adored around the world
for its glorious unpredictability
last year deepmind published a paper on
how ai could transform football in
collaboration with liverpool football
club here in the uk
why liverpool you ask
well who's your favorite football team
liverpool i love liverpool i watch every
one of their matches
turns out that deepmind ceo demis
hassabis is a lifelong fan of the reds
now who would have thought it
i know no one will believe this but they
approached us of course we jumped at the
chance and they happen to have one of
the best analytics teams in the world of
sport currently and of course we got a
tour of the training ground which we
needed obviously to have as part of the
collaboration were you free that day i
was happen to be free that day
miraculously
of course crunching data to analyze a
game like football is nothing new
what has changed in recent years though
is the sheer amount of data available
everything from computer vision
algorithms monitoring players positions
and motion sensors picking up on players
movements
karl tuyls one of the authors on the
football paper is based at deepmind's
paris office
over the next five years one of the big
ambitions of this football work is to
build a prototype of an ai system known
as an automated video assistant coach or
avac for short
[Music]
this is basically a system that
seamlessly integrates several data
modalities like raw video footage
tracking data event stream data all
sorts of sensors that the players are
wearing to assist coaches with their
decision making
there are a few different techniques
that are useful here there's computer
vision which can detect what's going on
in footage from a football game
then there's game theory which is all
about maximizing your advantage over an
opponent
and then there are statistical learning
methods which can hunt for patterns in
previous games
put them together and this automated
coach could make counter factual
predictions of what would happen in the
game if a particular tactical change is
made or a certain player is replaced
say for argument's sake that liverpool
fc are up against arch rivals manchester
city in a big premier league game
liverpool's coach could use the ai
system to monitor the match and provide
tactical feedback in real time
so a coach might say to the system hey
avac what will happen if i
move sella from
winning a position to a striker position
or we would move fabinho from defense to
midfield
so sort of this counterfactual
questions that are really interesting
for a football coach can we play that
out based on what we've seen in the
first half
the coach could then be shown a
simplified simulation a video with dots
moving across the pitch to indicate
possible player trajectories in
different scenarios
the idea here is not to replace human
analysts but to complement them with
another powerful analytical tool
the avac
is just going to give advice
and is going to say like what it
believes is maybe a good action to take
and in the end it's up to the coach and
it's of course also still up to the
players to act upon that
[Music]
it's not just during a game that such a
system could be useful it could help in
post-match training too highlighting the
exact moment when it would have been
better for a player to pass rather than
taking a shot at goal
although deepmind's research is
currently focused on new analytics tools
for coaches and teams
karl tuyls believes there are also ways
in which ai could enhance the experience
of football fans
currently when a fan watches a game on
tv there will be like
expert commentary but with new
technology this could become
personalized expert depending on your
own interests maybe what sort of
questions you would ask the ai about
your game for example on tactics
maybe in a more distant future fans will
have access to a screen in the stadium
or vr that augments their experience so
for example getting a feel for the pitch
[Music]
it's easy to see how a more personalized
experience for fans and improved
predictions for teams could have an
impact on football in future
but as i said before football is a
joyously unpredictable game
i don't think we will be able to predict
outcomes of a game accurately at any
point in time and this simply because
the decision making of pitch by coaches
and on pitch by the players is still in
hands of humans right so the signal is
still noisy
and there are problems with relying too
much on ai for what is at heart a deeply
human game
in 2020 the scottish football team
inverness caledonian thistle fc
announced that it would live stream its
games via cameras which automatically
track the football to give viewers the
best view of the action
during one game the automatic camera
seemed much more interested in following
the linesman around the pitch
turned out it had mistaken the
linesman's bald head for a football
and there are concerns that computer
vision systems like these might be much
better at tracking some players than
others
the current systems don't capture the
events that happen in women's sports as
well as they do for men's sports
here's jackson brochier another author
on the football paper so even where
we're trying to do proactive research on
women's data in an equal way to men's
data the labels that identify what's
happening in the videos that we use for
the training are actually much less
accurate
to those clued up about the problem of
bias in ai this might sound like a
familiar story in order to get really
good at analyzing the performance of
football players and teams an ai system
would need to watch hundreds of hours of
video footage from football matches
all of this data then needs to be
manually annotated to tell the system
what is going on in each frame
the trouble is when it comes to women's
football there is not nearly as much
annotated data to train on you might be
wondering why would the gender of a
football player even be relevant here
but as we've seen in numerous other ai
systems sometimes even small differences
perhaps the body frame of the players
could be enough to mean that the ai's
predictions on those games would end up
being less accurate
there is a broader point here currently
ai systems are only as good as the data
they're trained on which means if a
particular group is missing from your
data set the implications can be huge
when they first released phones that
would unlock from your face the images
they used to train those algorithms
if they were more people of white skin
versus black skin then it learned how to
identify those faces better
what we want to do on sports side is
make sure that the solutions that we're
developing are not biased to
gender or
skin tone or any other variances in the
volume of data that we're learning from
researchers are currently considering
technical solutions to address the lack
of data from women's football
but these are specific fixes to a much
larger problem of bias in ai systems and
as much as those working here believe
firmly in the benefits of deploying ai
in the real world there are also
potentially unwelcome consequences
to new technologies that have to be
carefully navigated
in the next episode of the deepmind
podcast we'll be taking a closer look at
the efforts at deepmind to make sure
that when ai reaches the real world it
works for everyone
we know that periods of history have
caused harms to specific communities
right and if we look at modern
technology through that lens we see very
similar patterns and certain uses of ai
and that is all for this episode gotta
run because the forecast says reigns on
the way i'll leave the credits to my
wavenet voice shall i
deepmind the podcast is presented by
hannah fry
special thanks for this episode go to
norman casagrande the engineer at
deepmind who found the time to create
hannah's wavenet voice
the series producer is dan hardoon of
whistle down productions
until next time goodbye
you |
96a00bd8-5b5b-4d9f-8307-39fe90062ee5 | trentmkelly/LessWrong-43k | LessWrong | Understanding rationality vs. ideology debates
[This was originally intended as a Facebook post, but it grew to the point where it felt way too long for that.]
For a while now I've been trying to wrap my head around what I'll loosely call "rationality vs. ideology" debates. Very roughly, these are debates where at least at first glance one side seems to be saying that ideology is more important than rationality in some way (or that rationality is at least partly defined in ideological terms), and the other side is saying the reverse. I'm particularly interested in debates where there seem to be very thoughtful and well-meaning people on both sides. My personal intuition is strongly towards the "rationality-first" approach, at least for many such debates, so for a long time I've felt confused about trying to understand where the "ideology-first" side is coming from.
The rest of this post is where I've gotten to so far in my attempts to understand this, although it still feels incomplete. Interested to hear what other people think of it.
----------------------------------------
If we get a bit more fine-grained about this, it seems to me like there are several different types of debates that might be going on. Unfortunately though it's not always clear which type is actually involved in any given debate, and it can easily be several types at the same time.
1) Debates about epistemology
1a) Debates over the nature of truth
Some approaches to truth say that ideology, pragmatic considerations, or the like are at least partially what makes something true or false, while others strongly disagree. Proponents of ideology- or pragmatically-based approaches might argue that all epistemic approaches are inescapably based on "ideology" of one sort or another, so explicitly tying a theory of truth to ideological or pragmatic considerations shouldn't in itself make the theory any less plausible than supposedly evidence-based approaches. (I once wrote a short essay about this in relation to religious beliefs, but I thin |
1f4938d5-9eb6-48f8-be15-e55c2b46158c | trentmkelly/LessWrong-43k | LessWrong | Dark, not colorless
The last arc of posts has been about how to handle a dour universe. Become unable to despair, learn to see the darkness rather than flinching from it, learn to choose between bad and worse without suffering. Learn to live in a grim world without becoming grim yourself, learn to hear bad news without suffering, and stop needing to know your actions were acceptable. Come to terms with the fact you may lose, use the darkness as a source of fuel, and let go of dreams of total victory. These are the tools I use to tap into intrinsic motivation, in a precarious world where the problems are larger than I am.
Where others see a hurting world and feel guilty for not doing enough to help it, I see a hurting world and feed my own resolve. Instead of feeling guilty for not working until I drop, I recognize the psychological impossibility and resolve to do everything I can within my mortal constraints. For me, at least, this internal drive is more robust and reliable than guilt motivation.
This brings us to the end of the penultimate arc of the "replacing guilt" series of posts, which I began many months ago, and takes us into the final arc. The first arc was about addressing the listless guilt that comes from ignoring a part of yourself that wants to be doing something more. The second arc was about eliminating the feeling of obligation, and fighting for something you care about only because you care about it. The third arc was about coming to terms with your limitations and learning to optimize within them, rather than feeling guilty because of them. This post concludes the fourth arc, about living in a dark universe and tapping into resolve instead of guilt.
The fifth and final arc is about what you do next. Once you've removed guilt and replaced it with intrinsic drive — both cold resolve and hot desire to make the future bright — what do you do next? What thought patterns allow one to turn these feelings into actions, rather than feelings of frustration and impotence?
I |
5b3756ff-46a3-4237-8570-139aaff6d222 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Verifying vNM-rationality requires an ontology
**Result**
It is impossible to verify that an agent is [vNM-rational](https://wiki.lesswrong.com/wiki/Expected_utility) by observing its actions without access to the domain of its utility function.
**Motivation**
Alphonso and Beatriz both go the market to buy fruit.
Alphonso prefers grapes to oranges.
He fills his basket with grapes and pays for them.
Beatriz carefully picks through the fruit and purchases some oranges and some grapes.
Callisto arrives with a package of grapes.
"Say, Beatriz, would you like to trade some of your oranges for this package of grapes?" Callisto offers.
"Gladly." Beatriz replies, exchanging some of her oranges for the grapes.
A few moments later, Alphonso notices Beatriz giving Deion some grapes in exchange for some oranges.
"You are acting irrationally, Beatriz!" Alphonso exclaims. "Your unstable preference between oranges and grapes makes it possible for a malicious agent to exploit you and exhaust your entire grocery budget!
"Ah, but I am acting rationally." Beatriz replied with a smile. "I prefer fruit that is fresh enough to last more than seven days. Thus, I trade away fruit that will spoil before that time."
**Explanation**
Consider an agent A.
We are interested in verifying whether or not A is vNM-rational.
However, we are only able to observe A's decisions without any access to the domain of A's utility function.
Without this access, it is impossible to distinguish between vNM-irrational choices (i.e. choices that violate one of the axioms of vNM-rationality) and choices that are vNM-rational but made under an unexpected ontology.
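As a concrete sketch of this point (illustrative Python with invented outcome descriptions and utilities, not part of the formal result): Beatriz's two trades form a preference cycle under one ontology, yet are rationalized by a single utility function under another.

```python
# Invented outcome descriptions and utilities, purely for illustration.
# Each fruit instance is (kind, days until it spoils).
gave_1, got_1 = ("orange", 3), ("grape", 10)   # trade 1: orange -> grape
gave_2, got_2 = ("grape", 2), ("orange", 12)   # trade 2: grape -> orange

def outcome_a(fruit):
    # Ontology A sees only the kind, so the two trades imply
    # grape > orange and orange > grape: an apparent cycle.
    kind, _days = fruit
    return kind

def outcome_b(fruit):
    # Ontology B sees (kind, lasts at least seven days).
    kind, days = fruit
    return (kind, days >= 7)

def utility_b(outcome):
    _kind, fresh = outcome
    return 1.0 if fresh else 0.0   # Beatriz only cares about freshness

# Under ontology B, one fixed utility function rationalizes both trades.
for gave, got in [(gave_1, got_1), (gave_2, got_2)]:
    assert utility_b(outcome_b(got)) > utility_b(outcome_b(gave))
```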
In other words, we need to know how A perceives outcomes of the world before we can verify that A's preferences over those outcomes are vNM-rational. |
879d77c4-a291-4858-b7a7-b49b63cde253 | trentmkelly/LessWrong-43k | LessWrong | The Noddy problem
An episode of the Noddy animated series has the following plot.
Noddy needs to go pick up Martha Monkey at the station. But it's such a nice, sunny day that he would prefer to play around outside. He gets an idea to solve this dilemma. He casts a duplication spell on himself and his car and tells the duplicate to go fetch Martha while he goes out to play. Later, Noddy is out having fun when he suddenly spots his duplicate. It turns out that the duplicate also preferred playing outside to doing the errand so he also cast a duplication spell. Then they see another duplicate, and another...
I think this story makes for a nice simple illustration of one of our perennial decision-theoretic issues: When making decisions you should take into account that agents identical to yourself will make the same decision in the same situation. A common real-life example of the Noddy problem is when we try to pawn off our dietary problems onto our future selves. |
38b4e9f3-bfec-419a-80a7-ea470e4e8d25 | trentmkelly/LessWrong-43k | LessWrong | Assuming we've solved X, could we do Y...
The year is 1933. Leó Szilárd has just hypothesised the nuclear chain reaction. Worried researchers from proto-MIRI or proto-FHI ask themselves "assuming we've solved the issue of nuclear chain reactions in practice, could we build a nuclear bomb out of it?"
Well, what do we mean by "assuming we've solved the issue of nuclear chain reactions"? Does it mean that "we have some detailed plans for viable nuclear bombs, including all the calculations needed to make them work, and everything in the plans is doable by a rich industrial state"? In that case, the answer to "could we build a nuclear bomb out of it?" is a simple and trivial yes.
Alternatively, are we simply assuming "there exists a collection of matter that supports a chain reaction"? In which case, note that the assumption is (almost) completely useless. In order to figure out whether a nuclear bomb is buildable, we still need to figure out all the details of chain reactions - that assumption has bought us nothing.
Assuming human values...
At the recent AI safety unconference, David Krueger wanted to test, empirically, whether debate methods could be used for creating aligned AIs. At some point in the discussion, he said "let's assume the question of defining human values is solved", wanting to move on to whether a debate-based AI could then safely implement it.
But as above, when we assume that an underdefined definition problem (human values) is solved, we have to be very careful what we mean - the assumption might be useless, or might be too strong, and end up solving the implementation problem entirely.
In the conversation with David, we were imagining a definition of human values related to what humans would answer if we could reflectively ponder specific questions for thousands of years. One could object to that definition on the grounds that people can be coerced or tricked into giving the answers that the AI might want - hence the circumstances of that pondering are critical.
If we assume X="human |
4e156a20-67a3-464e-ab95-13d20c86cca8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Francisco Meetup: When to trust experts
Discussion article for the meetup : San Francisco Meetup: When to trust experts
WHEN: 24 August 2015 06:15:00PM (-0700)
WHERE: 1061 Market St #4, San Francisco, CA 94103
We'll be meeting to talk about when to trust what experts say, and what to do when there are no experts. There's been some discussion about this on the internet recently: Topher and Scott for example.
I can be reached at 301-458-0764 if you need help getting in. As always, feel free to show up late. |
de969681-535d-49de-a513-6dd67979ab22 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Playful Interactions for Representation Learning.
1 Introduction
---------------
Imitation learning has proven to be a powerful approach to learn complex robotic skills from visual observations [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)]. Recent works have shown how simple approaches like behavior cloning can reliably replicate manipulation behaviors without needing explicit reward feedback [[5](#bib.bib5), [6](#bib.bib6), [7](#bib.bib7)]. However, such methods are notoriously data hungry, often requiring 100-1000 demonstrations during training. These demonstrations further need to be collected across a wide variety of diverse environments to prevent overfitting to narrow distributions of environments. This paradigm of visual imitation becomes even less practical when we need to learn a multitude of diverse skills for our robots.
But why does visual imitation require such large amounts of data? One hypothesis is that the imitated policy not only needs to learn the desired behavior, but also the appropriate low-dimensional representation for the high-dimensional visual inputs. Hence one path to efficient visual imitation is to reduce the burden of representation learning by using pretrained representation learning models. Using such pretraining both from labelled and unlabelled data is routine in Computer Vision [[8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10)]. However, in the context of robotics, obtaining reliable pretraining is not straightforward. Standard vision datasets [[11](#bib.bib11), [12](#bib.bib12), [13](#bib.bib13)] contain predominantly outdoor images with various object-centric biases, while standard robotic datasets [[14](#bib.bib14), [15](#bib.bib15)] are often lab-specific and contain their own robot-specific biases. In fact, even finetuning from data collected through other tasks on the same robot may not be amenable due to distributional mismatch [[16](#bib.bib16), [17](#bib.bib17), [18](#bib.bib18)]. This brings us to our central question – how can we get data that matches the visual distribution of a given robot?
To answer this, we take inspiration from research in human development and look at an alternate form of data: play [[19](#bib.bib19), [20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22), [23](#bib.bib23)].
From a pure data perspective, playful interactions possess two key qualities. First, they are cheap to obtain, since play is task-agnostic and does not need extensive curation or instruction for data collectors. Second, they are naturally diverse, since playful interactions can be easily collected in unstructured environments. But how does one collect and learn from playful interactions for robots?
In this work, we present a framework for representation learning that can scalably collect and learn from playful interactions. First, we use reacher-grabber tools [[24](#bib.bib24)] built on top of DemoAT [[25](#bib.bib25)] to collect play data in the wild. We simply instruct users to “do whatever they want with this tool”. With around two hours of self-guided play, we obtain 30,000 frames of playful interaction data in diverse environments. Equipped with this data, we then use a novel self-supervised learning approach to learn a visual encoder that can extract visual representations. Since play data neither solves a specific task nor operates in a single environment, the obtained visual encoder is task-agnostic and can operate on diverse visual inputs.
To demonstrate the usefulness of representations learned for visual imitation, we probe the visual encoder through downstream task-specific finetuning. In this work we consider two downstream tasks, pushing and stacking. Both tasks come with a small number of expert demonstrations collected on the same reacher-grabber setup, with data taken from [[25](#bib.bib25)]. On both tasks, we report significant improvements in behavior cloning metrics and outperform popular methods such as imitation from scratch [[26](#bib.bib26)], data augmentation based imitation [[25](#bib.bib25)], ImageNet based pretraining and multi-task transfer [[27](#bib.bib27), [28](#bib.bib28)]. Against our strongest baseline, ImageNet pretraining, we show that play pretraining achieves up to 27% better MSE performance during test time. Interestingly, when pretrained on top of ImageNet initialization, we achieve up to 38% better performance than training from scratch.

Figure 1: Our method uses around two hours of unlabeled, diverse, and unstructured playful interaction data to learn meaningful representations for downstream manipulation tasks such as Pushing and Stacking.
In summary, we present three contributions in this work. First, we propose a framework for collecting playful visual interaction data in the wild. Second, we use self-prediction based representation learning to learn meaningful task-agnostic visual representations. Third, we show that our representations learned on around 2 hours of play can outperform standard imitation-based approaches on two manipulation tasks, pushing and stacking. Although the use of play data has been previously explored in the context of simulated environments [[29](#bib.bib29)], to our knowledge this work is the first that studies the use of this play data in real-world environments. Our playful interaction data, downstream task data, and learned models are publicly released on our website.
2 Related Work
---------------
### 2.1 Imitation Learning
Imitation learning is a common technique used to learn skills and behaviors by observing humans [[19](#bib.bib19), [30](#bib.bib30), [31](#bib.bib31), [32](#bib.bib32)]. It has been successful in a wide range of robotic tasks such as pushing, stacking, and grasping [[1](#bib.bib1), [33](#bib.bib33)]. Inverse reinforcement learning (IRL) [[34](#bib.bib34)] and behavior cloning (BC) [[26](#bib.bib26)] are two broad approaches in imitation learning. For a more comprehensive review of imitation learning, we point the readers to [[35](#bib.bib35), [36](#bib.bib36)].
Behavior cloning [[26](#bib.bib26)] is widely used for its simplicity and ability to generalize to novel scenarios. However, large datasets of expert demonstrations are needed for optimal performance. Demonstrations collected from a different viewpoint or embodiment require complex methods such as transfer learning to overcome the domain gap [[2](#bib.bib2), [37](#bib.bib37)] but are easy to obtain. Large-scale manipulation datasets collected in this manner include [[15](#bib.bib15), [14](#bib.bib14)]. Recent efforts in eliminating this domain gap include teleoperation methods [[1](#bib.bib1)] and assistive tools [[24](#bib.bib24), [25](#bib.bib25)]. Using these assistive tools has shown to be an effective and accessible method for collecting diverse demonstrations at scale. In this work, we adopt the DemoAT [[25](#bib.bib25)] framework to collect expert demonstrations and imitate via behavioral cloning.
### 2.2 Learning from Play
While work on learning from play in robotics has been relatively limited for real robotic tasks, substantial work has been done in simulation. Play-LMP [[29](#bib.bib29)] has shown that supervision from teleoperated play data can effectively scale up multi-task learning. In their work, a single goal-conditioned policy is able to perform a variety of user-specified tasks. This demonstrates that playful interactions can learn latent plans capable of task discovery, composition, as well as emergent retrying. In contrast to applying play to planning and multitask scenarios, our work focuses on learning visual representations for imitation. We aim to decrease the amount of task-specific, labeled data needed to learn generalizable policies for manipulation tasks. Furthermore, our definition of play extends to undirected movement beyond object manipulation, incorporating high-dimensional visual observations and operating in the real world.
### 2.3 Self-supervised Representation Learning
Representation learning has long been used in Computer Vision, but interest in this learning technique has recently grown within robotics due to the availability of unlabeled data and its effectiveness in learning tasks. The goal of representation learning is to extract features to improve performance in downstream tasks. The key idea is to exploit information from data without explicit labeling. Unlabeled data is generally first trained on one or more pretext tasks to learn a representation. These tasks can include predicting image rotations and distortions, patches, frame sequence prediction, or instance invariances [[8](#bib.bib8), [38](#bib.bib38), [39](#bib.bib39), [40](#bib.bib40), [10](#bib.bib10), [9](#bib.bib9), [41](#bib.bib41)]. The performance of this pretext task is typically discarded, and the pretrained model is used to learn different downstream tasks. Some works have proposed simultaneously training these pretext tasks alongside the main objective [[42](#bib.bib42), [43](#bib.bib43)]. The idea behind pretraining on pretext tasks is that the learned representations have useful structural meanings and are relevant to downstream tasks. A number of works [[10](#bib.bib10), [9](#bib.bib9), [44](#bib.bib44), [45](#bib.bib45), [46](#bib.bib46)] have demonstrated state-of-the-art performance with unsupervised representation learning. We follow a BYOL-style [[44](#bib.bib44)] framework in our experiments since it provided better performance compared to SimCLR [[10](#bib.bib10)], MoCo [[9](#bib.bib9)], and Swav [[45](#bib.bib45)].
### 2.4 Representation Learning in Robotics
Learning directly from high-dimensional visual inputs is challenging and data-inefficient, even though such inputs contain rich information needed for learning. Representation learning is a popular approach which learns low-dimensional latent representations from raw images. Several recent works [[47](#bib.bib47), [48](#bib.bib48), [49](#bib.bib49)] have explored learning latent representations from images in the context of robotic tasks. Finn et al. [[50](#bib.bib50)] combines a deep spatial Autoencoder with RL to learn a state space representation for robotic manipulation tasks. Jonschkowski and Brock [[51](#bib.bib51)] focuses on learning robotic priors by first learning a 2D representation from random actions taken by the robot, followed by a standard reinforcement learning policy. Hoeller et al. [[52](#bib.bib52)] uses a sequence of images to learn a latent representation for downstream navigation tasks, trained via an LSTM network. In our work, we combine this idea with learning from play to demonstrate the effectiveness of representations based on real-world playful interactions for downstream manipulation tasks.
3 Approach
-----------
### 3.1 Playful Interactions
We define “playful interactions” as interactions of any kind in a real-world environment using the DemoAT [[25](#bib.bib25)] framework. We asked four people to collect data, and these users were untrained and given no information about the downstream tasks. The only guideline we gave data collectors was to “walk around with the reacher-grabber tool and do whatever you want”. This includes walking and exploring the space, picking up and placing objects, as well as accidental drops and undirected actions. This style of data is very different from our task-specific data, which only consists of expert, goal-oriented trajectories. Playful interaction data by design is free-form, so there are no categories associated with the data. This kind of unstructured data is useful because it contains exploratory and sub-optimal behaviors that are critical to learning generalizable and robust representations. More importantly, it is much easier to obtain. Since users do not need to be given specific instructions, data collection can be done by any individual, even young children. Furthermore, existing data collected using reacher-grabbers for other purposes can also serve as ”playful interaction” data.

Figure 2: We show some examples of the playful interaction data. Some trajectories are more free-form and undirected, such as walking around in an open space. Others contain repeated actions, such as erasing a whiteboard or playing Jenga. Many also contain suboptimal behaviors, such as dropping a bottle and knocking it over, and an object sliding out of the gripper because it wasn’t grasped correctly. Most importantly, these play trajectories are collected without specific instructions, making it diverse and easy to obtain.
Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Playful Interactions for Representation Learning") and Fig. [2](#S3.F2 "Figure 2 ‣ 3.1 Playful Interactions ‣ 3 Approach ‣ Playful Interactions for Representation Learning") display a few examples of playful interaction trajectories. Playful interactions, by nature, will include actions that are very similar to the pushing and stacking task, such as picking up objects. However, there are many more interactions rarely seen in typical demonstration data, including opening and closing doors, erasing whiteboards, playing with blocks, and transitioning from outdoor to indoor settings. We collect 110 minutes of playful interaction data from four different individuals and 19 different locations. In total, we have around 30,000 frames of playful interaction data. Each data collector used the same setup consisting of a reacher grabber stick and a GoPro camera and mount. Our guidelines are flexible enough that data collected from users are quite diverse. For example, demonstrations range from just a few seconds long to up to 18 minutes. Shorter demonstrations tend to be more task-based, while longer demonstrations typically involve many repeated movements and include more undirected interactions such as walking across a room. Our collected playful interaction data will be available on our website.
### 3.2 Learning Visual Representations from Play
Several prior works [[9](#bib.bib9), [10](#bib.bib10), [53](#bib.bib53), [44](#bib.bib44)] have demonstrated success in pretraining models for downstream visual classification tasks. In our work, we aim to show that pretraining models with playful interaction data is effective for downstream robotics tasks. We choose to use a BYOL [[44](#bib.bib44)] style framework to pretrain and learn a visual representation. Unlike the instance-based method used in BYOL, we explore a time-based [[54](#bib.bib54), [55](#bib.bib55)] approach to leverage the temporal association available in videos. Instead of augmenting a copy of the same frame, we augment a frame a few timesteps away in the same trajectory. Unlike [[54](#bib.bib54)], however, we do not require paired viewpoints of the same observation. We learn a representation purely from comparing observations from a single viewpoint at different timesteps. We find that a time-based approach is much more effective than the purely instance-based method used in BYOL.
We train visual encoders $q(\cdot)$ and $k(\cdot)$ for the query and keys respectively and use a momentum-based update for the query encoder. The query encoder $q(\cdot)$ and key encoder $k(\cdot)$ are identical convolutional neural networks. They each take in a single image $I_t \in \mathbb{R}^{3 \times 224 \times 224}$ and output a vector $v$. $I_t$, which is an augmented version of the frame at timestep $t$, is fed into the query encoder, and $I_{t+3}$, which is an augmented version of the frame at timestep $t+3$, is fed into the key encoder. We then feed $v$ into an MLP projection head $h(\cdot)$ and return the latent representation $x_t \in \mathbb{R}^{128}$ for each image. Then, we compute a simple L2 loss between these latent representations. The projection head $h(\cdot)$ is discarded after the self-supervised pretraining phase.
The play encoder architecture is as follows. Let $Ck$ denote a convolutional layer with $k$ filters and $Fk$ denote a fully connected layer of size $k$. The base encoder architecture we use for play pretraining is simply the first three convolutional layers of AlexNet, $C64\text{-}C192\text{-}C384$, followed by a pooling layer and an MLP projection head of size $F384\text{-}F128$. We find that pretraining only the first three convolutional layers rather than four or five improves the model's ability to learn and generalize during downstream task evaluation and is key to good performance. In Appendix [C.2](#A3.SS2 "C.2 Effect of Pretraining at Earlier Layers ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning"), we provide analysis of pretraining at different layers.
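A minimal sketch of this pretraining step is below (PyTorch; the layer sizes follow the text, while the kernel sizes, EMA coefficient, and all names are placeholder assumptions, and, as in standard BYOL-style setups, the key encoder here is the momentum copy):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_base_encoder():
    # C64-C192-C384 (AlexNet-style first three conv blocks), a pooling
    # layer, then the F384-F128 MLP projection head described above.
    return nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(384, 384), nn.ReLU(),
        nn.Linear(384, 128),
    )

query_encoder = make_base_encoder()            # trained by gradient descent
key_encoder = copy.deepcopy(query_encoder)     # updated only by momentum
for p in key_encoder.parameters():
    p.requires_grad = False

def time_based_loss(frames_t, frames_t_plus_3):
    # Both inputs are independently augmented frames from the same
    # trajectory, three timesteps apart.
    x_t = query_encoder(frames_t)
    with torch.no_grad():
        x_t3 = key_encoder(frames_t_plus_3)
    return F.mse_loss(x_t, x_t3)               # simple L2 between latents

@torch.no_grad()
def momentum_update(tau=0.99):                 # tau is a placeholder value
    for pq, pk in zip(query_encoder.parameters(), key_encoder.parameters()):
        pk.mul_(tau).add_((1.0 - tau) * pq)
```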
### 3.3 Downstream Learning
After training on playful interaction data to learn a meaningful representation, we use this representation for downstream manipulation tasks. Unlike other works that utilize self-supervised contrastive pretraining [[9](#bib.bib9), [10](#bib.bib10), [56](#bib.bib56), [44](#bib.bib44)], our network architecture for downstream tasks builds on top of the pretraining encoder and continues to update representation weights.
*Visual Encoder Architecture:*
The network architecture for downstream task learning consists of the base encoder used during pretraining followed by two additional convolutional layers and one projection layer: $C64\text{-}C192\text{-}C384\text{-}C256\text{-}C256$. Weights from every layer are updated during task learning. The encoder architecture is shown in Appendix [A](#A1 "Appendix A Play Encoder Architecture ‣ Playful Interactions for Representation Learning").
*Behavior Cloning:*
We learn a policy using behavior cloning [[57](#bib.bib57), [58](#bib.bib58)]. Each dataset contains observation-action pairs $D = \{(I_t, a_t)\}$, where $I_t$ is an image and $a_t$ is the action to get from $I_t$ to $I_{t+1}$. Our task-training model takes in an observation image $I_t \in \mathbb{R}^{3 \times 224 \times 224}$ and learns a function $f$ that maps observations $I_t$ to actions $a_t$. Action labels are the relative changes in pose across frames and are provided by the dataset. Thus, we can represent the dataset of expert task demonstrations as observation-action pairs $(o_t, \Delta x_t)_{t=0}^{T}$, where $\Delta x_t \in \mathbb{R}^{3}$.
Our objective is to minimize a combined direction and mean squared error (MSE) loss. MSE is computed on the predicted translation vectors.
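A minimal sketch of such a combined objective (our illustrative formulation: the direction term is one minus cosine similarity and the weighting is a placeholder, not necessarily the paper's exact loss):

```python
import torch.nn.functional as F

def bc_loss(pred_delta, true_delta, direction_weight=1.0):
    # pred_delta, true_delta: (batch, 3) predicted / ground-truth
    # translation vectors between consecutive frames.
    mse = F.mse_loss(pred_delta, true_delta)
    direction = (1.0 - F.cosine_similarity(pred_delta, true_delta, dim=-1)).mean()
    return mse + direction_weight * direction
```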
4 Experiments
--------------
We designed our experiments to address four key questions. First, does self-supervised pretraining with playful interactions capture a diverse set of environments to improve visual imitation? Second, are play representations better than representations learned on ImageNet? Third, would task-specific representations do as well as task-agnostic representations learned from play? Finally, can play representations be combined with other modes of pretraining to get better performance? In this section we assess the performance of our method on two manipulation tasks, pushing and stacking. We start by describing the foundation of our experiments, the task setup and baselines, then progress to experiments that demonstrate the advantages of pretraining on play. Finally, we dissect the model and convey best practices for pretraining in our ablation study.

Figure 3: Here we present qualitative results of our experiments for both tasks. Each row represents one trajectory. Each overlaid arrow on the images represents the action predicted by each policy. We only display the predicted translation in the transverse plane of the camera and omit up-down actions in our visualization. We note that because the camera is attached at a forward angle, an arrow pointing downwards (as seen in the stack trajectories) signifies an upward movement and vice versa for an upwards arrow. We can see that our method consistently predicts actions that are closer to the ground truth action compared to BC.
### 4.1 Downstream Task Setup
We evaluate our approach on two tasks, pushing and stacking. We use subsets of the 1000 pushing and stacking examples provided in [[25](#bib.bib25)].
The goal of the pushing task is to slide an object across a flat surface onto a red circle. The dataset includes demonstrations with around 20 different objects in many diverse scenes, which makes accurately manipulating objects especially challenging. Labels are action deltas between consecutive images, each consisting of a translation vector. The prehensile stacking task requires grasping an object and placing it onto another object. To avoid ambiguity during training, the closer object is always stacked above the farther object in the given expert demonstrations. Similar to the pushing task, demonstrations consist of a diverse set of objects and configurations, which significantly increases the difficulty of the task. Test-time demonstrations for each task are collected in completely different environments with new objects. We evaluate our method using MSE on 100 held-out video demonstrations for both tasks to better demonstrate the effectiveness of playful interaction pretraining in the absence of large-scale labeled data. Examples of pushing and stacking tasks are illustrated in Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Playful Interactions for Representation Learning"). We provide training details in Appendix [B](#A2 "Appendix B Training Details ‣ Playful Interactions for Representation Learning").
### 4.2 Baselines
We compare our results to competitive behavior cloning baselines. [[25](#bib.bib25)] has shown that employing data augmentations such as random jitters, crops, and rotations on top of naive behavior cloning is able to generalize much better to unseen environments during test time. Hence, we use behavior cloning with data augmentations as the base option. Note that every experiment employs the same set of data augmentations. Concretely, our baselines are as follows:
* *BC*: a behavior cloning policy trained from scratch, with data augmentations.
* *AE*: a baseline pretrained on playful interaction data via an Autoencoder rather than BYOL. It first learns a representation by minimizing a reconstruction loss, and the learned weights are then used for training downstream tasks.
* *VAE*: another generative modeling approach used for representation learning in several works [[59](#bib.bib59), [29](#bib.bib29)]. Similar to *AE*, this baseline is first pretrained using a VAE before its weights are loaded into the downstream task learning model.
* *PLAY*: the models we train with playful interaction data.
* *BC-OTHER*: an ImageNet-initialized baseline pretrained not on playful interaction data but on data from other tasks, to isolate how effective playful interaction data is for visual representation learning. Specifically, we pretrain on the stacking task for the downstream pushing task, and vice versa for the stacking task.
All baselines appended with *-I* are ImageNet-pretrained baselines, where we first load a model with weights trained for ImageNet classification, rather than initializing from scratch. We then use this pretrained model to run BC for our downstream manipulation tasks. *PLAY-I* is first loaded with ImageNet-pretrained weights, then pretrained on the play dataset, and finally evaluated on downstream tasks.
Unless explicitly noted, all experiments are evaluated on 100 held-out task-specific demonstrations collected in novel, unseen environments. We include performance of baselines trained with 200 demonstrations in Appendix [C.1](#A3.SS1 "C.1 Comparison with More Downstream Task Data ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning") to show how our method compares to BC-A baselines that train with twice the number of demonstrations.
We find that generative methods such as AE and VAE do not work well. In particular, these baselines, which are first pretrained with playful interaction data and then run on downstream tasks, perform similarly to the BC baseline. We find that both AE and VAE are unable to reach the same level of accuracy as our method during training, and thus do not perform as well on held-out data during test time. We also find that learning with instance discrimination does not improve performance and performs similarly to the BC baseline. This could be because BC itself already includes data augmentations, so instance-based pretraining does not add significant new information.
Table 1: Test Mean Squared Error on the Pushing and Stacking Tasks (lower is better).
| Task | BC | AE | VAE | PLAY | BC-I | AE-I | VAE-I | PLAY-I | BC-OTHER |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Push | 0.095 | 0.101 | 0.084 | 0.068 | 0.080 | 0.093 | 0.085 | 0.059 | 0.085 |
| Stack | 0.137 | 0.139 | 0.135 | 0.129 | 0.126 | 0.138 | 0.137 | 0.104 | 0.128 |
### 4.3 Does Training on Playful Interactions Lead to Good Representations?
To test whether self-supervised pretraining with playful interactions can learn a meaningful representation, we first train a model using our collected playful interaction data via BYOL. Then, we load the learned weights into our model to train on the downstream task. We train on 100 trajectories for both the pushing and stacking task. If our playful interactions can learn effective visual representations, we expect that this policy will outperform one where the downstream task is directly trained with BC from scratch. As shown in the first (BC) and fourth (PLAY) columns of Table [1](#S4.T1 "Table 1 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Playful Interactions for Representation Learning"), we see that our play model is able to achieve significantly better results, decreasing MSE from 0.095 to 0.068 in the pushing task and 0.137 to 0.129 in the stacking task. The performance gap is apparent when we visually compare actions between BC and our play model, as shown in Fig. [3](#S4.F3 "Figure 3 ‣ 4 Experiments ‣ Playful Interactions for Representation Learning").
### 4.4 How does Pretraining on Playful Interactions Compare to ImageNet Pretraining?
We study whether playful interactions provide enough diversity and information in the learned representation to compete with BC-I (BC with ImageNet pretraining). The BC-I baseline is trained on significantly more data, but does not leverage playful interaction supervision, so we hypothesize that it likely learns a representation better suited to vision-centric tasks. To test our hypothesis, we first train a randomly initialized model to learn a representation from playful interaction data (PLAY). Using this model, we then learn a BC policy on the pushing and stacking tasks. The baseline BC-I is trained directly on the tasks with ImageNet-pretrained weights. Our results are shown in the fourth (PLAY) and fifth (BC-I) columns of Table [1](#S4.T1 "Table 1 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Playful Interactions for Representation Learning"). We find that our method is more effective than ImageNet pretraining on pushing and comparable on stacking: comparing BC-I and PLAY, pushing MSE improves from 0.080 to 0.068, while stacking goes from 0.126 to 0.129. We provide qualitative results showing predicted actions on held-out test data in Fig. [3](#S4.F3 "Figure 3 ‣ 4 Experiments ‣ Playful Interactions for Representation Learning"). We further compare our method to the BC baseline trained on twice the number of demonstrations to evaluate whether playful interactions can reduce the number of demonstrations needed for good performance, which we discuss in Appendix [C.1](#A3.SS1 "C.1 Comparison with More Downstream Task Data ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning").
### 4.5 Does Pretraining on Task-Specific Data Perform Similarly to Pretraining on Play?
We further investigate whether the exploratory, task-agnostic nature of playful interaction data is crucial to the learned representation, or if pretraining on another task in the same action space can learn a similarly effective representation. To this end, we compare a model pretrained on playful interaction data (PLAY-I) and a model pretrained on a different task (BC-OTHER). Specifically, we test whether a playful-interaction-pretrained model outperforms a stack-pretrained model when trained on the pushing downstream task, and vice versa for the stacking task. We note that stacking and pushing have some structural similarity in actions, and that may improve those results. However, we find that when learning the pushing task, pretraining on the stacking data leads to no visible improvement. We hypothesize that the pretraining phase overfits to the stacking data, and thus does not learn a generalizable representation. We see similar results for the stacking task in the last two columns of Table [1](#S4.T1 "Table 1 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Playful Interactions for Representation Learning"), where pretraining on a different task-specific dataset does not help the model learn a good visual representation for training other downstream tasks. This further shows the effectiveness and importance of using playful interactions to learn a representation that can be used to efficiently learn downstream tasks.
### 4.6 Can Play Pretraining be Combined with ImageNet Pretraining to Learn Better Representations?
We also evaluate our method combined with state-of-the-art pretrained baselines. In this set of experiments, we demonstrate that by combining our play-pretrained model with ImageNet pretraining (PLAY-I), we are able to achieve even better performance. First, we initialize our model with ImageNet weights before pretraining on playful interaction data. We then train using BC on downstream tasks. As shown in the fifth and second to last columns of Table [1](#S4.T1 "Table 1 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Playful Interactions for Representation Learning"), PLAY-I performs significantly better than the BC-I baseline. Furthermore, PLAY-I, combined with ImageNet pretraining, outperforms PLAY (fourth column).
### 4.7 Ablations
To further understand the effects of play data and subsequent representation learning, we run a suite of ablations described in detail in Appendix [C](#A3 "Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning"). The most significant of these are highlighted below.
*Comparison with twice the amount of downstream task data:* Our method not only surpasses the BC baseline, but also beats the same BC-I baseline trained on 200 labeled trajectories for the pushing task. Our method, trained on only 100 demonstrations, reaches an MSE of 0.059, which is 14% better than BC with 200 demonstrations despite training on only half the amount of labeled data. Our stacking results behave similarly; we provide experimental results and further discussion in Appendix [C.1](#A3.SS1 "C.1 Comparison with More Downstream Task Data ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning").
*Effect of Pretraining at Earlier Layers*: We also study the effectiveness of representation learning with play at various layers of our model. We perform ablations over pretraining with playful interaction data up to the third, fourth, and fifth convolutional layers of the model. We find that pretraining representations on fewer initial layers and training later layers on the downstream task leads to significantly lower MSE. We provide results and more detailed analysis in Appendix [C.2](#A3.SS2 "C.2 Effect of Pretraining at Earlier Layers ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning").
*Amount of Play*: In addition, we analyze how performance scales with the amount of play data. We find diminishing improvements in MSE as more play data is added; see Appendix [C.3](#A3.SS3 "C.3 Amount of Play Data ‣ Appendix C Additional Ablation Studies ‣ Playful Interactions for Representation Learning") for details.
### 4.8 Connecting to Real Robot Results
The experimental results in this work are limited to offline MSE evaluations. However, to contextualize our MSE numbers with real-robot evaluations, we can roughly base our results on Young et al. [[25](#bib.bib25)]. They show that an MSE of 0.028 corresponds to an 87.5% success rate for the pushing task and an MSE of 0.06 corresponds to a 62.5% success rate for the stacking task on the real robot. Experimental results in our work have higher MSE since we operate in the few-shot setting and hence use only a tenth of the pushing and stacking training data used in [[25](#bib.bib25)]. In the context of the experiments in this work, the BC baselines achieve an MSE of 0.08 and 0.126 for the pushing and stacking tasks respectively (Table [1](#S4.T1 "Table 1 ‣ 4.2 Baselines ‣ 4 Experiments ‣ Playful Interactions for Representation Learning")), which both correspond to not being able to complete either task. The best performing models trained with our method achieve an MSE of 0.059 and 0.104 for the pushing and stacking tasks respectively, which roughly correspond to success rates of around 60% for pushing and 29% for stacking.
5 Conclusion
-------------
We have presented an approach for learning downstream manipulation tasks via self-supervised pretraining on easy-to-obtain playful interaction data. Our method improves the generalizability of imitation learning baselines beyond simple data augmentations and provides significant improvements over current baselines. We demonstrate that our pretraining method can achieve comparable results to behavior cloning baselines using just half of the labeled task data. The success of our technique on simple behavior cloning opens up many exciting avenues for further work incorporating play into more complex algorithms.
#### Acknowledgments
We thank Mandi, Andrew, and Daniel for helping us collect playful interaction data. We also gratefully acknowledge the support from The Open Philanthropy Project, Berkeley DeepDrive, Honda Research Institute and ONR. |
4624a43b-6863-4398-9e3e-5ff363ff050f | trentmkelly/LessWrong-43k | LessWrong | Interpersonal Approaches for X-Risk Education
Much of the AI research community remains unaware of the Alignment Problem (according to my personal experience), and I haven't seen much discussion about how to deliberately expand the community (all I've seen to this effect is Scott's A/B/C/D/E testing on alignment articles).
Expanding the number of people aware of (and ideally, working on) the alignment problem is a high-leverage activity: a constant amount of effort spent educating someone in exchange for a chance of recruiting an ally who will work hard at our sides. Another metric by which we should evaluate approaches is whether we have to convince or simply educate; professors and high-status researchers may be more dismissive (possibly due to the inside view, their wariness of strange-sounding ideas, and overconfidence in their long-term predictions), but their influence would be greater. On the other hand, a good friend in a CS or Math under-/post-graduate program may be more receptive.
In my case, I stumbled upon HP:MoR one year ago, read the Sequences, and then read more about Alignment and CEV. I appreciated that Alignment was a serious problem, but it wasn't until I got through Superintelligence that I realized it's basically The Problem. Being in the second year of my doctorate program, I didn't know whether I was "too late" to start learning the math, "too far behind" people like Eliezer to make a difference. What I did know is that everyone can't defect - we need people to put in the work, and we probably need substantially more people doing so.
What happened to me took a lot of time and may be unrealistic to recommend to others. The articles Scott tested seem equally effective; instead, I'd like to discuss what social approaches work best for taking people from friend to friend-who-takes-alignment-seriously (while optimizing against effort expended), and whether this is an efficient use of our time. |
d686ab8e-e25a-4ecb-ab51-e296939fb188 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Survey on intermediate goals in AI governance
It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see [Muehlhauser, 2020](https://www.openphilanthropy.org/research/our-ai-governance-grantmaking-so-far/), [2021](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance#Key_bottlenecks:~:text=Bottleneck%20%232%3A%20We,work%2C%20and%20also%3A)). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:
* respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),
* how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,[[1]](#fn23lhdssvd0x)
* what other intermediate goals they’d suggest,
* how high they believe the risk of existential catastrophe from AI is, and
* when they expect [transformative AI](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit) (TAI) to be developed.
We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:
* Broaden the range of options people can easily consider
* Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.
* Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.
**If you'd like to see a summary of the survey results, please request access to [this folder](https://drive.google.com/drive/u/1/folders/1XVEWyVsxRs1aMKR3Lj0OvDStWchV5Lpj).** We expect to approve all access requests,[[2]](#fnyuwfxvco8kh) and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).
Acknowledgments
===============

*This report is a project of [Rethink Priorities](https://rethinkpriorities.org/) – a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.*
*If you are interested in RP's work, please visit our [research database](https://www.rethinkpriorities.org/research) and subscribe to our [newsletter](https://www.rethinkpriorities.org/newsletter).*
1. Here's the definition of "intermediate goal" that we stated in the survey itself:

> By an intermediate goal, we mean any goal for reducing extreme AI risk that's more specific and directly actionable than a high-level goal like 'reduce existential AI accident risk' but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include 'develop better/cheaper malaria vaccines' and 'improve literacy rates in Sub-Saharan Africa'.

2. If two days after you request access you still haven't received access, this is probably just due to a mistake or delay on our end, so please request access again. |
9432642f-31b0-444f-a0c5-03516414bca8 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | GPTs are Predictors, not Imitators
(Related text posted to [Twitter](https://twitter.com/ESYudkowsky/status/1644760694771048449); this version is edited and has a more advanced final section.)
Imagine yourself in a box, trying to predict the next word - assign as much probability mass to the next token as possible - for all the text on the Internet.
Koan: Is this a task whose difficulty caps out at human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? What factors make that task easier, or harder? (If you don't have an answer, maybe take a minute to generate one, or alternatively, try to predict what I'll say next; if you do have an answer, take a moment to review it inside your mind, or maybe say the words out loud.)
---
Consider that somewhere on the internet is probably a list of triples: <product of 2 prime numbers, first prime, second prime>.
GPT obviously isn't going to predict that successfully for significantly-sized primes, but it illustrates the basic point:
There is no law saying that a predictor only needs to be as intelligent as the generator, in order to predict the generator's next token.
Indeed, in general, you've got to be more intelligent to predict particular X, than to generate realistic X. GPTs are being trained to a much harder task than GANs.
Same spirit: <Hash, plaintext> pairs, which you can't *predict* without cracking the hash algorithm, but which you could far more easily *generate typical instances of* if you were trying to pass a GAN's discriminator about it (assuming a discriminator that had learned to compute hash functions).
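(A minimal sketch of this asymmetry, in illustrative Python that is not from the original post: generating a realistic pair is one hash call, while predicting the plaintext from the digest alone amounts to inverting SHA-256.)

```python
import hashlib
import secrets

def generate_pair():
    # Easy direction: pick any plaintext and hash it. This already passes
    # as a "typical instance" of a <hash, plaintext> pair.
    plaintext = secrets.token_hex(8)
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return digest, plaintext

def predict_plaintext(digest):
    # Hard direction: a predictor sees only the digest and must assign
    # probability mass to the plaintext, i.e. invert the hash.
    raise NotImplementedError("no known efficient method")
```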
---
Consider that some of the text on the Internet isn't humans casually chatting. It's the results section of a science paper. It's news stories that say what happened on a particular day, where maybe no human would be smart enough to predict the next thing that happened in the news story in advance of it happening.
As Ilya Sutskever compactly put it, to learn to predict text is to learn to predict the causal processes of which the text is a shadow.
Lots of what's shadowed on the Internet has a *complicated* causal process generating it.
---
Consider that sometimes human beings, in the course of talking, make errors.
GPTs are not being trained to imitate human error. They're being trained to *predict* human error.
Consider the asymmetry between you, who makes an error, and an outside mind that knows you well enough and in enough detail to predict *which* errors you'll make.
If you then ask that predictor to become an actress and play the character of you, the actress will guess which errors you'll make, and play those errors. If the actress guesses correctly, it doesn't mean the actress is just as error-prone as you.
---
Consider that a lot of the text on the Internet isn't extemporaneous speech. It's text that people crafted over hours or days.
GPT-4 is being asked to predict it in 200 serial steps or however many layers it's got, just like if a human was extemporizing their immediate thoughts.
A human can write a rap battle in an hour. A GPT loss function would like the GPT to be intelligent enough to predict it on the fly.
---
Or maybe simplest:
Imagine somebody telling you to make up random words, and you say, "Morvelkainen bloombla ringa mongo."
Imagine a mind of a level - where, to be clear, I'm not saying GPTs are at this level yet -
Imagine a Mind of a level where it can hear you say 'morvelkainen bloombla ringa', and maybe also read your entire social media history, and then manage to assign 20% probability that your next utterance is 'mongo'.
The fact that this Mind could double as a really good actor playing your character, does not mean They are only exactly as smart as you.
When *you're* trying to be human-equivalent at writing text, you can just make up whatever output, and it's now a human output because you're human and you chose to output that.
GPT-4 is being asked to *predict* all that stuff you're making up. It doesn't get to make up whatever. It is being asked to model what you were thinking - the thoughts in your mind whose shadow is your text output - so as to assign as much probability as possible to your true next word.
---
Figuring out that your next utterance is 'mongo' is not mostly a question, I'd guess, of that mighty Mind being hammered into the shape of a thing that can simulate arbitrary humans, and then some less intelligent subprocess being responsible for adapting the shape of that Mind to be you exactly, after which it simulates you saying 'mongo'. Figuring out *exactly who's talking*, to that degree, is a *hard inference problem* which seems like noticeably harder mental work than the part where you just say 'mongo'.
When you predict how to chip a flint handaxe, you are not mostly a causal process that behaves like a flint handaxe, plus some computationally weaker thing that figures out which flint handaxe to be. It's not a problem that is best solved by "have the difficult ability to be like any particular flint handaxe, and then easily figure out which flint handaxe to be".
---
GPT-4 is still not as smart as a human in many ways, but it's naked mathematical truth that the task GPTs are being trained on is *harder* than being an actual human.
And since the task that GPTs are being trained on is different from and harder than the task of being a human, it would be surprising - even leaving aside all the ways that gradient descent differs from natural selection - if GPTs ended up thinking the way humans do, in order to solve that problem.
GPTs are not Imitators, nor Simulators, but Predictors. |
a58b8912-5f61-4b85-a495-fb5cdc43974f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Montreal First Meetup
Discussion article for the meetup : Montreal First Meetup
WHEN: 23 June 2012 01:00:00PM (-0400)
WHERE: 100 Rue Saint Antoine Ouest Montreal, QC H2Z 1X8
First meetup for Montreal LessWrongers! I recently visited an Ottawa meetup, and it was a great time. I'd like to meet like minded rationalists and discuss what to do next.
I'd prefer if this were the "self improvement" type of meetup, but I also plan on discussing games and the like in the future. Hope to see you there! I'll have a sign that says "LessWrong Meetup Group", and will be reading some sort of book. |
a1aebdd3-ac27-414c-8a40-2840a3fdd395 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Importance Weighted Policy Learning and Adaptation
1 Introduction
---------------
Current reinforcement learning (RL) algorithms have achieved impressive results across a broad range of games and continuous control platforms.
While effective, such algorithms all too often require millions of environment interactions to learn, demanding access to large amounts of compute as well as simulators or large numbers of demonstrations. This stands in stark contrast to the efficiency of biological learning systems [[1](#bib.bib1)], as well as the need for data-efficiency in real-world systems, e.g. in robotics, where environment interactions can be expensive and risky. In recent years, data-efficient RL has thus become a key area of research and stands as one of the bottlenecks for RL to be applied in the real world [[2](#bib.bib2)]. Research in the area is multi-faceted and encompasses multiple overlapping directions. Recent developments in off-policy and model-based RL have dramatically improved the stability and data-efficiency of RL algorithms which learn *tabula rasa* [e.g. [3](#bib.bib3), [4](#bib.bib4)]. A rapidly growing body of literature, under broad headings such as *transfer learning*, *meta learning*, or *hierarchical RL*, aims to speed up learning by reusing knowledge acquired in previous instances of similar learning problems. *Transfer learning* typically follows a two-step procedure: a system is first *pre-trained* on one or multiple training tasks, then a second step *adapts* the system to a downstream task. While transfer learning approaches allow significant flexibility in system design, the two-step process is often criticised for being sub-optimal.
In contrast, *meta-learning* incorporates adaptation into the learning process itself. In gradient-based approaches, systems are explicitly trained such that they perform well on a downstream task after a few gradient descent steps [[5](#bib.bib5)]. Alternatively, in encoder-based approaches a mapping is learned from data collected in a downstream task to a task representation [e.g. [6](#bib.bib6), [7](#bib.bib7), [8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11)].
Because meta-learning approaches optimize the adaptation process directly, they are expected to adapt faster to downstream tasks than transfer learning approaches. But performing this optimization can be algorithmically or computationally challenging, making it difficult to scale to complex and broader task distributions, especially since many approaches simultaneously solve not just the meta-learning but also a challenging multi-task learning problem.
Given the limitations of meta-learning, a number of recent works have raised the question whether transfer learning methods, potentially combined with data-efficient off-policy algorithms, are sufficient to achieve effective generalization as well as rapid adaptation to new tasks. For example, in the context of supervised meta learning, Raghu et al. [[12](#bib.bib12)] showed that learning good features and finetuning during adaptation led to results competitive with MAML. In reinforcement learning, Fakoor et al. [[13](#bib.bib13)] showed that direct application of TD3 [[14](#bib.bib14)] to maximize a multi-task objective along with a recurrent context and smart reuse of training data was sufficient to match performance of SOTA meta-learning methods on current benchmarks.
In this paper, we take a similar perspective and try to understand the extent to which fast adaptation can be achieved using a simple transfer framework, with the generality of gradient-based adaptation. Central to our approach is the behaviour prior recovered by multi-task KL-regularized objectives [[15](#bib.bib15), [16](#bib.bib16)]. We improve transfer performance by leveraging this prior in two important ways: first, as a regularizer which helps with exploration and restricts the space of solutions that need to be considered, and second as a proposal distribution for importance weighting, where the weights are learnt and given by the exponentiated Q-function. This avoids the need to learn an explicit parametric policy for the transfer task, instead the policy is obtained directly by tilting the prior with the learned, exponentiated action-value function.
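As a concrete sketch of acting with such a tilted policy via self-normalized importance sampling (the `prior.sample` and `q_fn` signatures and the temperature value are placeholder assumptions, not the paper's API):

```python
import torch

def act(state, prior, q_fn, eta=1.0, n_samples=64):
    # Draw candidate actions from the behaviour prior, weight each by
    # exp(Q/eta), and resample one action in proportion to those weights.
    actions = prior.sample(state, n_samples)        # (n_samples, action_dim)
    q_values = q_fn(state, actions)                 # (n_samples,)
    weights = torch.softmax(q_values / eta, dim=0)  # self-normalized weights
    idx = torch.multinomial(weights, num_samples=1).item()
    return actions[idx]
```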
To further speed up adaptation and avoid learning this Q-function de novo, we make use of a particular parameterization of the action-value functions obtained during multi-task training: the Q-values are parameterized to be linear in a shared underlying feature space. Intuitively, this shared feature representation captures the commonalities in terms of both reward and transition dynamics. In practice, we found that this value function representation, together with the behaviour prior, generalizes well to transfer tasks, drastically speeding up the adaptation process. We show that across continuous control environments, ranging from standard meta-RL benchmarks to more challenging environments with higher-dimensional action spaces and sparse rewards, our method can match or outperform recent meta-learning approaches, echoing recent observations in [[13](#bib.bib13)].
Our paper is structured as follows. Section [2](#S2 "2 Background ‣ Importance Weighted Policy Learning and Adaptation") provides the necessary background material and characterizes the multi-task reinforcement learning problem. Our method, based on importance weighting, is presented in Section [3](#S3 "3 Importance weighted policy learning ‣ Importance Weighted Policy Learning and Adaptation") while Section [4](#S4 "4 Importance weighted policy adaptation for transfer learning ‣ Importance Weighted Policy Learning and Adaptation") shows how our training algorithm can be adapted to improve transfer learning performance. Relevant work is discussed in Section [5](#S5 "5 Related Work ‣ Importance Weighted Policy Learning and Adaptation") with experimental results presented in Section [6](#S6 "6 Experiments ‣ Importance Weighted Policy Learning and Adaptation").
2 Background
-------------
We consider a multi-task reinforcement learning setup, where we denote a probability distribution over tasks as $\mathcal{P}(\mathcal{T})$. Each task $\mathcal{T}\sim\mathcal{P}$ is a Markov Decision Process (MDP), i.e. a tuple $\langle p_{\mathcal{T}}(s'|s,a),\, p_{\mathcal{T}}(s_0),\, r_{\mathcal{T}}(s,a),\, \mathcal{A},\, \mathcal{S}\rangle$ described by (respectively) the transition probability, initial state distribution, reward function, action space and state space, where $\mathcal{A}$ and $\mathcal{S}$ are identical across tasks. Furthermore, we assume that we are given finite i.i.d. samples of tasks split into training, $\mathcal{T}_{train}=(\mathcal{T}_1,\ldots,\mathcal{T}_n)$, and test, $\mathcal{T}_{test}=(\mathcal{T}_{n+1},\ldots,\mathcal{T}_{n+m})$, sets. For each task, denoted by $i$, we write $\pi_i$ for the task-specific policy, whereas $\pi_0$ is a shared *behaviour prior* which regularizes the $\pi_i$'s. On top of that, we denote by $p_i(s'|s,a)$, $p_i(s_0)$ and $r_i(s,a)$ the transition probability, initial state distribution and reward function of task $i$.
The starting point of this paper is DISTRAL [[15](#bib.bib15)], which aims to optimize the following multi-task objective on the training set:
$$\mathcal{J}(\pi_0,\pi_1,\ldots,\pi_n)=\sum_{i=1}^{n}\mathbb{E}_{\tau\sim\pi_i(\tau)}\left[\sum_{t\geq 1}\gamma^{t}r_i(a_t,s_t)-\gamma^{t}\alpha\log\frac{\pi_i(a_t\mid s_t)}{\pi_0(a_t\mid s_t)}\right], \tag{1}$$
where $\alpha$ is an inverse temperature parameter and $\tau\sim\pi_i(\tau)$ denotes sampling a trajectory from task $i$ using the policy $\pi_i$. The objective in (1) is optimized with respect to all $\pi_i$ and $\pi_0$ jointly.
In particular, for each task $i$ and for a fixed behaviour prior $\pi_0$, the optimization of the objective $\mathcal{J}$ is equivalent to solving a regularized RL problem with augmented reward $\tilde{r}_i(a_t,s_t;\pi_i)=r_i(a_t,s_t)-\alpha\log\frac{\pi_i(a_t\mid s_t)}{\pi_0(a_t\mid s_t)}$. As for learning the behaviour prior $\pi_0$, optimizing (1) with respect to $\pi_0$ amounts to minimizing the sum of KL divergences between the task-specific policies $\pi_i$ and the prior:
$$\pi^{*}_0(a_t|s_t)=\arg\min_{\pi_0}\sum_i \mathrm{KL}\big[\pi_i(a_t|s_t)\,\|\,\pi_0(a_t|s_t)\big]. \tag{2}$$
The behaviour prior's role is to model behaviour that is shared across the tasks. As shown in [[16](#bib.bib16)], a prior trained according to (1) under computational restrictions, such as access to only part of the observation (information asymmetry), can capture useful default behaviours (such as walking in locomotion-related tasks). The prior regularizes the task-specific solutions and can transfer useful behaviour between tasks, which can speed up learning.
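To make the augmented reward concrete, here is a minimal sketch in plain NumPy; the function name and signature are ours, not taken from any released implementation:

```python
import numpy as np

def augmented_reward(r, log_pi_i, log_pi_0, alpha=0.1):
    """Per-step KL-regularized reward from Eq. (1):
    r~_i(a_t, s_t) = r_i(a_t, s_t) - alpha * log(pi_i(a_t|s_t) / pi_0(a_t|s_t)).

    `log_pi_i` and `log_pi_0` are log-probabilities of the executed action
    under the task policy and the shared behaviour prior, respectively."""
    return r - alpha * (log_pi_i - log_pi_0)

# The penalty vanishes whenever the task policy agrees with the prior:
print(augmented_reward(1.0, np.log(0.5), np.log(0.5)))  # -> 1.0
```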
Let $\pi_i$ be the current policy for task $i$. For a fixed behaviour prior $\pi_0$, we define the associated soft Q-function as
$$Q^{\pi_i}_i(s,a)=r_i(s,a)+\gamma\,\mathbb{E}_{s'\sim p_i(s'|s,a)}\Big[\mathbb{E}_{a'\sim\pi_i(a'|s')}\big[Q^{\pi_i}_i(s',a')\big]-\alpha\,\mathrm{KL}\big[\pi_i(\cdot|s')\,\|\,\pi_0(\cdot|s')\big]\Big]. \tag{3}$$
This function was considered in [[17](#bib.bib17)]. Note that if $\pi_0$ is a uniform distribution, the definition in (3) is equivalent to the soft Q-function considered, for instance, in [[4](#bib.bib4), [18](#bib.bib18)]. Furthermore, the 1-step soft-greedy policy, defined as
$$q(a|s)=\frac{\pi_0(a|s)\exp\big(Q^{\pi_i}_i(s,a)/\alpha\big)}{\int \pi_0(a|s)\exp\big(Q^{\pi_i}_i(s,a)/\alpha\big)\,da}, \tag{4}$$
will have a higher soft Q-value on task $i$, i.e. $Q^{q}_i(s,a)\geq Q^{\pi_i}_i(s,a)$ for all $a,s$ (see [[4](#bib.bib4)]). Therefore, (4) gives us a principled way to perform policy improvement.
A similar policy improvement step is used, for instance, in MPO [[3](#bib.bib3)] and Soft Actor Critic (SAC) [[4](#bib.bib4)]. In both cases, the authors optimize a parametric representation to fit the distribution in ([4](#S2.E4 "4 ‣ 2 Background ‣ Importance Weighted Policy Learning and Adaptation")).
But instead of fitting a parametric policy, one can act directly according to the improved policy in (4). This can potentially be more efficient, since it avoids the additional step of learning a policy with function approximation. However, sampling exactly from the distribution in (4) is possible only in a few special cases. Below, we propose a method which uses importance sampling to draw samples from a distribution approximating the one in (4).
3 Importance weighted policy learning
--------------------------------------
For each task $i$ and for a fixed behaviour prior $\pi_0$, we proceed as follows. First, we sample a set of actions from the behaviour prior:
$$\{a^{(k)}\}_{k=1}^{K}\sim_{iid}\pi_0(a\mid s). \tag{5}$$
We denote by $\mathcal{A}_K=\{a^{(k)}\}_{k=1}^{K}$ the set of sampled actions and by $\Pi_K(s)$ the set of discrete action distributions defined on $\mathcal{A}_K$ for a state $s$. For simplicity of notation, we drop $s$ from $\Pi_K(s)$ and write $\Pi_K$. We denote by $Q_i$ the soft action-value function for some policy $\pi_i$ and reward function $r_i$. Then, we construct the following action distribution over $\mathcal{A}_K$ for each state $s$:
$$\hat{q}_k=\hat{q}(a=a^{(k)}\mid s)=\exp\left(\frac{Q_i(s,a^{(k)})-Z(s)}{\alpha}\right)\ \text{ for }k=1,\ldots,K,\qquad \hat{a}\sim\hat{q}(a|s)=\mathrm{Cat}(\hat{q}_1,\ldots,\hat{q}_K), \tag{6}$$
with normalizing constant $Z(s)$:
$$Z(s)=\alpha\log\sum_{j=1}^{K}\exp\!\left(\frac{Q_i(s,a^{(j)})}{\alpha}\right).$$
The resulting policy $\hat{q}\in\Pi_K$ is then a discrete approximation of the improved policy $q$ from (4). Note that procedure (6) corresponds to a soft-max distribution over the sampled actions with respect to the exponentiated soft Q-values.
In the limit of K→∞→𝐾K\to\inftyitalic\_K → ∞, the procedure [5](#S3.E5 "5 ‣ 3 Importance weighted policy learning ‣ Importance Weighted Policy Learning and Adaptation")-[6](#S3.E6 "6 ‣ 3 Importance weighted policy learning ‣ Importance Weighted Policy Learning and Adaptation") is guaranteed to sample from the policy q𝑞qitalic\_q from ([4](#S2.E4 "4 ‣ 2 Background ‣ Importance Weighted Policy Learning and Adaptation")). The above sampling scheme gives rise to the *Importance Weighted Policy Learning* (IWPL) algorithm, which combines non-parametric policy evaluation and improvements steps, described below.
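As an illustration, the following sketch implements the sampling scheme (5)-(6) in plain NumPy. Here `prior_sample` and `q_fn` are hypothetical stand-ins for the behaviour prior and the learned soft Q-function; neither name comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_improved_action(prior_sample, q_fn, s, K=32, alpha=0.1):
    """Draw one action from the discrete approximation (6) of the
    tilted policy (4).

    prior_sample(s, K): returns K actions drawn i.i.d. from pi_0(.|s), Eq. (5).
    q_fn(s, a):         returns the soft Q-value estimate Q_i(s, a).
    """
    actions = prior_sample(s, K)                      # Eq. (5)
    logits = np.array([q_fn(s, a) for a in actions]) / alpha
    weights = np.exp(logits - logits.max())           # numerically stable
    weights /= weights.sum()                          # soft-max of Eq. (6)
    k = rng.choice(K, p=weights)
    return actions[k], weights

# Toy check: with a Gaussian prior over scalar actions and Q(s, a) = -a**2,
# sampled actions concentrate around 0 as alpha shrinks.
action, w = sample_improved_action(
    prior_sample=lambda s, K: rng.normal(size=K),
    q_fn=lambda s, a: -a**2,
    s=None, alpha=0.05)
```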
#### Non-parametric policy evaluation
Let $Q:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ be a function and let $\pi$ be a policy defined on $\mathcal{A}$. We define the soft Bellman backup operator:
$$\mathcal{T}^{\pi}Q(s_t,a_t)=r(s_t,a_t)+\gamma\,\mathbb{E}_{s_{t+1}}\Big[\mathbb{E}_{a_{t+1}\sim\pi(\cdot|s_{t+1})}\big[Q(s_{t+1},a_{t+1})\big]-\alpha\,\mathrm{KL}\big[\pi(\cdot|s_{t+1})\,\|\,\pi_0(\cdot|s_{t+1})\big]\Big].$$
It is easy to see (as in [[4](#bib.bib4)]) that the Bellman iteration $Q^{l+1}=\mathcal{T}^{\pi}Q^{l}$, $l\to\infty$, converges to the soft value function (3) for $\pi$. Then, for the policy $q$ defined by Eq. (4) we consider an estimator of the Bellman operator induced by the importance weighting procedure (5)-(6) (with a newly sampled set of actions $\{a^{(k)}\}_{k=1}^{K}$):
$$\mathcal{T}^{q}_{K}Q(s_t,a_t)=r(s_t,a_t)+\gamma\,\mathbb{E}_{s_{t+1}}\left[\sum_{k=1}^{K}\hat{q}(a^{(k)}|s_{t+1})\left(Q(s_{t+1},a^{(k)})-\alpha\log\frac{\hat{q}(a^{(k)}|s_{t+1})}{\pi_0(a^{(k)}|s_{t+1})}\right)\right]. \tag{7}$$
In the limit, this procedure converges to the soft Q-function for $q$: $Q^{l+1}=\mathcal{T}^{q}_{K}Q^{l}$, $l\to\infty$, $K\to\infty$.
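As a concrete illustration, a sample-based evaluation of the target in (7) could be written as follows (a sketch under our own array conventions, not the paper's code):

```python
import numpy as np

def soft_bellman_target(r, q_next, log_prior_next, gamma=0.99, alpha=0.1):
    """Sample-based estimate of T^q_K Q(s_t, a_t) from Eq. (7).

    q_next[k]:         Q(s_{t+1}, a^(k)) for the K actions sampled from the prior.
    log_prior_next[k]: log pi_0(a^(k) | s_{t+1}) for the same actions.
    """
    logits = q_next / alpha
    m = logits.max()
    log_qhat = logits - (m + np.log(np.sum(np.exp(logits - m))))  # log of Eq. (6)
    qhat = np.exp(log_qhat)
    # Expected soft value under q-hat, including the KL penalty term of Eq. (7).
    v_soft = np.sum(qhat * (q_next - alpha * (log_qhat - log_prior_next)))
    return r + gamma * v_soft
```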
#### Non-parametric policy improvement
Given the current proposal $\pi_0$, some old policy $q^{old}$ and the corresponding soft Q-function $Q^{q_{old}}$, we can obtain a new policy $q^{new}$ via (4). In this case, similar to [[18](#bib.bib18)] (Appendix B.2), we have:
$$Q^{q_{new}}(s,a)\geq Q^{q_{old}}(s,a),\quad\forall s,a,$$
where $Q^{q_{new}}$ is the soft Q-function corresponding to $q^{new}$. To approximate $q^{new}$, we resample new actions $\{a^{(k)}\}_{k=1}^{K}$ via procedure (5) and apply procedure (6) to $Q^{q_{old}}$, obtaining a categorical distribution with the following probabilities:
$$\hat{q}^{new}_{k}=\hat{q}^{new}(a=a^{(k)}|s)\propto\exp\left(\frac{Q^{q_{old}}(s,a^{(k)})}{\alpha}\right).$$
This describes a policy improvement procedure based on importance sampling.
#### Behaviour prior (proposal) improvement
Given the current policy $q(a|s)$ of the form (4) and the corresponding approximation $\hat{q}$ from (6), a new behaviour prior $\hat{\pi}_0$ is obtained by maximizing the likelihood of samples drawn from $\hat{q}(a|s)$:
$$\hat{\pi}_0(\cdot|s)=\arg\max_{\pi_0}\sum_{k=1}^{K}\hat{q}_k\log\pi_0(a^{(k)}|s).$$
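In code, this update reduces to a weighted cross-entropy between $\hat{q}$ and the prior on the sampled actions; a minimal sketch, assuming `log_pi0_actions` holds the log-probabilities of the sampled actions under the current prior:

```python
import numpy as np

def prior_loss(log_pi0_actions, qhat_weights):
    """Negative weighted log-likelihood for the prior update above.
    Minimizing this in the prior's parameters maximizes
    sum_k qhat_k * log pi_0(a^(k) | s)."""
    return -np.sum(qhat_weights * log_pi0_actions)
```

In practice this loss is minimized by stochastic gradient descent on the prior's parameters, as in Eq. (11) below.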
#### Temperature calibration
In the current formulation, IWPL requires us to choose the inverse temperature parameter $\alpha$ in (1) and in (6). For varying reward scales, a fixed choice can result in unstable behaviour of procedure (6).
Some RL algorithms, such as REPS [[19](#bib.bib19)] and MPO [[3](#bib.bib3)], therefore replace similar (soft) regularization terms with hard limits on KL or entropy. Here, we consider a hard-constraint version of objective (1):
$$\sum_i \mathbb{E}_{\tau\sim\pi_i(\tau)}\left[\sum_{t\geq 1}\gamma^{t}r_i(a_t,s_t)\right]\quad\text{s.t.}\quad \sum_i \mathbb{E}_{s\sim\pi_i(s)}\,\mathrm{KL}\big[\pi_i(\cdot|s)\,\|\,\pi_0(\cdot|s)\big]<\epsilon. \tag{8}$$
The parameter $\epsilon$ defines the maximum average deviation of all the policies $\pi_i$ from the behaviour prior $\pi_0$. Given $\epsilon$, we can adjust the inverse temperature $\alpha$ to match this constraint. In many cases $\epsilon$ is easier to choose than the inverse temperature $\alpha$, since it does not, for instance, depend on the scale of the reward. The associated temperature parameter $\alpha$ can be optimized by considering the Lagrangian of objective (8), similar to REPS [[19](#bib.bib19)] and MPO [[3](#bib.bib3)].
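A sketch of the corresponding dual function, written in the spirit of the REPS/MPO temperature updates; this is an assumption about one reasonable implementation, not a description of the paper's exact procedure:

```python
import numpy as np

def temperature_dual(alpha, q_values, epsilon):
    """Dual g(alpha) = alpha * epsilon + alpha * log E_{a ~ pi_0}[exp(Q/alpha)],
    whose minimizer in alpha enforces the KL constraint of (8).

    q_values: Q(s, a^(k)) for K actions sampled from the behaviour prior."""
    logits = q_values / alpha
    m = logits.max()
    log_mean_exp = m + np.log(np.mean(np.exp(logits - m)))
    return alpha * epsilon + alpha * log_mean_exp

# alpha can then be adapted by (projected) gradient descent on the dual,
# e.g. alpha <- max(alpha - beta_alpha * dg/dalpha, alpha_min).
```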
#### Algorithm
The concrete algorithm combines the steps above with parametric function approximation of the necessary quantities. We consider $\pi_0(a|s,\phi)$, an approximation of the behaviour prior $\pi_0$, and $Q_{\theta_i}(s,a)$, an approximation of the soft value function for task $i$. We denote by $\phi'$ and $\theta_i'$ a second set of parameters corresponding to target networks (see Mnih et al. [[20](#bib.bib20)]), i.e. networks which are kept fixed for some number of iterations. We denote by $\hat{q}_i'$ the discrete policy from (6) associated with $Q_{\theta_i'}(s,a)$ and $\pi_0(a|s,\phi')$. Then, $Q_{\theta_i}(s,a)$ can be trained by minimizing the Bellman residual:
$$\mathcal{J}_Q(\theta)=\sum_i \mathbb{E}_{s,a\sim p_i(s,a)}\left[\frac{1}{2}\big(Q_{\theta_i}(s,a)-\hat{Q}_i(s,a)\big)^2\right], \tag{9}$$
where $\theta=(\theta_1,\ldots,\theta_n)$ and:
$$\hat{Q}_i(s,a)=r_i(s,a)+\gamma\sum_{k=1}^{K}\hat{q}_i'(a^{(k)}_i|s_{t+1})\left(Q_{\theta_i'}(s_{t+1},a^{(k)}_i)-\alpha\log\frac{\hat{q}_i'(a^{(k)}_i|s_{t+1})}{\pi_0(a^{(k)}_i|s_{t+1},\phi')}\right). \tag{10}$$
The behaviour prior $\pi_0(a|s,\phi)$ is learned by minimizing:
$$\mathcal{J}_{\pi_0}(\phi)=-\sum_i \mathbb{E}_{s\sim p_i(s)}\left[\sum_{k=1}^{K}\hat{q}'_i(a^{(k)}|s)\log\pi_0(a^{(k)}|s,\phi)\right]. \tag{11}$$
The full algorithm is presented in Algorithm 1.
Algorithm 1 Distributed Importance Weighted Policy Learning (IWPL)
Input:
Behaviour prior $\pi_0(a|s,\phi)$, initial parameters $\phi_0$
Q-function $Q_{\theta_i}$, initial parameters $\theta^0_i$ for each task $i$
Target networks with a separate set of parameters $\theta'$, $\phi'$
Target network update period $T$
Learning rates $\beta_Q$, $\beta_{\pi_0}$, $\beta_{\alpha}$
Replay buffer $\mathcal{B}$ containing data $\mathcal{B}_i$ for each task $i$
Training task indexes $\mathcal{I}=\{1,\ldots,n\}$
Define $\theta=(\theta_1,\ldots,\theta_n)$, $\theta'=(\theta'_1,\ldots,\theta'_n)$
Steps:
Actor policy:
while not converged do
Receive parameters from the learner
Sample uniformly a training task $i$ from $\mathcal{I}$
Sample a full-episode trajectory $\tau=(s_0,a_0,r_0,\ldots,s_T,a_T,r_T)\sim\hat{q}_i(\tau)$, using Eqs. (5)-(6)
$\mathcal{B}_i=\mathcal{B}_i\cup\tau$
end while
Learner policy:
while learning do
Sample uniformly (with replacement) a batch of tasks $\mathcal{I}_b$ from $\mathcal{I}$
for each task $i$ from $\mathcal{I}_b$ do
Sample a partial trajectory $\tau_{t:t+M}=(s_t,a_t,r_t,\ldots,r_{t+M})$ from the replay buffer $\mathcal{B}_i$ for task $i$
Sample $K$ actions $(a^t_1,\ldots,a^t_K)$ from $\pi_0(a|s_t,\phi')$, for each state $s_t$
Compute $Q_{\theta'_i}(s_t,a^t_k)$ for all $t,k$
Construct the categorical distribution $\hat{q}_i'$ as in (6) using $Q_{\theta'_i}(s_t,a^t_k)$
*% Perform gradient update on the parameters*
$\theta_i \leftarrow \theta_i - \beta_Q\nabla_{\theta_i}\mathcal{J}_Q(\theta)$
$\phi \leftarrow \phi - \beta_{\pi_0}\nabla_{\phi}\mathcal{J}_{\pi_0}(\phi)$
Every $T$ gradient steps, update target network parameters $\theta'\leftarrow\theta$, $\phi'\leftarrow\phi$
end for
end while
4 Importance weighted policy adaptation for transfer learning
--------------------------------------------------------------
Given pretrained action-value functions $\{Q_i^\star\}_{i=1}^{n}$ and a behaviour prior $\pi_0^\star$ obtained from the optimization of objective (8) on the training set, we show how to leverage them to quickly solve tasks from the test set. We call this process adaptation. Below, we describe how adaptation is facilitated by the two components of our method, behaviour transfer and value transfer.
#### Behaviour Transfer.
Given a pre-trained behaviour prior $\pi_0^\star$, we can learn the solution to a new task by learning a new value function and sampling from the implicit policy defined by (6). This can be achieved by executing the procedure in Section 3 without the prior improvement step. Because the policy is essentially initialized from the behaviour prior, the latter constrains the space of possible solutions and leads to sensible exploration. To obtain the new optimal policy, however, we need to learn a new optimal soft Q-function, which can require a considerable number of samples when Q is naively parameterized by a neural network. Below, we propose a way to leverage the Q-functions learned for tasks in the training set to speed up transfer in terms of the number of interactions with the environment.
#### Value Transfer.
To acquire knowledge about the value function that can be leveraged for transfer, we represent the task-specific value $Q_i$ as a linear function of task-specific parameters $w_i$ and shared features $\psi$:
$$Q_i(s,a;\Phi_i)=\psi(s,a;\theta)^{T}w_i, \tag{12}$$
where $\psi_{\theta}:\mathbb{R}^{S}\times\mathbb{R}^{A}\rightarrow\mathbb{R}^{d}$ is a function mapping states and actions to a feature vector (with parameters $\theta$ shared across tasks), $w_{i}\in\mathbb{R}^{d}$ is a task-specific vector used to identify task-specific Q-values, and $\Phi_{i}=\{\theta,w_{i}\}$. During the adaptation phase, we initialize $Q(s,a)$ as $\psi(s,a;\theta^{\star})^{\top}\tilde{w}$, with $\tilde{w}\sim\mathcal{N}(0,I_{d}/d)$, and adapt $\tilde{w}$ using TD(0) learning. Furthermore, for some more challenging tasks, we replace (at training time) the task-specific vector $w_{i}$ by a non-linear embedding of a structured goal descriptor $g_{i}$ which is available during training but not during adaptation, i.e. $Q_{i}(s,a,g_{i};\Phi_{i})=\psi(s,a;\theta)^{\top}f(g_{i};\theta)$, where $f(g_{i};\theta)$ is a learned embedding of goal $g_{i}$ with parameters $\theta$ shared across training tasks. At test time, we initialize the critic as before: $\psi(s,a;\theta^{\star})^{\top}\tilde{w}$. Since some RL problems can be challenging multi-task learning problems in their own right, this "asymmetry" between learning and testing allows us to simplify the solution of the multi-task problem without affecting the applicability of the learned representation, in contrast to most meta-learning approaches, which require the training and adaptation phases to be matched. Our proposed method then exploits both the behaviour prior and the shared value features to derive an efficient off-policy transfer learning algorithm. Note that this approach does not require a finite and/or discrete set of tasks and also works with continuously parameterised task distributions, since we essentially allow the task-specific Q-function to depend on the task conditioning.
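To make this parameterization concrete, here is a minimal sketch in NumPy (illustrative only, not the paper's implementation; the dimensions and the random feature map standing in for the learned network $\psi$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, s_dim, a_dim = 64, 10, 4  # illustrative dimensions (assumptions)
# Stand-in for the learned shared feature network psi(s, a; theta):
theta = rng.normal(size=(d, s_dim + a_dim)) / np.sqrt(s_dim + a_dim)

def psi(s, a):
    # Shared state-action features, fixed across tasks after training.
    return np.tanh(theta @ np.concatenate([s, a]))

def make_task_critic(w):
    # Task-specific Q-function of eq. (12): Q_i(s, a) = psi(s, a; theta)^T w_i
    return lambda s, a: psi(s, a) @ w

# At adaptation time, only the task vector is re-initialized and learned:
w_tilde = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)  # w ~ N(0, I_d / d)
Q_new_task = make_task_critic(w_tilde)
```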
#### Algorithm
Given a new task $j$, we learn the associated $w$ to construct a Q-function of the form (12). Let $\pi_{0}(a|s,\phi)$ be a pretrained behaviour prior and $\psi(s,a;\theta)$ be pretrained features for the Q-functions on the training set. We use similar notation as in Section 3, denoting by $w'$ the target network parameters and by $\hat{q},\hat{q}'$ the associated categorical distributions of form (6). Let $Q_{w}(s,a;\theta)$ be the function approximator of the form (12) for the new task $j$. Then, adaptation on task $j$ reduces to learning the Q-function by minimizing the TD(0) Bellman residual:
$$\mathcal{J}(w;\theta)=\mathbb{E}_{s,a\sim p(s,a)}\left[\frac{1}{2}\bigl(Q_{w}(s,a;\theta)-\hat{Q}_{w'}(s,a;\theta')\bigr)^{2}\right], \tag{13}$$
where
$$\hat{Q}_{w'}(s,a;\theta')=r_{j}(s,a)+\gamma\sum_{k=1}^{K}\hat{q}_{j}'(a^{(k)}_{j}|s_{t+1})\left(Q_{w'}(s_{t+1},a^{(k)}_{j};\theta')-\alpha\log\frac{\hat{q}_{j}'(a^{(k)}_{j}|s_{t+1})}{\pi_{0}(a^{(k)}_{j}|s_{t+1},\phi)}\right). \tag{14}$$
Note that in addition to learning a new $w$, it is also possible to finetune the pretrained features $\psi(s,a;\theta)$. This may be required if the test tasks are too different from the training tasks; this scenario is discussed in the *Generalization* part of Section 6. We call the resulting algorithm *Importance Weighted Policy Adaptation* (IWPA), described in Algorithm 2.
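A sketch of how eqs. (13)–(14) translate into code is given below (illustrative NumPy; the quantities produced by the categorical distribution $\hat{q}'$, the prior, and the target network are assumed to be given as arrays, and batching/autodiff are omitted):

```python
import numpy as np

def bellman_target(r, q_probs, prior_probs, q_vals, gamma=0.99, alpha=0.1):
    """Regularized TD(0) target of eq. (14) (sketch).

    q_probs:     hat{q}'_j(a^(k) | s_{t+1}) for the K candidate actions
    prior_probs: pi_0(a^(k) | s_{t+1}, phi) for the same actions
    q_vals:      Q_{w'}(s_{t+1}, a^(k); theta') for the same actions
    """
    kl_term = alpha * np.log(q_probs / prior_probs)
    return r + gamma * np.sum(q_probs * (q_vals - kl_term))

def td0_residual(q_sa, target):
    # Eq. (13): squared Bellman residual at one sampled (s, a) pair;
    # in practice this is averaged over a batch and minimized w.r.t. w.
    return 0.5 * (q_sa - target) ** 2
```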
Algorithm 2 Importance Weighted Policy Adaptation (IWPA)
Input:
Behaviour prior $\pi_{0}(a|s;\phi)$ pretrained on the training set.
Shared features $\psi(s,a;\theta)$ representing the optimal training soft Q-functions (12).
$\mathcal{I}=\{n+1,\ldots,n+m\}$: indices of the test-set tasks.
$N$: number of adaptation episodes.
$M$: number of gradient updates.
Target network parameters $w',\theta'$.
Target network update period $T$.
$\beta_{w},\beta_{\theta}$: learning rates.
for each test task $j$ in $\mathcal{I}$ do
  Initialize task-specific critic parameters $w\sim\mathcal{N}(0,I_{d}/d)$
  Define the action-value function $Q_{w}(s,a;\theta)=\psi(s,a;\theta)^{T}w$
  Denote by $\hat{q}_{w}$ the categorical distribution of form (6) associated with $\pi_{0}$ and $Q_{w}$
  for $n=1:N$ do
    Sample a full-episode trajectory $\tau=(s_{0},a_{0},r_{0},\ldots,s_{T},a_{T},r_{T})\sim\hat{q}_{w}(\tau)$, using eqs. (5), (6)
    for $m=1:M$ do
      *% Perform a gradient update on the adaptation parameters*
      $w\leftarrow w+\beta_{w}\nabla_{w}\mathcal{J}(w,\theta)$
      (Optionally) finetune the features: $\theta\leftarrow\theta+\beta_{\theta}\nabla_{\theta}\mathcal{J}(w,\theta)$
      Every $T$ gradient steps, update the target network parameters: $w'\leftarrow w$, $\theta'\leftarrow\theta$
    end for
  end for
end for
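For concreteness, a compact Python sketch of this adaptation loop is given below. It is not the authors' implementation: the environment interface (`env.reset()`, `env.step(a)` returning `(s, r, done)`), the prior sampler, and the feature map `psi` are assumptions; eq. (6) is approximated as a softmax over K prior samples; and the KL term of eq. (14) is dropped from the target for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(s, prior_sample, Q, K=16, alpha=0.1):
    # Non-parametric policy (sketch of eq. (6)): draw K candidate actions
    # from the behaviour prior and sample one from the softmax of their
    # Q-values at temperature alpha.
    candidates = [prior_sample(s) for _ in range(K)]
    logits = np.array([Q(s, a) for a in candidates]) / alpha
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return candidates[rng.choice(K, p=probs)]

def adapt(env, prior_sample, psi, d, N=20, M=50, T=100, beta_w=1e-2, gamma=0.99):
    # Algorithm 2 (sketch): only the task vector w of Q_w = psi^T w is adapted.
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)  # w ~ N(0, I_d / d)
    w_tgt, step = w.copy(), 0
    for _ in range(N):
        # Collect one episode with the current non-parametric policy.
        traj, s, done = [], env.reset(), False
        while not done:
            a = sample_action(s, prior_sample, lambda s_, a_: psi(s_, a_) @ w)
            s_next, r, done = env.step(a)
            traj.append((s, a, r, s_next))
            s = s_next
        for _ in range(M):
            s0, a0, r0, s1 = traj[rng.integers(len(traj))]
            a1 = sample_action(s1, prior_sample, lambda s_, a_: psi(s_, a_) @ w_tgt)
            target = r0 + gamma * (psi(s1, a1) @ w_tgt)  # KL term of eq. (14) omitted
            # TD(0) step; the gradient is exact since Q_w is linear in w.
            w = w + beta_w * (target - psi(s0, a0) @ w) * psi(s0, a0)
            step += 1
            if step % T == 0:
                w_tgt = w.copy()  # periodic target-network update
    return w
```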
5 Related Work
---------------
The proposed algorithm has some similarities to recent off-policy RL methods. Both Maximum a Posteriori Policy Optimization (MPO) [3] and Soft Actor Critic (SAC) [4] learn a parametric policy and fit it to the non-parametric improved policy as in eq. (4) (in MPO, $\pi_{0}$ is replaced by the parametric policy, whereas in SAC, $\pi_{0}$ is replaced by the uniform distribution). Furthermore, as in our method, SAC uses the induced soft Q-function. Both methods collect experience using the parametric policy. In contrast, our method directly uses the improved non-parametric policy to collect experience as well as to construct the bootstrapped Q-function. Moreover, our method is explicitly built in the context of multi-task learning and makes use of a behaviour prior with information asymmetry [16], which encourages structured exploration.
Recent work on Q-learning has made many attempts to scale it up to high-dimensional and continuous action domains.
In soft Q-learning [21], in the context of maximum entropy RL, the authors learn a parametric mapping from normally-distributed samples to samples drawn from a policy distribution, which converges to the optimal non-parametric policy induced by a soft Q-function (in a similar way as in eq. (4) with a uniform $\pi_{0}$). In Amortized Q-learning (AQL) [22], the authors propose to learn a proposal distribution for actions and then select the one maximizing the Q-function. Unlike in our work, the authors do not regularize the induced non-parametric distribution to stay close to the proposal. Note that in the limit of the temperature $\tau\rightarrow 0$, our softmax operator over importance weights becomes a max, making our approach a strict generalization of AQL. Finally, Hunt et al. [23] propose to learn a proposal distribution which is good for transfer to a new task, in the context of successor features [24], while maximizing the entropy.
Transfer of knowledge from past tasks to future ones is a well-established problem in machine learning [25, 26] and has been addressed from several different angles. Meta-learning approaches try to learn the adaptation mechanism by explicitly optimizing either for minimal regret during adaptation or for performance after adaptation. Gradient-based approaches, often derived from MAML, aim at learning initial network weights such that a few gradient steps from this initialization are sufficient to adapt to new tasks [5, 27, 28, 29]. Memory-based meta-learning approaches model the adaptation procedure using recurrent networks [6, 7, 30, 11, 8]. One problem with meta-learning approaches is the explicit optimization for adaptation on a new task, which may be computationally expensive. In addition, most meta-learning methods require the training and adaptation processes to be matched. This can restrict the class of problems that such approaches can solve, since some hard meta-RL problems also constitute hard multi-task problems. Our method allows additional information to be provided at training time to facilitate learning without affecting the adaptation phase.
Other transfer learning methods (ours included) do not explicitly optimize the algorithm for adaptation. A common approach is to use a neural network which shares some parameters across training tasks and fine-tunes the rest. Recent work [12] suggests that this yields performance comparable to MAML-style training. Transfer learning with Successor Features [24] exploits a similar decomposition of the action-value function, but relies on Generalized Policy Improvement for efficient transfer, instead of our more general gradient-based adaptation. Another approach for reusing past experience is hierarchical RL, which tries to compress the experience into a shared low-level controller or a set of options which are reused in later tasks [31, 32, 33, 34]. Finally, an approach we build upon is to distill past behaviour into a prior policy [15, 16] from which we can bootstrap during adaptation.
In Fakoor et al. [13], the authors propose a transfer learning approach based on fine-tuning a critic acquired via a multi-task objective. To speed up adaptation, their method makes heavy use of off-policy data acquired during meta-training, and of an adaptive trust region which regularizes the critic parameters based on task similarity.
6 Experiments
--------------
In this section, we empirically study the performance of our method in the following scenarios. First, we assess how well the method performs in the multi-task scenario. Then, we demonstrate the method's ability to achieve competitive adaptation performance on hold-out tasks compared to meta-reinforcement-learning baselines on a few standard benchmarks. On top of that, we show that the method scales to more challenging sparse-reward scenarios and achieves superior adaptation performance on hold-out tasks compared to the considered baselines. Finally, we consider the case when the number of training tasks is very small; here the behaviour prior and value-function representation may overfit to the training tasks. We demonstrate that our method still generalizes to hold-out tasks when additional fine-tuning is allowed.
#### Task setup.
We consider two standard meta-reinforcement-learning problems: 2D point-mass navigation and the half-cheetah velocity task, described in Rakelly et al. [8]. In addition to these simple tasks, we design a set of sparse-reward tasks which are harder as control and exploration problems. *Go To Ring*: a quadruped body needs to navigate to a particular (unknown) position on a ring. *Move Box*: a sphere-like robot must move a box to a specific position. *Reach*: a simulated robotic arm is required to reach a particular (unknown) goal position. *GTT*: a humanoid body needs to navigate to a particular (unknown) position on a rectangle. For every task, we consider a set of training tasks $\mathcal{T}_{1},\ldots,\mathcal{T}_{n}$ and held-out tasks $\mathcal{T}_{n+1},\ldots,\mathcal{T}_{n+m}$. For every task, the policy receives proprioceptive information, the global position of the body, and an unstructured task identifier (a number from $1$ to $n$). For the *Move Box* task, we additionally provide the global position of the target as a task observation on the training distribution to facilitate learning; we do not provide this information on test tasks. For more environment details, please refer to Appendix B.
#### Multi-task training.
We first demonstrate our method's ability to solve multi-task learning problems.
As a baseline, we consider SVG(0) [35], an actor-critic algorithm with an additional Retrace off-policy correction [36] for learning the Q-function, as described in [37]. We refer to this algorithm as *RS(0)*.
We further consider a continuous-action version of DISTRAL [15] built on top of *RS(0)*, where we learn a behaviour prior alongside the policy and value function, similar to [16]. This prior exhibits information asymmetry of observations with respect to the policy and the value function (it receives less information), which makes it learn useful default behaviour, speeding up learning.
In Appendix B, we specify the information provided to the behaviour prior and the policy.
Furthermore, we consider the MPO [3] algorithm as well as its version with a behaviour prior, which we call MPO + DISTRAL. The latter simply uses KL-regularization towards the learned prior (alongside the policy learning) in the M-step as a soft constraint, as well as the soft Q-function. In our method, IWPL, we also use a behaviour prior with information asymmetry with respect to the Q-function, which receives task-specific information.
For each of the models, we optimize hyperparameters and report the best found configuration with 3 random seeds.
The experiments are run in a distributed setup with 64 actors that generate experience and a single learner, somewhat similar to Espeholt et al. [38]. We use a replay buffer of size $10^{6}$ and control the number of times an individual experience tuple is considered by the learner. This ensures soft-synchronicity between actor and learner and a fair comparison between models that differ with respect to the compute cost of inference and learning.
For more details, please refer to Appendix A.
The results are given in Figure 1. We can see that our method achieves competitive performance compared to the baselines. Note that it has larger gains on tasks where the control problem is harder. This effect of the behaviour prior was observed in [16] and is presumably amplified for IWPL, where there is no intermediate parametric policy in the loop: useful actions are sampled immediately from the prior, which is learned faster than the agent policy due to its restricted set of observations, as discussed in [16]. Interestingly, we do not observe a difference between MPO and MPO+DISTRAL, presumably because the effect of the behaviour prior is reduced by the hard KL constraint to the previous policy.

Figure 1: Multi-task training results.
#### Adaptation performance.
Next, we investigate the performance of our method in adapting to hold-out tasks. The main criterion is data efficiency in terms of the number of episodes on a new task. As discussed in Section 4, we want to leverage the behaviour prior as well as the learned shared representation for the action-value function. We therefore consider two variants of our method IWPA, described in Section 4. We refer to "Shared Q + IW" as the version which leverages both the behaviour prior and the action-value function, and to "IW" as the version which leverages only the behaviour prior and learns the action-value function from scratch, without making assumption (12). As a natural baseline, we consider the RS(0) + DISTRAL agent as in multi-task learning, where the Q-function is learned with TD(0) as in IWPA. Starting from this, we call "Shared Q" the agent which leverages both the behaviour prior and the action-value function, and "DISTRAL" the agent which leverages only the behaviour prior.
We pre-train the "RS(0) + DISTRAL" agent with the Q-function parameterisation (12) on the training set, choose the best performing hyperparameters, and freeze the pretrained $\pi_{0}$ and action-value features $\psi$ for each task. We then apply all four proposed adaptation methods to these behaviour priors and action-value features. The reason to use one algorithm for pretraining is to isolate the adaptation performance from the multi-task performance studied above. Empirically, we found that models trained with IWPL lead to similar results, but we report the results pretrained using "RS(0) + DISTRAL" because this agent was already considered in [16].
In addition, we consider two meta-reinforcement-learning baselines: a re-implementation of RL2 [6, 7] as well as a re-implementation of PEARL [8]. For both implementations we build upon RS(0) as the base algorithm. In our implementation of PEARL (denoted PEARL*), we use a simple LSTM to encode the context. As reported in Rakelly et al. [8], this variant is slower to learn but eventually achieves performance similar to PEARL; despite this change, our results are comparable to those presented in Section 6.3 of [8]. On top of that, we also consider a "From Scratch" baseline which learns to solve the test tasks directly, corresponding to the RS(0) algorithm without pre-training or a behaviour prior. For more details, see Appendix A.
We start by presenting test-time adaptation performance on two standard continuous control tasks used in [8]: *half-cheetah velocity* and *Sparse 2D navigation*. Note that for the *Sparse 2D navigation* task, PEARL receives a dense reward during training, whereas our agent is trained with sparse rewards; this additionally demonstrates that our method can be employed in more difficult scenarios. The results are presented in Figure 2. While RL2 and PEARL converge faster in absolute terms, IWPA remains competitive and converges quickly despite not optimizing the adaptation process directly.
Going further, we present results on complex sparse-reward tasks.
Results on these tasks are depicted in Figure 3. Our proposed method achieves gains in adaptation time with respect to the DISTRAL baseline. Furthermore, we note that using shared features for the value function provides a significant gain. It is important to note that using shared features without the behaviour prior fails to learn quickly, because the behaviour prior plays a crucial role in facilitating exploration (see Appendix D). On top of that, we observe that IWPA, as in the multi-task results, provides bigger gains on harder control problems, like the GTT humanoid. Note that this is a very challenging task: the humanoid needs to locate a target and only receives a reward when successful. Furthermore, the humanoid may fail at any moment, terminating the episode. This makes it extremely hard to learn without any prior knowledge. We note that both RL2 and PEARL failed to achieve optimal performance on these tasks. This could be for a variety of reasons, including the sparsity of the rewards and the complexity of learning a single policy that has to operate over long time horizons.

Figure 2: Adaptation performance on standard benchmarks after meta-training. Our method (not using meta-learning) achieves comparable results to other meta-learning baselines.

Figure 3: Adaptation performance of different methods on sparse reward tasks after meta-training.
#### Generalization
An efficient transfer learning method should be robust in the low-data regime. Here we show that when only a few training tasks are available, the method is still able to generalize if we allow additional finetuning of the shared Q-function features after 20 episodes of interaction on a new task. For each of the sparse-reward tasks, we consider a version with few training tasks. We trained IWPL on these and compare it to IWPL trained in the large-task regime. The results are given in Figure 4. As we see, the method trained in the low-task regime fails to generalize on most of the tasks, whereas the additional finetuning recovers the final performance while still learning faster than from scratch.

Figure 4: Generalization results. We report the performance of learning from scratch as well as the Shared Q + IW architecture trained in the high-task regime. On top of that, we show the performance of the architectures trained in the low-task regime with and without finetuning of the value function features. The point denotes the final performance of the early-stopped Shared Q + IW experiment.
7 Discussion
-------------
We have presented a novel method for multi-task learning as well as for adaptation to new hold-out tasks which does not explicitly meta-learn the adaptation process and yet can match the adaptation speed of common meta-reinforcement-learning algorithms. Instead of explicit meta-learning, we rely on feature reuse and bootstrapping from a behaviour prior. The behaviour prior can be seen as an informed proposal for a task distribution that is then specialized to a particular task by a learned action-value function. This scheme can be easily integrated into different actor-critic algorithms for data-efficient off-policy learning at training and test time. Furthermore, it does not strictly require executing test-time adaptation as an inner loop during training, thus adding extra flexibility. |
49aa9937-1c67-4fcc-b28a-5462d999ef81 | trentmkelly/LessWrong-43k | LessWrong | exists(max(performance(pay)))
US Congresspeople don't make a lot of money in salary - most make $174,000/yr. They could easily make several times that much as consultants. They do, however, have insider information giving them very large returns on the stock market. For that, or other reasons, many of our representatives care more about keeping their jobs than about not wrecking the economy.
Most discussion of incentivizing assumes that higher pay leads to higher performance. The logic is that higher pay leads to wanting more to keep the job, which leads to higher performance. But the second link in this chain is weak. Sometimes higher motivation to keep the job leads to lower performance. CEOs are motivated to hide losses with accounting tricks, military officers are motivated to deny and cover up abuse by their subordinates, teachers are motivated to inflate their students' test scores. |
8da451cd-2685-44ba-885f-fffb42d40257 | trentmkelly/LessWrong-43k | LessWrong | ... And Everyone Loses Their Minds
Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. One of his most interesting scenes in the film has him point out how people estimate horrible things differently depending on whether they're part of what's "normal", what's "expected", rather than on how inherently horrifying they are, or how many people are involved.
Soon people extrapolated this observation to other such apparent inconsistencies in human judgment, where a behaviour that once was acceptable, with a simple tweak or change in context, becomes the subject of a much more serious reaction.
I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.
Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives), and the number of lives said expenses save, compared with the number of lives that could be saved by spending that same amount on improving road safety, increasing public healthcare expenditure where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than by terrorists), or legalizing drugs.
What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them? |
05c9140f-bbe4-4cc3-9efa-2c8426553083 | trentmkelly/LessWrong-43k | LessWrong | Just a reminder: Scientists are, technically, people.
From Michael Eisen's blog:
Yuval Levin, former Executive Director of the President's Council on Bioethics, has an op-ed in Tuesday's Washington Post arguing that Obama's new stem cell policy is dangerous. Levin does not argue that stem cell research is bad. Rather he is upset that Obama did not dictate which uses of stem cells are appropriate, but rather asked the National Institutes of Health to draft a policy on which uses of stem cells are appropriate:
> It [Obama's policy] argues not for an ethical judgment regarding the moral worth of human embryos but, rather, that no ethical judgment is called for: that it is all a matter of science.
>
> This is a dangerous misunderstanding. Science policy questions do often require a grasp of complex details, which scientists can help to clarify. But at their core they are questions of priorities and worldviews, just like other difficult policy judgments.
Lost in this superficially unobjectionable - if banal - assertion of the complexity of ethical issues involving science is Levin's (and many other bioethicists') credo: that the moral complexity of scientific issues means that scientists should not make decisions about them. |
3e7513c6-b0c5-4c1f-8b73-1bb5dbc34267 | trentmkelly/LessWrong-43k | LessWrong | How to plan for a radically uncertain future?
While playing around with Stable Diffusion last week I had an epiphany. I realized that while I believe with high probability that the world will be radically different in 15 years, all my investments, living arrangements, and other long term plans implied a belief that the world would be roughly similar to now. This shook me to my core.
More concretely, I have investments in suburban American property, a 401k plan that I maximize and invest in a broadly diversified low cost fund, many children whom I educate in roughly conventional subjects (from college age down to toddler age), and a job that I intend to keep for the long term. But I believe there is a less than 1% probability that my 401k will mean anything by the time I can withdraw it. I don't believe that my property (or any suburban American property) will be valuable in 15 years under most scenarios I consider likely. I don't think my children's education will serve them particularly well during their adulthood.
What are my beliefs about the future? I believe that AI and biology at least (two subjects I know well and follow closely) are accelerating quickly, with logarithmic increases of capability and similar log decreases of cost. I believe that it is early days for both fields, and these log curves will be extended further for years to come.
Given those beliefs, the thing I expect most is radical change, in a direction that I can't predict well. I'm not biased towards a doom scenario (AI takeover, engineered unstoppable plague), nor towards a utopian scenario (beneficent AGI, biological immortality). But I am biased strongly away from the continuation of the status quo, away from any regime where I can sell my house, live off my 401k, watch my children take jobs that exist today and have their own families.
I suspect others here hold similar beliefs about the future. How are you preparing now? How do you invest, how do you rear your young, what actions do you take that let you sleep well, knowing t |
f0dd87f2-d95b-43f5-ae64-2edef5764812 | trentmkelly/LessWrong-43k | LessWrong | Greg Egan and the Incomprehensible
In this post I question one disagreement between Eliezer Yudkowsky and science fiction author Greg Egan.
In his post Complex Novelty, Eliezer Yudkowsky wrote in 2008:
> Note that Greg Egan seems to explicitly believe the reverse - that humans can understand anything understandable - which explains a lot.
An interview with Greg Egan in 2009 confirmed this to be true:
> … I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':
> The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails). After all, we might similarly expe |
074a9200-ffc3-4a5d-a15d-ebefd5f2c9bc | trentmkelly/LessWrong-43k | LessWrong | Offering AI safety support calls for ML professionals
TLDR: If you're a professor, PhD student, or industry researcher or engineer interested in starting AI safety research, please request a call with Arkose! We are currently focused on and most helpful to mid-career people with machine learning expertise, who are relatively new to the advanced AI safety space.
The following is largely a linkpost for arkose.org, with a FAQ in the comments.
----------------------------------------
Arkose is an early-stage, fieldbuilding nonprofit with the mission of improving the safety of advanced AI systems. We focus on supporting researchers, engineers, and other professionals interested in contributing to AI safety, primarily by hosting personalized calls and public-facing resources.
In our 30-45 minute calls, we offer personalized advice to fit your level of interest and situation, whether you're an industry researcher or engineer, professor, or PhD student.
After discussing your specific needs and questions, we can:
* Help you explore funding and job opportunities. Arkose keeps a list of grant opportunities, compute options, fellowships, and jobs: Resources is our public list.
* Connect you to potential mentors and collaborators in industry or academia. Whether you're seeking senior researchers, PhD students, or industry or sabbatical connections, Arkose can introduce you to our experts.
* Recommend a list of safety-related papers in your research area, and a shortlist of advanced AI safety materials.
* Provide as-needed support for 6 months. We can help you clarify emerging options, follow up with your plans, and advise you of new resources.
If you're an ML or AI professional who's interested in starting research in AI safety:
Request a Call
And if you'd like to help review Arkose's ML materials, please message or email me at vael@arkose.org! |
60969629-6cb7-4107-96f8-b938cbf27cbd | trentmkelly/LessWrong-43k | LessWrong | Deciding What Project/Org to Start: A Guide to Prioritization Research
If you’re deciding what (research) project, organization or intervention to go for, analyzing your options through prioritization research can be invaluable. I used it to settle on founding Catalyze, an AI Safety field-building non-profit. In this post, I will share my blueprint and learnings from this process. Please note that you don’t need to be a researcher to benefit from conducting prioritization research.
Why prioritization research
No need to wait for inspiration to strike
If you're at a crossroads trying to come up with a great idea, I have good news: you don't have to invent something new. There's a world of ideas waiting to be discovered and executed. So don't wait for that idea to come to you; instead, you can go to the ideas.
You can probably find a better intervention than the first one(s) you stumbled upon
Suppose you have a handful of ideas for a project or an organization. That’s a fantastic start! But consider the possibility that by conducting a broad and deliberate search, you could stumble upon something even more impactful. More likely than not, if you consider a wide range of ideas, your initial favorite won’t come out on top. Don’t go for the first thing that came to mind, but rather take some time to scope the field and possibly find an even better option.
The difference in impact between a ‘good’ and ‘great’ intervention is huge - especially within a high-impact cause area
One of the original points EA focused on was the huge difference in impact between different interventions within global health.
Illustrative Graph from this 80.000 Hours article
To me, it seems likely that the big difference in impact between interventions is similar in different cause areas.
Therefore, apart from wisely choosing what cause to work on, I think it is still very crucial to wisely pick what you work on within that cause area. One of the ‘best’ interventions in this area is likely many times more impactful than the median.
If you work withi |
8a8d8b76-b0bf-4923-9d99-3b96cb901c7c | trentmkelly/LessWrong-43k | LessWrong | Open thread, August 28 - September 3, 2017
IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting |
492f4332-0b04-4aeb-9a60-c012569acb31 | trentmkelly/LessWrong-43k | LessWrong | Role-reversal education for math and science students (pilot project)
tldr; Role-reversal education: by explaining to others, we learn, and improve in confidence.
http://solutionfocusedchange.blogspot.com/2011/12/betacoach-promising-project-to-improve.html
"The project Bètacoach seems to me to be a well thought-out and promising way to give improve math and science education in secondary education. Here is a pointwise explanation of the project:
* What is Bètacoach? In September 2010 a pilot started in which third-graders with low self-confidence in math and science were asked to become coaches of four to five first-graders during math class. Once a week the bètacoaches joined the lesson which was prepared by the teacher, to help their group of first-graders.
* Role-reversal education: an important principle which is used in the project is that of role-reversal education. Research has shown that by explaining things, people construct the knowledge again, which helps anchor this knowledge better and makes it easier to connect it to other information.
* Choosing bètacoaches: the following steps help to choose the bètacoaches: 1) choose students for whom there is room for improvement with respect to their grades and/or their self-confidence, 2) discuss their suitability for the bètacoach role with your colleagues, 3) ask the students for the role and make clear that the role is an important one, 4) express that you expect that the student will be able to fulfill the role well, 5) be demanding: make it clear that the role requires commitment and effort.
* Preliminary findings: both the first-graders and the third-graders turn out to be enthusiastic about the project. First-graders said they could concentrate better, felt more comfortable asking questions, and understood the material better. The bètacoaches themselves, the third-graders, said they learned from the experience and that they had become more active. "
http://moniquepijls.com/2011/08/16/betacoach-leren-in-een-nieuwe-rol/ (Dutch) |
e862cf83-3e59-440f-8dfa-7ba62e3f7369 | trentmkelly/LessWrong-43k | LessWrong | Zen and Rationality: Map and Territory
This is post 3/? about the intersection of my decades of LW-style rationality practice and my several years of Zen practice.
In today's installment, I look at form and emptiness from a rationalist perspective.
Rationalists have a few key ideas or memes (in the memetic sense), and one of them is "the map is not the territory". Lots has been written about this idea on LessWrong, but it's an idea with a history that stretches back for thousands of years, so it's perhaps not surprising that it's also one of the ideas at the core of Zen.
But in Zen we don't use the words "map" and "territory", instead preferring numerous other metaphors to point at this distinction. Let's explore a few of them, because each elucidates a different aspect of the truth pointed at by these duals.
Before Zen was Zen, Nagarjuna formalized this idea that there's a duality between map and territory in the two truths doctrine. He called these two pairs form and emptiness, pointing at the way our minds put our experiences together into forms or objects that are fixed, at least in our minds, yet ultimately reality is empty of these forms or any other kind of inherent distinctions, essences, or ultimate and timeless truths. Everything we know is provisional, taking a skeptical epistemic stance similar to Pyrrhonism.
Form and emptiness have their place in Zen, but more common is to make a distinction between the relative and the absolute. The relative is that which changes, which exists in our minds, which comes and goes. The absolute is that which exists prior to our perception of it; it's the space in which the relative arises. But Zen doesn't stop there. Form is emptiness and emptiness is form, as the Heart Sutra says, and the relative and the absolute can be thought of as dancing reality into existence, simultaneously unified and separate. Dongshan (Japanese: Tozan) explored this in his poem on the Five Ranks, a subtle teaching that can take some effort to penetrate but is worth the effort.
|
91734018-48c6-4804-aea2-c29df2af63a9 | trentmkelly/LessWrong-43k | LessWrong | Naive Set Theory, Halmos
Epistemic status: Unorganized thoughts of mine generated after reading this book -- not a concerted attempt to convey any single particular mathematical insight.
Set theory is the mathematical study of collections. These collections -- sets -- can be either the collection of nothing (the empty set), or collections... of other collections. Countries, for example, are sets of people, making the United Nations a set of sets of people.
Somewhat surprisingly, just about all the rest of mathematics can be framed as the study of sets (of sets of sets of sets... ). Knowing some set theory lets you translate the myriad objects and claims that you encounter elsewhere in the various areas of math into more set theory, putting math on one common intellectual foundation.
Contents and Notes
1. The Axiom of Extension
2. The Axiom of Specification
> All the basic principles of set theory, except only the axiom of extension, are designed to make new sets out of old ones (p. 4).
Reading this book, I picked up the idea that sets with arbitrary properties aren't simply assumed to exist in math. As Halmos puts it,
> It is impossible, especially in mathematics, to get something for nothing. To specify a set, it is not enough to pronounce some magic words (which may form a sentence such as "x∉x"); it is necessary also to have at hand a set to whose elements the magic words apply (p. 7).
Another related "correctly orienting towards mathematics" maxim along these lines is: "assume nothing, prove nothing." You don't do math without any premises, because nothing interesting follows from no premises at all. Nor do you freely assume arbitrary premises and see what follows deductively from them. Sets in set theory are not generally assumed to exist -- with the sole exception of the axiom of infinity, our existential starting point. Sets in set theory are instead constructed in steps, from the axiom of infinity, using the existential conditionals of the remaining axioms of set theory.
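To make this concrete, the axiom of specification can be written out as follows (a standard formulation, not a quote from the book):

```latex
% Axiom of specification: for any set A and any condition \varphi,
% the subset of A whose elements satisfy \varphi exists:
\forall A \;\exists B \;\forall x \;\bigl( x \in B \iff (x \in A \wedge \varphi(x)) \bigr)
```

Taking φ(x) to be x ∉ x then gives, for any set A, a set B that can belong neither to itself nor to A without contradiction — so no set contains everything, and every new set must be carved out of one you already have.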
|
cfa96c51-0653-4cce-8fd6-0ff4ad963b05 | trentmkelly/LessWrong-43k | LessWrong | What are examples of the opposite of perverse incentives?
I think a lot about the importance of incentives. There are numerous historical examples of perverse incentives but what are some examples of times where people got the incentives right? |
60a56d60-bc85-4ecc-b9f6-181e8224339a | StampyAI/alignment-research-dataset/blogs | Blogs | Effective Altruism and Flow-Through Effects
Last month, MIRI research fellow Carl Shulman[1](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/#footnote_0_10484 "Carl was a MIRI research fellow at the time of the conversation, but left MIRI at the end of August 2013 to study computer science.") participated in a recorded debate/conversation about effective altruism and flow-through effects. This issue is highly relevant to MIRI’s mission, since MIRI focuses on activities that are intended to produce altruistic value [via their flow-through effects on the invention of AGI](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/).
The conversation ([mp3](https://docs.google.com/file/d/0B8_48dde-9C3VmlZSVp4YVVEckE/edit), [transcript](http://www.jefftk.com/p/flow-through-effects-conversation)) included:
* Nick Beckstead, research fellow at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University
* Paul Christiano, UC Berkeley grad student and blogger at [Rational Altruist](http://rationalaltruist.com/)
* Holden Karnofsky, co-founder of [GiveWell](http://www.givewell.org/)
* Carl Shulman, MIRI research fellow
* Rob Wiblin, executive director at the [Center for Effective Altruism](http://home.centreforeffectivealtruism.org/)
Recommended background reading includes:
* Holden Karnofsky’s essay “[Flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/)“
* Paul Christiano’s essay “[My outlook](http://rationalaltruist.com/2013/06/03/my-outlook/)“
* MIRI’s [interview with Nick Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/) on the importance of the far future
To summarize the conversation very briefly: All participants seemed to agree that more research on [flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/) would be high value. However, there’s a risk that such research isn’t highly tractable. For now, GiveWell will focus on other projects that seem more tractable. Rob Wiblin might try to organize some research on flow-through effects, to learn how tractable it is.
---
1. Carl was a MIRI research fellow at the time of the conversation, but [left MIRI](http://intelligence.org/2013/09/10/september-newsletter/) at the end of August 2013 to study computer science.
The post [Effective Altruism and Flow-Through Effects](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
a6cebc66-b2ae-43c2-8e5f-9838e3d8c51e | trentmkelly/LessWrong-43k | LessWrong | Is Constructor Theory a useful tool for AI alignment?
This is an attempt to make sense of some vague ideas and explain some intuitions I've had recently. Written in the spirit of How To Write Quickly While Maintaining Epistemic Rigor .
> "Any headline that ends in a question mark can be answered by the word 'no'." - Betteridge's Law of Headlines
Introduction
Recently, I have been thinking about constructor theory - a broad philosophical approach to providing explanations in physics. It was developed by David Deutsch, who claims that it has the potential to solve problems in physics that are hard to solve in other frameworks. When I read Deutsch's paper, I had the intuition that some of the problems in physics that constructor theory was meant to solve were analogous to problems associated with AI alignment. I thought that if constructor theory was able to solve these problems in physics, it might be helpful in the field of AI alignment.
I have been thinking about this on and off for the last few months and now believe that constructor theory probably does not have much to offer the field of AI alignment. I came to this conclusion broadly for two reasons. Firstly, constructor theory has few (zero?) concrete tools that could be imported to studying AI. Despite a lot of media coverage, constructor theory has only had limited success in solving the problems in physics that it set out to solve. It has produced a couple of interesting results, but they are very much results about physics, that (as far as I can see) cannot be easily imported for studying AI. Secondly, and maybe more importantly, the analogies between constructor theory and AI alignment break down in some important ways, once one starts digging a bit deeper.
In my estimation, this is a bad alignment idea. John Wentworth writes 'it takes most newcomers ~5 years to go from their initial Bad Idea to actually working on something plausibly useful' and suggests that 'progress along the Path of Alignment Maturity can be accelerated dramatically by actively look |