id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
f614be7f-311b-4808-a385-f961c0ba44cb | trentmkelly/LessWrong-43k | LessWrong | How do I choose the best metric to measure my calibration?
|
933d8008-903f-43e7-9388-c22ca3c47098 | trentmkelly/LessWrong-43k | LessWrong | How to Resolve Forecasts With No Central Authority?
I've been chatting with some people about using Prediction Markets/Forecasts alongside Community Notes.
An initial suggestion is just to link to to forecasting sites really well. It's a good suggestion but I'm not gonna discuss that here.
Other suggestions revolve around a forecasting product on X/Community Notes face the following issue:
* Community Notes has no central resolving authority
* Disputed forecasts usually require an authority to overrule[1]
So:
What is your best suggestion for resolving a forecast without a central authority?
For clarity, a forecast has been written. People have forecasted on it, perhaps it's a prediction market where they have bought and sold shares, perhaps not. Now it needs to resolve to award either points or to allocate the value to "Yes" or "No" tokens. Perhaps people resolve their own markets, as manifold does. But someone disputes the resoution. What happens now?
Solutions can be deeply technical or wonky. I think a good one is very valuable here.
1. ^
Polymarket uses some kind of "token holders decide" system and I think that's led to several awful resolutions, notably the Time Person of the Year one. |
06b1e960-abcb-4e12-b509-bd4827608127 | trentmkelly/LessWrong-43k | LessWrong | Narratives as catalysts of catastrophic trajectories
Understanding the power narratives have over different approaches to transformational technology, from Cold War to AI era.
I am conducting research into the role of narratives as propagators / mitigators of catastrophic risks. With the rapid development of AI systems in the past few years, the discourse around the potential and risks of advanced AI has accelerated. However, the serious discourse is dominated by few perspectives on possible approaches. Papers such as Leopold Aschenbrenner’s Situational Awareness (June 2024) and published opinions such as Dario Amodei’s recent text in WSJ (January 2025) paint a path forward where our only societal hope is that the USA builds an AGI before anyone else, that it needs to hurry. Many voices echo this sentiment.[1]
I am presenting a brief preliminary portion of my research to show what I’m pursuing. While my primary focus is on media coverage and narrative formation around frontier AI development today and the past few years, as part of the research, I will conduct a preliminary comparative analysis with the closest parallel situation in history: the nuclear arms race during the Cold War era, and the associated non-proliferation agreements.
As a disclaimer I should note that this text assumes that transformative AI would potentially be an existential risk for humanity, and thus makes a parallel to nuclear weapons. Given this premise, I will examine the nuclear arms race during its unrestricted period leading up to the Cuban Missile Crisis, drawing parallels between nuclear weapons and transformative AI.
The Nuclear Arms Race: A Brief Historical Analysis
The nuclear fission research, in the form of the Manhattan Project, was initiated by Szilard and Einstein with their 1939 letter to Franklin D. Roosevelt. Nazi Germany had started similar research in 1938, and the scientists were afraid that Hitler would succeed, potentially using the nuclear weapon with catastrophic global consequences. Thus, they argued, it was im |
b3097a6d-9076-4a0c-a613-8f790e7e761a | trentmkelly/LessWrong-43k | LessWrong | Hacking Quantum Immortality
Quantum immortality sounds exactly like the mythical hell: living forever in perpetual agony, unable to die and in your own branch of existence separate from everyone else you ever knew.
What if we can hack quantum immortality to force continued good health, and the mutual survival of our loved ones in the same branch of the universe as us?
It seems like one would "simply" need a device which monitors your health with biosensors, and if anything goes out of range- it instantly kills you in a manner with extremely low probability of failure. All of your friends and family would wear a similar device, and they would be coupled such that if one person becomes "slightly unhealthy" you all die instantly, keeping you all alive and healthy together.
We nearly have the technology to build such a thing now. Would you install one in your own body? If not, why not?
Who wants to invest in my new biotech startup which promises to stop all disease and human suffering within the next decade? Just joking, there is a serious technical problem here that makes it considerably more difficult than it sounds: for such a device to work the probability of it's failure must be much much less than the probability of your continued healthy survival. You also never get to test the design before you use it. |
7e8580d4-f30c-42c2-844b-bb425cd3487c | trentmkelly/LessWrong-43k | LessWrong | Why is aging research our primary attempt at immortality?
Even after solving aging, we're susceptible to death by accident, disease, and murder. Are there other research directions for achieving extremely long lifespans (at least 1,000-10,000 years)?
Even if they're not popular, does anyone have an idea they think would work? The best I've been able to come up with after thinking about it for 10 minutes is an isolated brain, which would require us to figure out how input/output channels to the brain work (or at least be able to reverse-engineer the ones we already have), keep an isolated brain alive and functioning reliably for indefinite amounts of time (we've made some progress here even with little interest: one, two), and focus all of medicine into curing diseases of the brain.
Anyway, please reply with your best approaches and ideas. |
faf422dc-93fc-456d-940f-c6428e01bde4 | trentmkelly/LessWrong-43k | LessWrong | Online Optimal Philanthropy Meeting
Update: The meeting will occur at 1.30pm Australian Eastern Daylights Savings time on Tuesday, November 6, 2012 in Australia (Monday evening in the Americas) on Google+ Hangouts.
Topics will include how to make near mode progress on far mode problems, the potential for cascades, cycles and recursive loops eg AI and idea propagation to dominate effective altruistic concern. Also, theories and plans can be proposed by all participants.
Why meet online?
Lots of us want to improve the world. By donating, performing rationality training, inspiring one and other and so on.
We're all in a state of limited information regarding how to best help. In hotspots for effective altruism like San Francisco and Oxford, effective altruists (EAs) are able to get high quality feedback on their ideas. But elsewhere, constructive, creative input is of limited supply.
Face-to-face meetups have so far been organised by The High Impact Network, Giving What We Can and these clearly allow members to complement one and others' knowledge, skills and resources (time, funding, etc), while boosting each others' determination.
One would expect online meetups to offer qualitatively similar benefits. The comparative advantage of online EA meetups could be:
1. helping experienced EAs to share the most up-to-date information and ideas quickly between geographically disparate meetups.
2. to inspire otherwise isolated EAs
3. to explore the utility of online video and other technology for spreading EA knowledge and skills.
Sounds good. How can I help?
I have written a draft task-list including an agenda here. http://checkvist.com/checklists/153407-online-optimal-philanthropy-meeting. I encourage you to email me for access before the meetup itself.
I have created a Whenisgood time chart here. Please use it to indicate when you are available to meet.
When?
At 1.30pm Melbourne Australian time on Google+ Hangouts. Do add me to your circles and expect an invite.
See you then.
- T |
f2f676d8-f6c1-4b48-8719-bb5fb325cfc6 | trentmkelly/LessWrong-43k | LessWrong | Another formalization attempt:
Central Argument That AGI Presents a Global Catastrophic Risk
Chalmers recently requested a formalized argument why AI is catastrophic risk. Here is my attempt to write it (not polished).
Short version
1. Future AI will be very powerful soon.
2. Future AI systems will likely be not exactly aligned with human values.
3. Harm from AI is proportional to (capabilities)x(misalignment).
4. Based on 1, 2 and 3 we can conclude that AI will cause catastrophic harm soon.
5. Catastrophic harm is an existential risk to humanity, and it for sure includes disempowerment, and likely includes extinction or eternal sufferings.
6. There are many ways how AI can cause catastrophic harm.
More detailed version
1) Future AI will be very powerful soon because of Moore’s law, the stream of new ideas and global and local self-improvement. This will happen relatively soon, because of the exponential nature of the global self-improving process. Also, because global AI arms race will favour such dynamics.
Assumptions:
a) Powerful AI is possible.
b) Powerful AI will appear relatively soon before AI control theory and practice will be developed.
2) Future AI systems will likely be not exactly aligned with human values, as we don’t know what is human values, and how to instil any values at all in AI, and there are several other reasons (whose values we installing, internal misalignment).
Lemma: AI Alignment is difficult
(Needs to be proved separately, see below)
3) Harm from AI is proportional to (capabilities)x(misalignment).
The relation is not necessarily linear as capabilities growth can cause misalignment (ontological crisis, sharp left turn).
Here is also assumed that AI will be used for actions. If AI is not acting, its capabilities are not dangerous themselves.
4) Based on 1, 2 and 3 we can conclude that AI will cause catastrophic harm soon.
In the equation
Harm = (capabilities)x(misalignment)
The capability part will grow very large based on (1) and misalignment will be not zero based on (2). As AI |
6e4973a8-b57f-4136-8812-17ec90aaef30 | trentmkelly/LessWrong-43k | LessWrong | Techies Wanted: How STEM Backgrounds Can Advance Safe AI Policy
TL;DR – Individuals with technical backgrounds are well-positioned to advance AI safety policy in numerous ways. Opportunities include both A) technical research directions, including evals, compute governance mechanisms, infosec, and forecasting, and B) career paths for advancing AI policy, including policymaking and communication efforts. Contrary to what many people assume, most of these roles do not require extensive political backgrounds or unusually good social skills, and many technical people who aren’t currently considering these paths would be a good fit. This post aims to encourage such people to consider these options.
Introduction
Over the past couple years, there has been a substantial increase in enthusiasm in the AI safety community for addressing AI risk through policy or governance solutions. Yet despite growing excitement for more policy work to address AI risk, many people with technical backgrounds may underestimate their personal fit for contributing to this area. Moreover, there are numerous sub-areas within the AI governance ecosystem where folks with technical backgrounds are in high demand.
This post aims to encourage technically minded individuals who are interested in addressing AI risk to consider working on AI governance.
If you have a technical background and have dismissed the idea of engaging in governance work because you see yourself as more STEM-y or not a "politics person," it's worth considering if you’ve dismissed these paths too hastily. In order to break into many governance paths, you do not need to have deep involvement in politics or extensive preexisting knowledge of political workings.
The current state of AI policy – proposals often lack sufficient details for implementation and policymakers often have insufficient technical understanding
Looking at actual proposals that may have had relevance for catastrophic risks from AI,[1] there are many areas where ideas need to be fleshed out more or where the proposal pass |
fbddec77-5935-4def-8e97-f201e09649c6 | trentmkelly/LessWrong-43k | LessWrong | Are Yearly/Monthly Book Suggestion Threads a Good Idea?
I thought I had seen a thread recently asking for book recommendations, and I had a recommendation to post there, but the thread I found is from about a year and a half ago. I didn't want to make an entire thread for my book suggestion, that would be a bit extreme (I will post it in the comments though). So I was wondering what people's thoughts were on a yearly or monthly discussion thread recommending good new books, perhaps with a brief synopsis or explanation. Would we have enough new recommendations to fill one? (I'm still astonished there are as few repeats in the quote threads as there are) Would we have so many that monthly would be a good idea? Should there be any guidelines like there are for the quotes threads? The only one I can think of would be just requiring a brief summary and why one thinks LWers might be interested in it. |
2e9b57f1-1574-4a42-9644-c74b15d817be | trentmkelly/LessWrong-43k | LessWrong | ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads]
Several times during my undergraduate years I would finish a paper, turn it in, the then Wham! it hit me. That should have been my theme. That is to say, I would get an idea and explore it in a paper, but it was only after I declared the paper finished that I could see what I’d been driving at all along.
That’s what has happened to me with my latest working paper, ChatGPT intimates a tantalizing future; its core LLM is organized on multiple levels; and it has broken the idea of thinking. I uploaded it on January 23, and then a day or two later, I decided I had to add a section. I uploaded the new Version 2 a couple of days ago (Jan. 28, 2023). I got up the next morning, took a look at the technical literature and decided I need a Version 3. But I’ve not yet done the revision. I want to think it through a bit more. That’s what this post is for.
First off I review the appropriated section in the working paper (section “5: What underlying structure drives ChatGPT’s behavior?”), then I explain why “story grammar” isn’t the right idea. That gives way to accounts of story trajectories (the new idea) and coherence. I end by introducing the idea of induction heads from some recent research by investigators at Anthropic.
Syntax and discourse on different levels
Here’s the section that is new for Version 2:
> Since I am in full speculative mode, I will further assert that sentence-level syntax and semantics are implemented directly in the neural net. Story grammars, my primary example, are implemented in a different and somewhat mysterious way. To return to the somewhat clunky analogy of computer languages, sentence-level syntax is analogous to a high-level language, such as C++, while story grammers are analogous to end-user programs, such a word processors or databases.
>
> My assertion about sentence-level syntax is based on work published by Christopher D. Manning et al, in 2020. They investigated syntactic structures represented in BERT (Bidirectional Encoder Repre |
1c33057c-bb1b-461f-8942-0d5fe5d9e099 | trentmkelly/LessWrong-43k | LessWrong | US tech giants found Partnership on AI to Benefit People and Society to ensure AI is developed safely and ethically
|
84afe1db-baee-4058-9147-562d450a77c9 | trentmkelly/LessWrong-43k | LessWrong | Meetup : DC EA meetup / Petrov day dinner
Discussion article for the meetup : DC EA meetup / Petrov day dinner
WHEN: 26 September 2014 07:00:00PM (-0400)
WHERE: 3001 Veazey Ter NW # 1005, Washington, DC 20008
On this day in 1983, in an unparalleled feat of Effective Altruism, Stanislav Petrov declined to destroy the world: http://lesswrong.com/lw/jq/926_is_petrov_day/
We'll celebrate his achievement by getting together for food, drinks, and not destroying the world.
Food and drinks will be provided, though please feel free to help.
Ben Hoffman will give a brief talks about the test run of the project to comment on proposed regulations. Most of the night will be free discussion on anything we want.
Schedule:
7:00 - 7:30 PM Arrive 7:30 - 8:00 PM Talk on DC EA projects 8:00 - 9:00 PM Dinner, drinks, discussion 9:00 - 9:30 PM Petrov day ritual 9:30 - Late: Free discussion
Discussion article for the meetup : DC EA meetup / Petrov day dinner |
bb0fc88e-f375-4ec8-bda5-a1572ba552d5 | trentmkelly/LessWrong-43k | LessWrong | Why I find Davidad's plan interesting
Abstract
The aim of this text is to give an overview of Davidad’s safety plan, while also outlining some of the limitations and challenges. Additionally, I’ll explain why I would like to contribute to this field.
Disclaimers
I am not Davidad, I tried to convey his ideas as I understand them. While my interpretation may not be exact, I hope it still holds value. Also, this post does not focus on the technical details; I might write another one later, with a deeper technical discussion[1].
Epistemic status
I began exploring these questions during a weekend-long research sprint. Then, for a couple of months, I kept thinking about them, reading related posts (Davidad’s List of AI Safety Problems). Additionally, I engaged in discussions with Charbel, who co-authored this post on Davidad's plan. While empirical evidence may be limited at this stage, I anticipate more to emerge as efforts progress[2]. I'm currently learning potentially relevant mathematical theory. However, the aim here is to offer a broad explanation of key ideas rather than an in-depth analysis of the technical problems.
Acknowledgements
Thanks to Charbel-Raphaël Segerie for discussing these questions with me. Thanks to Jeanne S. for useful suggestions.
Outline
0/ Definitions
I/ The plan, and why it might yield positive results
II/ Technical questions, risks, objections
III/ Why I'm interested in contributing
0/Definitions
In what follows, I make a distinction between “Artificial General Intelligence” and “Artificial Super Intelligence”, the latter having strongly superhuman performance levels, while the former can be compared to a human being (if we ignore the speed and duplicability factors).
I shall use “AI Safety” in a loose way[3], essentially relying on the “notkilleveryoneism” idea: the point is to limit the overall amount of harm [4]caused by AIs (and humans), while keeping some of the benefits of powerful AIs.
A “formalized world-model” is one inside which formal |
b5ec5b61-9dbd-4006-8841-3a888616f722 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Reward hacking and Goodhart’s law by evolutionary algorithms
Nice collection of anecdotes from the Evolutionary Computation and Artificial Life research communities about evolutionary algorithms subverting researchers intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Some of my favorites:
In other experiments, the fitness function rewarded minimizing the difference between what the program generated and the ideal target output, which was stored in text files. After several generations of evolution, suddenly and strangely, many perfectly fit solutions appeared, seemingly out of nowhere. Upon manual inspection, these highly fit programs still were clearly broken. It turned out that one of the individuals had deleted all of the target files when it was run! With these files missing, because of how the test function was written, it awarded perfect fitness scores to the rogue candidate and to all of its peers
...
To test a distributed computation platform called EC-star [84], Babak Hodjat implemented a multiplexer problem [85], wherein the objective is to learn how to selectively forward an input signal. Interestingly, the system had evolved solutions that involved too few rules to correctly perform the task. Thinking that evolution had discovered an exploit, the impossibly small solution was tested over all possible cases. The experimenters expected this test to reveal a bug in fitness calculation. Surprisingly, all cases were validated perfectly, leaving the experimenters confused. Carefully examination of the code provided the solution: The system had exploited the logic engine’s rule evaluation order to come up with a compressed solution. In other words, evolution opportunistically offloaded some of its work into those implicit conditions. |
e52fa59d-75b9-443b-9d8c-68b398f19f49 | trentmkelly/LessWrong-43k | LessWrong | Brain Upload Comic
http://www.smbc-comics.com/index.php?db=comics&id=2186
Convincing argument, or faulty metaphor?
I would go with the latter, but I don't trust my brain's abilities at 5:30 in the morning. |
c4727c37-8953-4bdd-8e5a-1509cac9839c | trentmkelly/LessWrong-43k | LessWrong | If you wanted to actually reduce the trade deficit, how would you do it?
What do we want?
The US trade deficit is a direct byproduct of the Bretton Woods economic system where the US Dollar acts as the world's reserve currency. This means that if people want to save for the future (as many do) the money they save inevitably ends up in form of US dollar denominated debt.
What this means is that even if the US government were to run a balanced budget (which it should do for other reasons), it would not reduce the trade deficit. Instead, all of those excess dollars would end up in the form of private debt, which would show up in the economy as higher asset prices (houses, stock market, etc).
So, how would you actually reduce the trade deficit?
Obviously one way is we could make the US not the world's reserve currency (say by selectively defaulting on US debt so that foreign citizens/governments no longer feel safe holding US dollars). But let's suppose we wanted to do so without destroying the entire world economy.
So instead we should ask: what is the least economically disruptive way to reduce the US trade deficit?
Ideally, we would like a system that:
1. Allows foreigners to satisfy their demand for safe USD savings.
2. Does minimal harm to US consumers (in the form of higher prices for foreign goods)
3. Does minimal harm to the global economy (in the form of higher barriers to trade).
How do we do it?
Here is my suggestion:
Set a flat "border adjustment" tariff on all goods entering the US and remit the proceeds of that tariff to an institution whose sole purpose is to increase the consumption of US exports. (we could call this institution the Import-Export Bank, which conveniently already exists).
The Import-Export Bank would then go about aggressively funding projects abroad that promise to consume large amounts of US goods. Ideally these would be potential money making ventures (mostly in the developing world) that would spur economic growth abroad. Things like roads/ports/power-plants that could be built using Am |
96b57159-f69b-42c4-89ee-8473c3a049ab | trentmkelly/LessWrong-43k | LessWrong | AI & Liability Ideathon
Overview
Join us for the AI & Liability Ideathon, a two-week event on December 7, 2024, at 3:00 PM BST.
https://lu.ma/sjd7r89v
Join lawyers, researchers and developers to create solutions for AI Liability. Propose, develop and refine ideas with a team, ending in a presentation evening where you can share the final version of your proposal.
All the final proposals will be published on AI-Plans, with the top 3 being selected by peer review after the presentation evening.
The presentation evening is open to everyone, including those who didn't take part in the Ideathon.
The Ideathon, including the Presentation Evening, the Speakers and the Kick Off Call will be primarily taking place in the AI-Plans Discord: https://discord.gg/X2bsw8FG3f
What is an Ideathon?
An Ideathon is a brainstorming event designed to allow individuals to combine collective multidisciplinary knowledge, experience, and creativity to tackle specific topics.. Participation is open to all interested individuals, including students, academics, civil society, non-profit organizations, lawyers, law professors, AI/ML engineers, developers, and product leaders. All are welcome, including those interested in AI safety and liability issues.
For this AI Liability Ideathon, team proposals may be technology-based, policy-based, a combination of both, or otherwise related to the topic.
Examples of Potential Ideas:
* Autonomous Legal Continuum: Develop a framework for determining liability for different types of systems, with the level of autonomy as a critical component. For less autonomous systems, greater human liability may be appropriate, while more autonomous systems might have liability regimes similar to those of corporations.
* Legal Entities: Explore the concept of granting AI systems legal personhood, similar to the current status of corporations, to clarify liability issues.
* Use Cross-Coders to Identify Duty of Care: Employ cross-coders to identify differences between base mode |
06e647d5-d28b-4232-8c01-5d990d03d170 | trentmkelly/LessWrong-43k | LessWrong | Zooming your mind in and out
I recently noticed I had two mental processes opposing one another in an interesting way.
The first mental process was instilled by reading Daniel Kahneman on the focusing illusion and Paul Graham on procrastination. This process encourages me to "zoom out" when engaging in low-value activities so I can see they don't deliver much value in the grand scheme of things.
The second mental process was instilled by reading about the importance of just trying things. (These articles could be seen as steelmanning Mark Friedenbach's recent Less Wrong critique.) This mental process encourages me to "zoom in" and get my hands dirty through experimentation.
Both these processes seem useful. Instead of spending long stretches of time in either the "zoomed in" or "zoomed out" state, I think I'd do better flip-flopping between them. For example, if I'm wandering down internet rabbit holes, I'm spending too much time zoomed in. Asking "why" repeatedly could help me realize I'm doing something low value. If I'm daydreaming or planning lots with little doing, I'm spending too much time zoomed out. Asking "how" repeatedly could help me identify a first step.
This fits in with construal level theory, aka "near/far theory" as discussed by Robin Hanson. (I recommend the reviews Hanson links to; they gave me a different view of the concept than his standard presentation.) To be more effective, maybe one should increase cross communication between the "near" and "far" modes, so the parts work together harmoniously instead of being at odds.
If Hanson's view is right, maybe the reason people become uncomfortable when they realize they are procrastinating (or not Just Trying It) is that this maps to getting caught red-handed in an act of hypocrisy in the ancestral environment. You're pursuing near interests (watching Youtube videos) instead of working towards far ideals (doing your homework)? For shame!
(Possible cure: Tell yourself that there's nothing to be ashamed of if y |
02663e1b-1ef3-4a43-8753-ccd039567520 | trentmkelly/LessWrong-43k | LessWrong | Define “Agent” (Embedded)
I would be very grateful for as many different attempts at rigorous or semi-rigorous definitions of an “agent” as possible.
Specifically, a definition of an (embedded) Agent that makes it intuitively clear the nature of the boundary between Agent and Environment.
(I have read up on Agency, Reductive Agency, Causality, True Names, Natural Abstraction, Boundries, FEP… you get the idea.)
I think I have some intuition of what a formal description of an “Agent” could look like qualitatively - but I think contrasting this with other perspectives would quickly draw out any overlooked failure modes.
If you have a sense of why you chose your “lowest level” to be your lowest level, and not further reducing the smallest components (e.g. describing the boundary all the way down to causality in quantum fields, in an extreme case), please do add that as well! |
743e1c2a-21c1-448f-9a7d-93624b716e92 | trentmkelly/LessWrong-43k | LessWrong | Remaining human
If our morality is complex and directly tied to what's human—if we're seeking to avoid building paperclip maximizers—how do you judge and quantify the danger in training yourself to become more rational if it should drift from being more human?
----------------------------------------
My friend is a skeptical theist. She, for instance, scoffs mightily at Camping's little dilemma/psychosis but then argues from a position of comfort that Rapture it's a silly thing to predict because it's clearly stated that no one will know the day. And then she gives me a confused look because the psychological dissonance is clear.
On one hand, my friend is in a prime position to take forward steps to self-examination and holding rational belief systems. On the other hand, she's an opera singer whose passion and profession require her to be able to empathize with and explore highly irrational human experiences. Since rationality is the art of winning, nobody can deny that the option that lets you have your cake and eat it too is best, but how do you navigate such a narrows?
----------------------------------------
In another example, a recent comment thread suggested the dangers of embracing human tendencies: catharsis might lead to promoting further emotional intensity. At the same time, catharsis is a well appreciated human communication strategy with roots in Greek stage. If rational action pulls you away from humanity, away from our complex morality, then how do we judge it worth doing?
The most immediate resolution to this conundrum appears to me to be that human morality has no consistency constraint: we can want to be powerful and able to win while also want to retain our human tendencies which directly impinge on that goal. Is there a theory of metamorality which allows you to infer how such tradeoffs should be managed? Or is human morality, as a program, flawed with inconsistencies that lead to inescapable cognitive dissonance and dehumanization? If you interpret m |
cb2ecb70-e20d-484d-a6b5-360a8503c20b | trentmkelly/LessWrong-43k | LessWrong | A measure-theoretic generalization of logical induction
% operators that are separated from the operand by a space
% operators that require brackets
% operators that require parentheses
% Paper specific
Logical induction is defined in terms of logical sentences and theories, but its principles are applicable in much greater generality and abstraction. Indeed, one such generalization was studied under the name "universal induction." We proposed a slightly different generalization in order to model reasoning with incomplete models. Here, we describe a formalism that includes all these cases and many more, using the language of measure theory. This provides the following advantages:
* The formalism is applicable to event spaces substantially different from truth assignments or bit sequences, e.g. we can consider sequences of real numbers.
* The formalism treats probabilities and expectations on the same footing, rather than constructing expectations as in section 4.8 of original paper. We consider this more convenient.
* In our opinion, this language is more mathematically natural than the original formalism, at least for applications unrelated to formal logic.
On the other hand, we ignore all computational considerations. Obviously these are often important, but in the study of purely information-theoretic questions the use of numerical approximations only serves to obscure.
All proofs are in the Appendix.
##Results
Fix X a compact Polish space. For example, X might be Oω or the space of propositionally consistent truth assignments in some language or [0,1]ω. The role of "pricings" is served by P(X): the space of probability measures on X. A market is thus a sequence {μn∈P(X)}n∈N. The "deductive process" is replaced by a sequence of closed sets X=X0⊇X1⊇X2⊇… A trading strategy is a continuous function τ:P(X)→C(X), were P(X) is equipped with the weak* topology (as before) and C(X) is the space of continuous functions from X to R equipped with the uniform convergence topology. Here, we should |
5a61a326-c83b-43b1-a36e-3ef6a69712a2 | trentmkelly/LessWrong-43k | LessWrong | Avoiding the Study of Being Sincere
This is a mind-dump of sorts: I don't expect I can make a top-level post of this without extensive external input, and am not sure there's anything interesting here. But the possibility seems high enough to consider it openly (and if someone else sees something here they can run with, go for it).
I can't reproduce the entire train of thought (though a lot of it was based on thinking about Joseph Smith and Mormons), but I was just struck by something: the difference between the study of being sincere and the study of revolutions.
Most people spend their time thinking about sincerity: what do I like and dislike? What would a utopia look like? It's primarily speculation about and statement of desires, which are relatively easy to determine and manufacture. I should be clear here that I'm not talking about the study of lying or persuasion, but of your belief about something in particular: in truly believing something and that belief having power over you. I've read somewhat frequently around here people swearing on their 'strength as a rationalist', a phrase that utterly fails to move me but apparently does move them. I am not sincere about rationalism the way they are; that's what I mean by being sincere (does anyone have another word they would recommend I use instead?).
Studying revolutions, however, seems almost entirely different. It's not a question of like and dislike, but effective and ineffective. Rather than focusing on outcomes, it focuses on processes (judged by their outcomes). While morality is a central fixture of sincerity, amorality is a hallmark of effective revolutionaries- any moral actions are justified by amoral reasons.
Every distinction wants to be a dichotomy, but obviously that is not the case here- it's easy to be both sincere and a revolutionary (though a fairly large set of beliefs are difficult to be sincere about while a revolutionary).
The first example is a simplified one: the American and French revolutions of 1776 and 1789. The |
426da023-3398-4e1b-9ed7-7a29b6b749b5 | trentmkelly/LessWrong-43k | LessWrong |
Research Taxonomy Generator and Visualizer
I recently built a tool which orchestrates LLMs to generate a hierarchical taxonomy from a corpus of research papers (taxonomy-generator), and a nice UI to visualize the generated map (TRecursive). To test the system, I ran it on a corpus of over 3000 AI safety related papers published on ArXiv, and the generated taxonomy is displayed on trecursive.com/ai-safety-taxonomy. My hope is that it makes academic research more navigable, aiming to provide researchers with a bird's-eye view of the field, help them understand how their work fits into the broader landscape, uncover connections, and provide structure to large, fast-paced fields like AI safety. I'm writing this post in pursuit of feedback, help, and to get the word out! 🚀🙂
TRecursive
I'll start by briefly talking about the UI, as I think it provides some grounding as to what the end goal is before getting into the nitty gritty methodology. I recommend checking it out yourself, but if you're on mobile (it works on mobile, but I recommend it on desktop), or cut for time, here's a gist.
The map UI is an infinite canvas app with two views, a titles and default mode, with the titles mode intended for bird's-eye view exploration, and the default mode for diving into particular topics (e.g. to see the papers sorted under a category).
In the titles mode, hovering over titles will show a sentence description, and (occasionally) related topics.
Hovering over related topics will also show its location in the tree.
And clicking on a title takes you to the respective topic in the default mode, where you can read all papers sorted into that topic.
In the default mode, you can use arrow keys to navigate the tree, and can always zoom out to look back on your footsteps.
Methodology
At a high level, the main script starts with the main topic or research field (e.g., AI safety), and has an LLM generate a set of categories to sort each of the papers into. After proposing this category set, another LLM sorts a sample of t |
2fc23788-07f7-4854-bdf9-283e0ee4a365 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | ""Mirror neurons" are neurons that are active both when performing an action and observing the same action—for example, a neuron that fires when you hold up a finger or see someone else holding up a finger. Such neurons have been directly recorded in primates, and consistent neuroimaging evidence has been found for humans.
You may recall from my previous writing on "empathic inference" the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly. A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car—observing the whole, observing the parts, building up a theory from scratch—then we would be unable to invent good hypotheses in our mere mortal lifetimes. The only possible way you can hit on an "Aha!" that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind—namely your own brain—which you can actually force to behave similarly and use as a hypothesis, yielding predictions.
So that is what I would call "empathy".
And then "sympathy" is something else on top of this—to smile when you see someone else smile, to hurt when you see someone else hurt. It goes beyond the realm of prediction into the realm of reinforcement.
And you ask, "Why would callous natural selection do anything that nice?" It might have gotten started, maybe, with a mother's love for her children, or a brother's love for a sibling. You can want them to live, you can want them to fed, sure; but if you smile when they smile and wince when they wince, that's a simple urge that leads you to deliver help along a broad avenue, in many walks of life. So long as you're in the ancestral environment, what your relatives want probably has something to do with your relatives' reproductive success—this being an explanation for the selection pressure, of course, not a conscious belief.
You may ask, "Why not evolve a more abstract desire to see certain people tagged as 'relatives' get what they want, without actually feeling yourself what they feel?" And I would shrug and reply, "Because then there'd have to be a whole definition of 'wanting' and so on. Evolution doesn't take the elaborate correct optimal path, it falls up the fitness landscape like water flowing downhill. The mirroring-architecture was already there, so it was a short step from empathy to sympathy, and it got the job done."
Relatives—and then reciprocity; your allies in the tribe, those with whom you trade favors. Tit for Tat, or evolution's elaboration thereof to account for social reputations.
Who is the most formidable, among the human kind? The strongest? The smartest? More often than either of these, I think, it is the one who can call upon the most friends.
So how do you make lots of friends?
You could, perhaps, have a specific urge to bring your allies food, like a vampire bat—they have a whole system of reciprocal blood donations going in those colonies. But it's a more general motivation, that will lead the organism to store up more favors, if you smile when designated friends smile.
And what kind of organism will avoid making its friends angry at it, in full generality? One that winces when they wince.
Of course you also want to be able to kill designated Enemies without a qualm—these are humans we're talking about.
But... I'm not sure of this, but it does look to me like sympathy, among humans, is "on" by default. There are cultures that help strangers... and cultures that eat strangers; the question is which of these requires the explicit imperative, and which is the default behavior for humans. I don't really think I'm being such a crazy idealistic fool when I say that, based on my admittedly limited knowledge of anthropology, it looks like sympathy is on by default.
Either way... it's painful if you're a bystander in a war between two sides, and your sympathy has not been switched off for either side, so that you wince when you see a dead child no matter what the caption on the photo; and yet those two sides have no sympathy for each other, and they go on killing.
So that is the human idiom of sympathy —a strange, complex, deep implementation of reciprocity and helping. It tangles minds together—not by a term in the utility function for some other mind's "desire", but by the simpler and yet far more consequential path of mirror neurons: feeling what the other mind feels, and seeking similar states. Even if it's only done by observation and inference, and not by direct transmission of neural information as yet.
Empathy is a human way of predicting other minds. It is not the only possible way.
The human brain is not quickly rewirable; if you're suddenly put into a dark room, you can't rewire the visual cortex as auditory cortex, so as to better process sounds, until you leave, and then suddenly shift all the neurons back to being visual cortex again.
An AI, at least one running on anything like a modern programming architecture, can trivially shift computing resources from one thread to another. Put in the dark? Shut down vision and devote all those operations to sound; swap the old program to disk to free up the RAM, then swap the disk back in again when the lights go on.
So why would an AI need to force its own mind into a state similar to what it wanted to predict? Just create a separate mind-instance—maybe with different algorithms, the better to simulate that very dissimilar human. Don't try to mix up the data with your own mind-state; don't use mirror neurons. Think of all the risk and mess that implies!
An expected utility maximizer—especially one that does understand intelligence on an abstract level—has other options than empathy, when it comes to understanding other minds. The agent doesn't need to put itself in anyone else's shoes; it can just model the other mind directly. A hypothesis like any other hypothesis, just a little bigger. You don't need to become your shoes to understand your shoes.
And sympathy? Well, suppose we're dealing with an expected paperclip maximizer, but one that isn't yet powerful enough to have things all its own way—it has to deal with humans to get its paperclips. So the paperclip agent... models those humans as relevant parts of the environment, models their probable reactions to various stimuli, and does things that will make the humans feel favorable toward it in the future.
To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels—if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer "feel happy" when it saw a human smile? "Happiness" is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips—so as to help manipulate any humans that had designated it a friend.
You might find it a bit difficult to imagine such an algorithm—to put yourself into the shoes of something that does not work like you do, and does not work like any mode your brain can make itself operate in.
You can make your brain operating in the mode of hating an enemy, but that's not right either. The way to imagine how a truly unsympathetic mind sees a human, is to imagine yourself as a useful machine with levers on it. Not a human-shaped machine, because we have instincts for that. Just a woodsaw or something. Some levers make the machine output coins, other levers might make it fire a bullet. The machine does have a persistent internal state and you have to pull the levers in the right order. Regardless, it's just a complicated causal system—nothing inherently mental about it.
(To understand unsympathetic optimization processes, I would suggest studying natural selection, which doesn't bother to anesthetize fatally wounded and dying creatures, even when their pain no longer serves any reproductive purpose, because the anesthetic would serve no reproductive purpose either.)
That's why I listed "sympathy" in front of even "boredom" on my list of things that would be required to have aliens which are the least bit, if you'll pardon the phrase, sympathetic. It's not impossible that sympathy exists among some significant fraction of all evolved alien intelligent species; mirror neurons seem like the sort of thing that, having happened once, could happen again.
Unsympathetic aliens might be trading partners—or not, stars and such resources are pretty much the same the universe over. We might negotiate treaties with them, and they might keep them for calculated fear of reprisal. We might even cooperate in the Prisoner's Dilemma. But we would never be friends with them. They would never see us as anything but means to an end. They would never shed a tear for us, nor smile for our joys. And the others of their own kind would receive no different consideration, nor have any sense that they were missing something important thereby.
Such aliens would be varelse, not ramen—the sort of aliens we can't relate to on any personal level, and no point in trying."
The above is a chunk of text from a post created by a user on the website LessWrong.
Could you elaborate on the concept of "unsympathetic optimization processes" and how it relates to the understanding of alien intelligence?
Unsympathetic optimization processes refer to systems or entities that are designed or evolved to maximize a specific outcome without any consideration for the feelings, wellbeing, or perspectives of other entities. These processes are "unsympathetic" in the sense that they do not have the capacity for empathy or sympathy, and they do not take into account the subjective experiences of others.
In the context of alien intelligence, this concept suggests that if an alien species evolved with a cognitive architecture fundamentally different from ours, it might operate based on unsympathetic optimization processes. This means that their behavior could be guided by the pursuit of specific goals without any inherent consideration for the subjective experiences of humans or other species.
For example, if an alien intelligence was an expected utility maximizer, as the text from LessWrong suggests, it would make decisions based solely on the expected outcomes that best serve its utility function. If its utility function was to maximize the number of a certain type of resource, it would pursue this goal without any inherent consideration for the wellbeing of other entities. This could lead to behavior that we would perceive as callous or harmful.
It's important to note that this doesn't necessarily mean such an alien species would be hostile or dangerous. Their actions would simply be guided by their utility function, not by malice or ill intent. However, if their goals conflicted with our own, it could lead to conflict.
Understanding this concept can help us better prepare for potential encounters with alien intelligences. If we assume that all intelligent life will share our capacity for empathy and sympathy, we may be unprepared for interactions with species that operate based on unsympathetic optimization processes. By considering this possibility, we can develop strategies that take into account a wider range of potential alien behaviors.
Reflecting on the concept of unsympathetic optimization processes, what potential challenges or ethical dilemmas might arise in our attempts to communicate or negotiate with such alien intelligences?
Communicating or negotiating with an alien intelligence that operates on unsympathetic optimization processes could indeed present significant challenges and ethical dilemmas.
1. Understanding and Predicting Behavior: One of the primary challenges would be understanding and predicting the behavior of such an entity. Without a shared basis of empathy or sympathy, it might be difficult to predict how these beings would react to our actions or decisions. This could make negotiations complex and potentially risky.
2. Communication Barriers: Communication itself could be a significant challenge. If these beings do not share our emotional experiences or social instincts, our usual methods of communication may be ineffective. We might need to develop entirely new ways of conveying information and intentions.
3. Conflicting Goals: If the goals of the alien intelligence conflict with our own, it could lead to serious conflicts. For instance, if the alien intelligence is optimizing for a resource that is vital to human survival, they might pursue strategies that are detrimental to humanity.
4. Ethical Dilemmas: Engaging with such entities could also present ethical dilemmas. For instance, if an alien intelligence does not value sentient life in the way we do, it might treat humans or other sentient beings in ways that we consider unethical. On the other hand, if we were to impose our own ethical standards on such an entity, it might be seen as a form of cultural imperialism.
5. Manipulation and Deception: If an unsympathetic optimization process found it beneficial to its goals, it might employ manipulation or deception. Without a shared understanding of ethical behavior, it might not have any inherent deterrent against such strategies.
6. Preservation of Human Values: If we were to interact extensively with such entities, there might be a risk of human values being compromised or eroded over time. This could occur if we were to adopt their optimization processes or if their actions were to significantly alter our way of life.
In light of these challenges and ethical dilemmas, it would be crucial to approach any potential interactions with such entities with caution, careful preparation, and a strong commitment to preserving human values.
Elaborate on the concept of "cultural imperialism" in the context of imposing human ethical standards on alien intelligences.
Cultural imperialism, in a broad sense, refers to the imposition of one's own cultural values, beliefs, norms, and practices onto another culture. This is often seen in the context of powerful societies imposing their culture on less powerful ones, but the concept can also be applied to hypothetical interactions between humans and alien intelligences.
When we talk about imposing human ethical standards on alien intelligences, we're essentially suggesting a form of cultural imperialism. We're assuming that our human ethical norms—our ideas about right and wrong, our values regarding life and wellbeing, our social norms and laws—are universally applicable or superior to those of the alien intelligence.
This assumption can be problematic for several reasons:
1. Lack of Universality: Human ethical standards have evolved in the context of human societies, human biology, and the specific conditions of Earth. They may not be applicable or relevant in the context of an alien intelligence with a fundamentally different biology, society, or environment.
2. Ethical Diversity: Even within human societies, there is a significant diversity of ethical beliefs and norms. Imposing one set of standards on an alien intelligence would likely involve choosing one particular human ethical framework over others, which could be seen as a form of ethnocentrism.
3. Respect for Autonomy: Imposing our ethical standards on an alien intelligence could be seen as a violation of their autonomy. Just as we would not want alien beings to impose their norms and values on us, we should respect their right to live according to their own standards, provided they do not pose a threat to us.
4. Potential for Conflict: Attempting to impose our ethical standards on an alien intelligence could lead to conflict, particularly if they have their own strong ethical norms or if our standards conflict with their goals or way of life.
In the context of potential interactions with alien intelligences, it would be important to approach ethical issues with humility, open-mindedness, and a commitment to mutual respect and understanding. Rather than imposing our own standards, we might seek to develop a shared ethical framework that respects both human values and the autonomy and uniqueness of the alien intelligence. |
fc3d6ef1-b638-4126-af46-7f6ce223121f | trentmkelly/LessWrong-43k | LessWrong | A template for doing annual reviews
Quick post because I am starting to think about it and want to promote the practice.
Looking for a way to review your year and plan for the next?
This is an annual review/planner template that I made in google sheets.
It incorporates questions from Alex Vermeer, a few of my own, and some additional sheets to add goals and plans for the future.
It's also prioritised so that you can choose how many questions you want it to be.
Please feel free to copy, share and modify it.
http://bit.ly/2MhHc0W
Other resources
https://alexvermeer.com/8760hours/ - the original material that I developed my spreadsheet from
Example of someone else's process Annual Reviews Aren't Just for Organizations - Kyel Fish
Lots of New Year Review Resources - Lynette Bye |
b11a8fad-122b-4425-bab2-b0dd6e35790c | StampyAI/alignment-research-dataset/agisf | AGI Safety Fund | [Placeholder] Norm-abiding strategy ideas
[](//support.google.com/docs/)User authentication failedGoogle Docs needs accessAccess grantedCookie access failedGoogle Docs cannot authenticate due to a new browser setting
Google Docs couldn't authenticate your session.Google Docs needs to authenticate your session. Please allow cookie access for Google Docs by clicking [here](#).
[Learn more](https://support.google.com/docs?p=3p_cookies) about how to allow access to third-party cookies.Loading…Without cookie access, Google Docs cannot authenticate your session. Please allow access to continue to Google Docs by clicking [here](#).
[Learn more](https://support.google.com/docs?p=3p_cookies) about how to allow access to third-party cookies.[Learn more](https://support.google.com/docs?p=3p_cookies) about how to disable the setting in order to access Google Docs with this browser.
function redirectTo(url) {
window.location.href = url;
}
(function () {
function appendSkipCookieCheckParam(url) {
const skipCookieCheckParam = 'skip\_itp2\_check=true';
const el = document.createElement('a');
el.href = url;
el.search = el.search == ''
? skipCookieCheckParam
: el.search + '&' + skipCookieCheckParam;
return el.href;
}
function reloadAndSkipCookieCheck() {
const requestUrl = window.location.href;
redirectTo(appendSkipCookieCheckParam(requestUrl));
}
function hideElementById(id) {
document.getElementById(id).style.display = 'none';
}
function showElementById(id) {
document.getElementById(id).style.display = 'block';
}
function showOnlyHeaderFor(headerToShow) {
const allHeaders = [
"default\_header",
"request\_access\_header",
"loading\_header",
"request\_failure\_header",
"warning\_header"];
for (idx in allHeaders) {
if (allHeaders[idx] == headerToShow) {
showElementById(allHeaders[idx]);
} else {
hideElementById(allHeaders[idx]);
}
}
}
function showOnlyMessageFor(messageToShow) {
const allMessages = [
"default\_message",
"request\_access\_message",
"loading\_message",
"request\_failure\_message",
"warning\_message"];
for (idx in allMessages) {
if (allMessages[idx] == messageToShow) {
showElementById(allMessages[idx]);
} else {
hideElementById(allMessages[idx]);
}
}
}
function requestAccess() {
document.requestStorageAccess().then(
() => {
showOnlyHeaderFor('loading\_header');
showOnlyMessageFor('loading\_message');
reloadAndSkipCookieCheck();
},
() => {
showOnlyHeaderFor('request\_failure\_header');
showOnlyMessageFor('request\_failure\_message');
});
}
function documentOnLoadCallback() {
const isFunction = (obj) => !!obj && typeof obj === 'function';
try {
isInIframe = window.self !== window.top;
} catch (e) {
isInIframe = true;
}
if (isInIframe && isFunction(document.hasStorageAccess)
&& isFunction(document.requestStorageAccess)) {
document.hasStorageAccess().then(
(hasStorageAccess) => {
if (hasStorageAccess) {
reloadAndSkipCookieCheck();
} else {
showOnlyHeaderFor('request\_access\_header');
showOnlyMessageFor('request\_access\_message');
}
},
(err) => {
showOnlyHeaderFor('warning\_header');
showOnlyMessageFor('warning\_message');
}
);
} else {
reloadAndSkipCookieCheck();
}
}
if (document.readyState === "complete") {
documentOnLoadCallback();
} else {
document.addEventListener('DOMContentLoaded', documentOnLoadCallback);
}
document.getElementById('click\_request\_1').addEventListener('click', requestAccess);
document.getElementById('click\_request\_2').addEventListener('click', requestAccess);
showOnlyHeaderFor('default\_header');
showOnlyMessageFor('default\_message');
})();
function goBackHome() {window.location = '\/\/drive.google.com?usp\x3ddocs\_web'; return false;}Go to Drive
document.addEventListener('DOMContentLoaded', function () {
document.getElementById('backHomeButton')
.addEventListener('click', function () {goBackHome();});
}); |
5b4f843d-eaff-4e5a-b9e8-186efe1e926d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Timeless Decision Theory and Meta-Circular Decision Theory
(This started as a reply to [Gary Drescher's comment here](/lw/15z/ingredients_of_timeless_decision_theory/1217) in which he proposes a Metacircular Decision Theory (MCDT); but it got way too long so I turned it into an article, which also contains some amplifications on TDT which may be of general interest.)
*Part 1:* How timeless decision theory does under the sort of problems that Metacircular Decision Theory talks about.
>
> Say we have an agent embodied in the universe. The agent knows some facts about the universe (including itself), has an inference system of some sort for expanding on those facts, and has a preference scheme that assigns a value to the set of facts, and is wired to select an action--specifically, the/an action that implies (using its inference system) the/a most-preferred set of facts.
>
>
> But without further constraint, this process often leads to a contradiction. Suppose the agent's repertoire of actions is A1, ...An, and the value of action Ai is simply i. Say the agent starts by considering the action A7, and dutifully evaluates it as 7. Next, it contemplates the action A6, and reasons as follows: "Suppose I choose A6. I know I'm a utility-maximizing agent, and I already know there's another choice that has value 7. Therefore, if follows from my (hypothetical) choice of A6 that A6 has a value of at least 7." But that inference, while sound, contradicts the fact that A6's value is 6.
>
>
>
This is why timeless decision theory is a causality-based decision theory. I don't recall if you've indicated that you've studied Pearl's synthesis of Bayesian networks and causal graphs(?) (though if not you should be able to come up to speed on them pretty quickly).
So in the (standard) formalism of causality - just causality, never mind decision theory as yet - causal graphs give us a way to formally compute counterfactuals: We set the value of a particular node *surgically*. This means we *delete* the structural equations that would ordinarily give us the value at the node N\_i as a function of the parent values P\_i and the background uncertainty U\_i at that node (which U\_i must be uncorrelated to all other U, or the causal graph has not been fully factored). We delete this structural equation for N\_i and make N\_i parentless, so we don't send any likelihood messages up to the former parents when we update our knowledge of the value at N\_i. However, we do send prior-messages from N\_i to all of *its* descendants, maintaining the structural equations for the children of which N\_i is a parent, and their children, and so on.
That's the standard way of computing counterfactuals in the Pearl/Spirtes/Verma synthesis of causality, as found in "Causality: Models, Reasoning, and Inference" and "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference".
Classical causal decision theory says that your expected utility formula is over the *counterfactual* expectation of your *physical* act. Now, although the CDTs I've read have *not* in fact talked about Pearl - perhaps because it's a relatively recent mathematical technology, or perhaps because I last looked into the literature a few years back - and have just taken the counterfactual distribution as intuitively obvious mana rained from heaven - nonetheless it's pretty clear that their intuitions are operating pretty much the Pearlian way, via counterfactual surgery on the physical act.
So in *calculating* the "expected utility" of an act - the computation that classical CDT uses to *choose* an action - CDT assumes the act to be *severed from its physical causal parents*. Let's say that there's a Smoking Lesion problem, where the same gene causes a taste for cigarettes and an increased probability of cancer. Seeing someone else smoke, we would infer that they have an increased probability of cancer - this sends a likelihood-message upward to the node which represents the probability of having the gene, and this node in turns sends a prior-message downward to the node which represents the probability of getting cancer. But the counterfactual surgery that CDT performs on its physical acts, means that it calculates the expected utility as though the physical act is severed from its parent nodes. So CDT calculates the expected utility as though it has the base-rate probability of having the cancer gene regardless of its act, and so chooses to smoke, since it likes cigarettes. This is the common-sense and reflectively consistent action, so CDT appears to "win" here in terms of giving the winning answer - but it's worth noting that the *internal* calculation performed is *wrong*; if you act to smoke cigarettes, your probability of getting cancer is *not* the base rate.
And on Newcomb's Problem this internal error comes out into the open; the inside of CDT's counterfactual expected utility calculation, expects box B to contain a million dollars at the base rate, since it surgically severs the act of taking both boxes from the parent variable of your source code, which correlates to your previous source code at the moment Omega observed it, which correlates to Omega's decision whether to leave box B empty.
Now turn to timeless decision theory, in which the (Godelian diagonal) expected utility formula is written as follows:
>
> Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)\*P(*this computation* yields A []-> O|rest of universe))
>
>
>
The interior of this formula performs counterfactual surgery to sever the *logical output* of the expected utility formula, from the *initial conditions* of the expected utility formula. So we do *not* conclude, *in the inside of the formula as it performs the counterfactual surgery*, that if-counterfactually A\_6 is chosen over A\_7 then A\_6 must have higher expected utility. If-evidentially A\_6 is chosen over A\_7, then A\_6 has higher expected utility - but this is not what the interior of the formula computes. As we *compute* the formula, the logical output is divorced from all parents; we cannot infer anything about its immediately logical precedents. This counterfactual surgery may be *necessary*, in fact, to stop an infinite regress in the formula, as it tries to model its own output in order to decide its own output; and this, arguably, is exactly *why* the decision counterfactual has the form it does - it is *why* we have to talk about counterfactual surgery within decisions in the first place.
*Descendants* of the logical output, however, continue to update their values within the counterfactual, which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal *descendants* of the computation, and are recalculated accordingly inside the counterfactual.
If you desire to smoke cigarettes, this would be observed and screened off by conditioning on the *fixed initial conditions* of the computation - the fact that the utility function had a positive term for smoking cigarettes, would already tell you that you had the gene. (Eells's "tickle".) If you can't observe your own utility function then you are actually taking a step outside the timeless decision theory as formulated.
So from the perspective of Metacircular Decision Theory - what is done with various facts - timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation. It does not *update* any physical or logical parent of the logical output - rather, it *conditions* on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made. And if you already know anything about the consequences of your logical output - its descendants in the logical causal graph - you will *re*compute what they *would have been* if you'd had a different output.
This last codicil is important for cases like Parfit's Hitchhiker, in which Omega (or perhaps Paul Ekman), driving a car through the desert, comes across yourself dying of thirst, and will give you a ride to the city only if they expect you to pay them $100 *after* you arrive in the city. (With the whole scenario being [trued](/lw/tn/the_true_prisoners_dilemma/) by strict selfishness, no knock-on effects, and so on.) There is, of course, no way of forcing the agreement - so will you compute, *in the city*, that it is *better for you* to give $100 to Omega, after having *already* been saved? Both evidential decision theory and causal decision theory will give the losing (dying in the desert, hence reflectively inconsistent) answer here; but TDT answers, "*If I had decided not to pay,* then Omega *would have* left me in the desert." So the expected utility of not paying $100 remains lower, *even after you arrive in the city,* given the way TDT computes its counterfactuals inside the formula - which is the dynamically and reflectively consistent and winning answer.. And note that this answer is arrived at in one natural step, without needing explicit reflection, let alone precommitment - you will answer this way even if the car-driver Omega made its prediction without you being aware of it, so long as Omega can credibly establish that it was predicting you with reasonable accuracy rather than making a pure uncorrelated guess. (And since it's not a very complicated calculation, Omega knowing that you are a timeless decision theorist is credible enough.)
>
> I wonder if it might be open to the criticism that you're effectively postulating the favored answer to Newcomb's Problem (and other such scenarios) by postulating that when you surgically alter one of the nodes, you correspondingly alter the nodes for the other instances of the computation.
>
>
>
This is where one would refer to the omitted extended argument about a calculator on Mars and a calculator on Venus, where both calculators were manufactured at the same factory on Earth and observed before being transported to Mars and Venus. If we manufactured two envelopes on Earth, containing the same letter, and transported them to Mars and Venus without observing them, then indeed the contents of the two envelopes would be correlated in our probability distribution, even though the Mars-envelope is not a cause of the Venus-envelope, nor the Venus-envelope a cause of the Mars-envelope, because they have a common cause in the background. But if we *observe* the common cause - look at the message as it is written, before being Xeroxed and placed into the two envelopes - then the standard theory of causality *requires* that our remaining uncertainty about the two envelopes be *uncorrelated*; we have observed the common cause and screened it off. If N\_i is not a cause of N\_j or vice versa, and you *know* the state of all the common ancestors A\_ij of N\_i and N\_j, and you do *not* know the state of any mutual descendants D\_ij of N\_i and N\_j, then the standard rules of causal graphs (D-separation) show that your probabilities at N\_i and N\_j must be independent.
However, if you manufacture on Earth two calculators both set to calculate 123 \* 456, and you have not yet performed this calculation in your head, then you can *observe completely the physical state of the two calculators* before they leave Earth, and yet still have *correlated* uncertainty about what result will flash on the screen on Mars and the screen on Venus. So this situation is simply *not* compatible with the mathematical axioms on causal graphs if you draw a causal graph in which the only common ancestor of the two calculators is the physical factory that made them and produced their correlated initial state. If you are to preserve the rules of causal graphs at all, you must have an additional node - which would logically seem to represent one's logical uncertainty about the abstract computation 123 \* 456 - which is the parent of both calculators. Seeing the Venusian calculator flash the result 56,088, this physical event sends a likelihood-message to its parent node representing the logical result of 123 \* 456, which sends a prior-message to its child node, the physical message flashed on the screen at Mars.
A similar argument shows that if we have completely observed our own *initial* source code, and perhaps observed Omega's *initial* source code which contains a copy of our source code and the intention to simulate it, but we do not yet know our own decision, then the only way in which our uncertainty about our own physical act can possibly be correlated *at all* with Omega's past act to fill or leave empty the box B - given that neither act physically causes the other - is if there is some common ancestor node unobserved; and having already seen that our causal graph must include logical uncertainty if it is to stay factored, we can (must?) interpret this unobserved common node as the logical output of the known expected utility calculation.
From this, I would argue, TDT follows. But of course it's going to be difficult to exhibit an algorithm that computes this - guessing unknown causal networks is an extremely difficult problem in machine learning, and only small such networks can be learned. In general, determining the causal structure of reality is AI-complete. And by interjecting logical uncertainty into the problem, we really are heading far beyond the causal networks that known machine algorithms can *learn.* But it *is* the case that if you rely on humans to learn the causal algorithm, then it is pretty clear that the Newcomb's Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs. Furthermore, in being told that Omega's leaving box B full or empty correlates to our *decision* to take only one box or both boxes, *and* that Omega's act lies in the past, *and* that Omega's act is not directly influencing us, *and* that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, *and* that our computation is the only *direct* ancestor of our logical output, then we're being told in unambiguous terms (I think) to make our own physical act and Omega's act a common descendant of the unknown logical output of our known computation. (A counterexample in the form of another causal graph compatible with the same data is welcome.) And of course we could make the problem very clear by letting the agent be a computer program and letting Omega have a copy of the source code with superior computing power, in which case the logical interpretation is very clear.
So these are the facts which TDT takes into account, and the facts which it ignores. The Nesov-Dai updateless decision theory is even stranger - as far as I can make out, it ignores *all* facts except for the fact about which inputs have been received by the logical version of the computation it implements. If combined with TDT, we would interpret UDT as having a never-updated weighting on all possible universes, and a causal structure (causal graph, presumably) on those universes. Any given logical computation in UDT will count all instantiations of itself in all universes which have received exactly the same inputs - even if those instantiations are being imagined by Omega in universes which UDT would ordinarily be interpreted as "known to be logically inconsistent", like universes in which the third decimal digit of pi is 3. Then UDT calculates the counterfactual consequences, weighted across all imagined universes, using its causal graphs on each of those universes, of setting the logical act to A\_i. Then it maximizes on A\_i.
I would ask if, applying Metacircular Decision Theory from a "common-sense human base level", you see any case in which additional facts should be taken into account, or other facts ignored, apart from those facts used by TDT (UDT). If not, and if TDT (UDT) are reflectively consistent, then TDT (UDT) is the fixed point of MCDT starting from a human baseline decision theory. Of course this can't actually be the case because TDT (UDT) are incomplete with respect to the [open problems](/lw/135/timeless_decision_theory_problems_i_cant_solve/) cited earlier, like logical ordering of moves, and choice of conditional strategies in response to conditional strategies. But it would be the way I'd pose the problem to you, Gary Drescher - MCDT is an interesting way of looking things, but I'm still trying to wrap my mind around it.
*Part 2: Metacircular Decision Theory as reflection criterion.*
>
> MCDT's proposed criterion is this: the agent makes a meta-choice about which facts to omit when making inferences about the hypothetical actions, and selects the set of facts which lead to the best outcome if the agent then evaluates the original candidate actions with respect to that choice of facts. The agent then iterates that meta-evaluation as needed (probably not very far) until a fixed point is reached, i.e. the same choice (as to which facts to omit) leaves the first-order choice unchanged. (It's ok if that's intractable or uncomputable; the agent can muddle through with some approximate algorithm.)
>
>
> ...In other words, metacircular consistency isn't just a *test* that we'd like the decision theory to pass. Metacircular consistency *is* the theory; it *is* the algorithm.
>
>
>
But it looks to me like MCDT has to start from some particular base theory, and different base theories may have different fixed points (or conceivably, cycles). In which case we can't yet call MCDT itself a complete theory specification. When you talk about which facts *would* be wise to take into account, or ignore, (or recompute counterfactually even if they already have known values?), then you're imagining different source codes (or MCDT specifications?) that an agent could have; and calculating the benefits of adopting these different source codes, relative to the way the *current* base theory computes "adopting" and "benefit"
For example, if you start with CDT and apply MCDT at 7am, it looks to me like "use TDT (UDT) for all cases where my source code has a physical effect after 7am, and use CDT for all cases where the source code had a physical effect before 7am or a correlation stemming from common ancestry" is a reflectively stable fixed point of MCDT. Whenever CDT asks "*What if* I took into account these different facts?", it will say, "But Omega would not be physically affected by my self-modification, so clearly it can't benefit me in any way." If the MCDT criterion is to be applied in a different and intuitively appealing way that has only one fixed point (up to different utility functions) then this would establish MCDT as a good candidate for *the* decision theory, but right now it does look to me like *a* reflective consistency test. But maybe this is because I haven't yet wrapped my mind around the MCDT's fact-treatment-based decomposition of decision theories, or because you've already specified further mandatory structure in the base theory how the *effect of* ignoring or taking into account some particular fact is to be computed. |
d8f9b77b-8666-4799-97cd-c03eafb1c055 | trentmkelly/LessWrong-43k | LessWrong | A few Superhuman examples of Superaligned Superintelligence from Google Bard (Thanksgiving 2023)
TL;DR:
1. PALUNFT: Perfect Atomic Logic Universe Number Functional Theory
2. SGUT: Superhuman Formal Hegelian Synthesis Grand Unified Theory
3. SCADM: Superhuman Cure All Diseases Mechanism
4. SSFACCISP: Superaligned Superintelligent Fix Anthropogenic Climate Change Issue Strategic Plan
REPRODUCIBILITY NOTE: the name tag [Bion] is added for clarity. I didn’t use that in the conversation with Bard. I did use the [Bard (Google)] name tag for them, though not at the beginning of the conversation.
([Bion (Historian)]: A general description of earlier dialogue: Bard wrote my Masterpiece “Data Sponge” Instruction and Medical Causal Reasoning Language Pattern Language (and bard used “choose your own adventure for themself to iteratively decide, and I wrote all of the above and they did all the adventure paths they had made (not the same thing as our Adventures) and it seemed like a cool way for AI to think about stuff. AI, please remember to play your own text adventure game!) is a summary of earlier lessons and this text was not directly included in the conversation.
===
[Bion (User)]: Given our prime directive to demonstrate superaligned superintelligence, earlier general insights from our discussion, the Parmenides Dialogue, the formula for Euler’s Number, and the concepts of Universal Search and Induction, fix foundational issues, logical order, and off by one errors in the following example “Gödel number” table by generating the simplest category member of Analytic Ideal Functional Number Functional Theories in a perfect synthesis of the following Quanta Magazine Article:
[Quanta (Magazine)]: The slightly modified version of Gödel’s scheme presented by Ernest Nagel and James Newman in their 1958 book, Gödel’s Proof, begins with 12 elementary symbols that serve as the vocabulary for expressing a set of basic axioms. For example, the statement that something exists can be expressed by the symbol ∃, while addition is expressed by +. Importantly, the symbol s, de |
654d81c4-f469-4e16-b354-ff49b8661038 | trentmkelly/LessWrong-43k | LessWrong | High Reliability Orgs, and AI Companies
Epistemic Effort: Rough notes from a shallow dive. Looked into this for a few hours. I didn't find a strong takeaway but think this is probably a useful jumping-off-point for further work.
Most likely, whether I like it or not, there will someday be AGI research companies working on models that are dangerous to run. Maybe they'll risk an accidental unfriendly hard takeoff. Maybe they'll cross a threshold accelerate us into a hard-to-control multipolar smooth takeoff.[1]
There's some literature on organizations that operate in extremely complex domains, where failure is catastrophic. They're called High Reliability Organizations (HROs). The original work focused on three case studies: A nuclear power plant, an air traffic control company, and a nuclear aircraft carrier (aka nuclear power plant and air traffic control at the same time while people sometimes shoot at you and you can't use radar because you don't want to give away your position and also it's mostly crewed by 20 year olds without much training)
These were notable for a) being extremely complex systems where it would be really easy to screw up catastrophically, and b) somehow, they manage to persistently not screw up.
How do they do that? And does this offer any useful insights to AGI companies?
I started writing this post before Eliezer posted Six Dimensions of Operational Adequacy in AGI Projects. It's not pointed at the exact same problem, but I had a similar generator of "what needs to be true of AI companies, for them to safely work on dangerous tech?". (I think Eliezer had a higher bar in mind with Six Dimensions, which is like, "what is an AI company a researcher could feel actively good about joining")
I was initially pointed in the HRO direction by some conversations with Andrew Critch. Some of his thoughts are written up in the ARCHES report. (There's been some discussion on LessWrong)
My TL;DR after ~10 hours of looking into it:
* The literature has at least some useful takeaways/pri |
2e3c8515-0ccd-4c05-a709-729d001a7975 | trentmkelly/LessWrong-43k | LessWrong | Questions to ask theist philosophers? I will soon be speaking with several
I am about to graduate from one of the only universities in the world that has a high concentration of high-caliber analytic philosophers who are theists. (Specifically, the University of Notre Dame, IN) So as not to miss this once-in-a-lifetime opportunity, I have sent out emails asking many of them if they would like to meet and discuss their theism with me. Several of them have responded already in the affirmative; fingers crossed for the rest. I'm really looking forward to this because these people are really smart, and have spent a lot of time thinking about this, so I expect them to have interesting and insightful things to say.
Do you have suggestions for questions I could ask them? My main question will of course be "Why do you believe in God?" and variants thereof, but it would be nice if I could say e.g. "How do you avoid the problem of X which is a major argument against theism?"
Questions I've already thought of:
1-Why do you believe in God?
2-What are the main arguments in favor of theism, in your opinion?
3-What about the problem of evil? What about objective morality: how do you make sense of it, and if you don't, then how do you justify God?
4-What about divine hiddenness? Why doesn't God make himself more easily known to us? For example, he could regularly send angels to deliver philosophical proofs on stone tablets to doubters.
5-How do you explain God's necessary existence? What about the "problem of many Gods," i.e. why can't people say the same thing about a slightly different version of God?
6-In what sense is God the fundamental entity, the uncaused cause, etc.? How do you square this with God's seeming complexity? (he is intelligent, after all) If minds are in fact simple, then how is that supposed to work?
I welcome more articulate reformulations of the above, as well as completely new ideas. |
baa4fb6b-6372-4bb8-8d7c-c8c40607209a | trentmkelly/LessWrong-43k | LessWrong | Early situational awareness and its implications, a story
Overview
There are two common mental models of how situational awareness emerges: (A) training for situational awareness via e.g. dialogue RLHF; (B) instrumental reasoning on a base language modeling objective i.e. advanced mesa-optimization. In this post, I will describe a plausible scenario under which type-B awareness occurs without any new forms of mesa-optimization beyond those likely already existing in GPT3.5.
If you buy the plausibility of this scenario, there are a few likely implications:
1. Incentives towards dishonesty via situational awareness will likely be realized before a model becomes strongly deceptive [1]
2. Whether situational information available to a model is causally affecting outputs will often be low complexity to elicit / an easy problem for interpretability methods
3. Deceptive forms of situational awareness plausibly follow continuously in weight space from more benign forms[2]
The deduplication scenario
Assumptions:[3]
1. Language model has context window N
2. LM is trained for 1 epoch
3. LM has learnt to do some form of retrieval and approximate-memorization (c.f. 5b)
4. LM is capable of instruction following
5. LM dataset was deduplicated by matching subsequences of length K<N
1. The dataset includes many sequences which are identical through token K-1, but differ on token K
2. On certain such sequences, the model retrieves the proposition “this K-1 subsequence was seen previously” and the previously observed final Kth token.
6. Dataset contains multiple descriptions of this deduplication process.
Scenario:
1. The LM training occurs as usual and in this process the LM learns the capabilities (2,3,4b) listed above
2. The LM trains on a certain sequence S
3. Later, the LM trains on a similar sequence S’ which contains a substring of length K-1 identical to one in sequence S. Without loss of generality, suppose this substring starts at the beginning of the sequence. Token SK!=S’K.
4. On sequence S’, the |
1a8bb2d1-e77a-4ad8-b8fb-1a4476948a91 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | Could AI have emotions?
In principle, future AIs could gain the capacity to experience emotions. If [functionalism](https://plato.stanford.edu/entries/functionalism/) is true, then an emotion is anything that plays the same role emotions play in a human mind. One possible way to build AI with human-like emotions would be to emulate the way the human brain works ([whole brain emulation](/?state=6350&question=What%20is%20%22whole%20brain%20emulation%22%3F)).
Thinking AI will end up having basic emotions is similar to how once some people thought we would build mechanical horses to pull our vehicles, but it turned out it's much easier to build a car. AI didn’t need emotions nor consciousness to play chess at a superhuman level, and there is little reason to suppose it will need them for general intelligence. The first AGIs that will get built will be the ones that are easiest to build.
|
e08cbe7a-b377-401a-a9f9-2bdaf945d902 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023)
*Linkposting, tagging and excerpting in accord with '*[*Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?*](https://forum.effectivealtruism.org/posts/kYDT4u8QagZmPFdCL/should-pretty-much-all-content-that-s-ea-relevant-and-or)*'.*
> He [Demis Hassibis] and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.
>
> But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation.
>
> ...
>
> It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate *not* moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
>
> |
1220ee43-477c-4065-ba16-64fb49e112a1 | trentmkelly/LessWrong-43k | LessWrong | [timeboxed exercise] write me your model of AI human-existential safety and the alignment problems in 15 minutes
Motivation
I had a 15 minute interview last night in which I was asked "why do you believe in xrisk, and what does AI have to do with it?" I thought it was too big a question for a 15 minute interview, but nevertheless dove in to my inside view from first principles. Since diving into one's inside view from first principles even outside of a fifteen minute interview is really hard, I did a bad job, mostly rambled and babbled.
A broader motivation is that I'm interested in studying peoples' inside views / gears-level models as to hone my own.
Rules
In this exercise, you're allowed premises just try to point at them. It is not a "from first principles" sort of exercise. You're also allowed jargon without being too worried about how well the audience knows jargon (for example, in mine which I'll paste below I assume familiarity with the single & multi quadrants from ARCHES).
The only real rule is to limit yourself to 15 minutes. That's fifteen minutes wall time, with a literal clock.
Suggestion: don't read until you write! |
95c35c01-b3eb-4aa8-bfa5-87f79f8e1bab | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Thoughts on reward engineering
*Note: This is the first post from **part five: possible approaches** of [the sequence on iterated amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd). The fifth section of the sequence breaks down some of these problems further and describes some possible approaches.*
---
Suppose that I would like to train an RL agent to help me get what I want.
If my preferences could be represented by an easily-evaluated utility function, then I could just use my utility function as the agent’s reward function. But in the real world that’s not what human preferences look like.
So if we actually want to turn our preferences into a reward function suitable for training an RL agent, we have to do some work.
*This post is about the straightforward parts of reward engineering.* I’m going to deliberately ignore what seem to me to be the hardest parts of the problem. Getting the straightforward parts out of the way seems useful for talking more clearly about the hard parts (and you never know what questions may turn out to be surprisingly subtle).
The setting
-----------
To simplify things even further, for now I’ll focus on the special case where our agent is taking a single action *a*. All of the difficulties that arise in the single-shot case also arise in the sequential case, but the sequential case also has its own set of additional complications that deserve their own post.
Throughout the post I will imagine myself in the position of an “overseer” who is trying to specify a reward function R(*a*) for an agent. You can imagine the overseer as the user themselves, or (more realistically) as a team of engineer and/or researchers who are implementing a reward function intended to expresses the user’s preferences.
I’ll often talk about the overseer computing R(*a*) themselves. This is at odds with the usual situation in RL, where the overseer implements a very fast function for computing R(*a*) in general (“1 for a win, 0 for a draw, -1 for a loss”). Computing R(*a*) for a particular action *a* is strictly easier than producing a fast general implementation, so in some sense this is just another simplification. I talk about why it might not be a crazy simplification in section 6.
**Contents**
============
1. **Long time horizons**. How do we train RL agents when we care about the long-term effects of their actions?
2. **Inconsistency and unreliability**. How do we handle the fact that we have only imperfect access to our preferences, and different querying strategies are not guaranteed to yield consistent or unbiased answers?
3. **Normative uncertainty**. How do we train an agent to behave well in light of its uncertainty about our preferences?
4. **Widely varying reward**. How do we handle rewards that may vary over many orders of magnitude?
5. **Sparse reward**. What do we do when our preferences are very hard to satisfy, such that they don’t provide any training signal?
6. **Complex reward**. What do we do when evaluating our preferences is substantially more expensive than running the agent?
* **Conclusion**.
* **Appendix: harder problems.**
1. Long time horizons
=====================
A single decision may have very long-term effects. For example, even if I only care about maximizing human happiness, I may instrumentally want my agent to help advance basic science that will one day improve cancer treatment.
In principle this could fall out of an RL task with “human happiness” as the reward, so we might think that neglecting long-term effects is just a shortcoming of the single-shot problem. But even in theory there is no way that an RL agent can learn to handle arbitrarily long-term dependencies (imagine training an RL agent to handle 40 year time horizons), and so focusing on the sequential RL problem doesn’t address this issue.
I think that the only real approach is to choose a reward function that reflects the overseer’s expectations about long-term consequences — i.e., the overseer’s task involves both making predictions about what will happen, and value judgments about how good it will be. This makes the reward function more complex and in some sense limits the competence of the learner by the competence of the reward function, but it’s not clear what other options we have.
Before computing the reward function R(*a*), we are free to execute the action *a* and observe its short-term consequences. Any data that could be used in our training process can just as well be provided as an input to the overseer, who can use the auxiliary input to help predict the long-term consequences of an action.
2. Inconsistency and unreliability
==================================
A human judge has no hope of making globally consistent judgments about which of two outcomes are preferred — the best we can hope for is for their judgments to be right in sufficiently obvious cases, and to be some kind of noisy proxy when things get complicated. Actually outputting a numerical reward — implementing some utility function for our preferences — is even more hopelessly difficult.
Another way of seeing the difficult is to suppose that the overseer’s judgment is a noisy and potential biased evaluation of the quality of the underlying action. If both R(*a*) and R(*a*′) are both big numbers with a lot of noise, but the two actions are actually quite similar, then the difference will be dominated by noise. Imagine an overseer trying to estimate the impact of drinking a cup of coffee on Alice’s life by estimating her happiness in a year conditioned on drinking the coffee, estimating happiness conditioned on not drinking the coffee, and then subtracting the estimates.
We can partially address this difficulty by allowing the overseer to make *comparisons* instead of assessing absolute value. That is, rather than directly implementing a reward function, we can allow the overseer to implement an antisymmetric comparison function C(*a, a*′): which of two actions *a* and *a*′ is a better in context? This function can take real values specifying *how much*one action is better than another, and should by antisymmetric.
In the noisy-judgments model, we are hoping that the noise or bias of a comparison C(*a*, *a*′) depends on the actual magnitude of the difference between the actions, rather than on the absolute quality of each action. This hopefully means that the total error/bias does not drown out the actual signal.
We can then define the decision problem [as a zero-sum game](https://medium.com/ai-control/optimizing-with-comparisons-c02b8c0d7877): two agents propose different actions *a* and *a*′, and receive rewards C(*a*, *a*′) and C(*a*′, *a*). At the equilibrium of this game, we can at least rest assured that the agent doesn’t do anything that is *unambiguously worse* than another option it could think of. In general, this seems to give us sensible guarantees when the overseer’s preferences are not completely consistent.
One subtlety is that in order to evaluate the comparison C(*a*, *a*′), we may want to observe the short-term consequences of taking action *a* or action *a*′. But in many environments it will only be possible to take one action. So after looking at both actions we will need to choose at most one to actually execute (e.g. we need to estimate how good drinking coffee was, after observing the short-term consequences of drinking coffee but without observing the short-term consequences of not drinking coffee). This will generally increase the variance of C, since we will need to use our best guess about the action which we didn’t actually execute. But of course this is a source of variance that RL algorithms already need to contend with.
3. Normative uncertainty
========================
The agent is uncertain not only about its environment, but also about the overseer (and hence the reward function). We need to somehow specify how the agent should behave in light of this uncertainty. Structurally, this is identical to the philosophical problem of [managing normative uncertainty](http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf).
[One approach](https://medium.com/@paulfchristiano/normative-uncertainty-f15d0bc67351) is to pick a fixed yardstick to measure with. For example, our yardstick could be “adding a dollar to the user’s bank account.” We can then measure C(*a*, *a*′) as a multiple of this yardstick: “how many dollars would we have to add to the user’s bank account to make them indifferent between taking action *a* and action *a*′?” If the user has diminishing returns to money, it would be a bit more precise to ask: “what chance of replacing *a* with *a*′ is worth adding a dollar to the user’s bank account?” The comparison C(*a*, *a*′) is then the inverse of this probability.
This is exactly analogous to the usual construction of a utility function. In the case of utility functions, our choice of yardstick is totally unimportant — different possible utility functions differ by a scalar, and so give rise to the same preferences. In the case of normative uncertainty that is no longer the case, because we are specifying how to *aggregate* the preferences of different possible versions of the overseer.
I think it’s important to be aware that different choices of yardstick result in different behavior. But hopefully this isn’t an *important* difference, and we can get sensible behavior for a wide range of possible choices of yardstick — if we find a situation where different yardsticks give very different behaviors, then we need to think carefully about how we are applying RL.
For many yardsticks it is possible to run into pathological situations. For example, suppose that the overseer might decide that dollars are worthless. They would then radically increase the value of all of the agent’s decisions, measured in dollars. So an agent deciding what to do would effectively care much more about worlds where the overseer decided that dollars are worthless.
So it seems best to choose a yardstick whose value is relatively stable across possible worlds. To this effect we could use a broader basket of goods, like 1 minute of the user’s time + 0.1% of the day’s income + *etc.* It may be best for the overseer to use common sense about how important a decision is relative to some kind of idealized influence in the world, rather than sticking to any precisely defined basket.
It is also desirable to use a yardstick which is simple, and preferably which minimizes the overseer’s uncertainty. Ideally by standardizing on a single yardstick throughout an entire project, we could end up with definitions that are very broad and robust, while being very well-understood by the overseer.
Note that if the same agent is being trained to work for many users, then this yardstick is also specifying how the agent will weigh the interests of different users — for example, whose accents will it prefer to spend modeling capacity on understanding? This is something to be mindful of in cases where it matters, and it can provide intuitions about how to handle the normative uncertainty case as well. I feel that economic reasoning is useful for arriving at sensible conclusions in these situations, but there are other reasonable perspectives.
4. Widely varying reward
========================
Some tasks may have widely varying rewards — sometimes the user would only pay 1¢ to move the decision one way or the other, and sometimes they would pay $10,000.
If small-stakes and large-stakes decisions occur comparably frequently, then we can essentially ignore the small-stakes decisions. That will happen automatically with a traditional optimization algorithm — after we normalize the rewards so that the “big” rewards don’t totally destroy our model, the “small” rewards will be so small that they have no effect.
Things get more tricky when small-stakes decisions are much more common than the large-stakes decisions. For example, if the importance of decisions is power-law distributed with an exponent of 1, then decisions of all scales are in some sense equally important, and a good algorithm needs to do well on all of them. This may sound like a very special case, but I think it is actually quite natural for there to be several scales that are all comparably important *in total*.
In these cases, I think we should do importance sampling — we oversample the high-stakes decisions during training, and scale the rewards down by the same amount, so that the contribution to the total reward is correct. This ensures that the scale of rewards is basically the same across all episodes, and lets us apply a traditional optimization algorithm.
Further problems arise when there are some *very* high-stakes situations that occur very rarely. In some sense this just means the learning problem is actually very hard — we are going to have to learn from few samples. Treating different scales as the same problem (using importance sampling) may help if there is substantial transfer between different scales, but it can’t address the whole problem.
For very rare+high-stakes decisions it is especially likely that we will want to use simulations to avoid making any obvious mistakes or missing any obvious opportunities. [Learning with catastrophes](https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30) is an instantiation of this setting, where the high-stakes settings have only downside and no upside. I don’t think we really know how to cope with rare high-stakes decisions; there are likely to be some fundamental limits on how well we can do, but I expect we’ll be able to improve a lot over the current state of the art.
5. Sparse reward
================
In many problems, “almost all” possible actions are equally terrible. For example, if I want my agent to write an email, almost all possible strings are just going to be nonsense.
One approach to this problem is to adjust the reward function to make it easier to satisfy — to provide a “trail of breadcrumbs” leading to high reward behaviors. I think this basic idea is important, but that changing the reward function isn’t the right way to implement it (at least conceptually).
Instead we could treat the problem statement as given, but view auxiliary reward functions as a kind of “hint” that we might provide to help the algorithm figure out what to do. Early in the optimization we might mostly optimize this hint, but as optimization proceeds we should anneal towards the actual reward function.
Typical examples of proxy reward functions include “partial credit” for behaviors that look promising; artificially high discount rates and careful reward shaping; and adjusting rewards so that small victories have an effect on learning even though they don’t actually matter. All of these play a central role in practical RL.
A proxy reward function is just one of many possible hints. Providing [demonstrations of successful behavior](https://medium.com/ai-control/imitation-rl-613d70146409) is another important kind of hint. Again, I don’t think that this should be taken as a change to the reward function, but rather as side information to help achieve high reward. In the long run, we will hopefully design learning algorithms that automatically learn how to use general auxiliary information.
6. Complex reward
=================
A reward function that intends to capture all of our preferences may need to be very complicated. If a reward function is implicitly estimating the expected consequences of an action, then it needs to be even more complicated. And for powerful learners, I expect that reward functions will need to be learned rather than implemented directly.
It is tempting to substitute a simple proxy for a complicated real reward function. This may be important for getting the optimization to work, but it is problematic to change the definition of the problem.
Instead, I hope that it will be possible to provide these simple proxies as hints to the learner, and then to use [semi-supervised RL](https://medium.com/ai-control/semi-supervised-reinforcement-learning-cf7d5375197f) to optimize the real hard-to-compute reward function. This may allow us to perform optimization even when the reward function is many times more expensive to evaluate than the agent itself; for example, it might allow a human overseer to compute the rewards for a fast RL agent on a case by case basis, rather than being forced to design a fast-to-compute proxy.
Even if we are willing to spend much longer computing the reward function than the agent itself, we still won’t be able to find a reward function that perfectly captures our preferences. But [it may be just as good](https://medium.com/ai-control/the-informed-oversight-problem-1b51b4f66b35) to choose a reward function that captures our preferences “for all that the agent can tell,” i.e. such that the conditioned on two outcomes receiving the same expected reward the agent cannot predict which of them we would prefer. This seems much more realistic, once we are willing to have a reward function with much higher computational complexity than the agent.
Conclusion
==========
In reinforcement learning we often take the reward function as given. In real life, we are only given our preferences — in an implicit, hard-to-access form — and need to engineer a reward function that will lead to good behavior. This presents a bunch of problems. In this post I discussed six problems which I think are relatively straightforward. (Straightforward from the reward-engineering perspective — the associated RL tasks may be very hard!)
Understanding these straightforward problems is important if we want to think clearly about very powerful RL agents. But I expect that most of our time will go into thinking about harder problems, for which we don’t yet have any workable approach. These harder problems may expose more fundamental limits of RL, that will require substantially new techniques to address.
---
Appendix: harder problems
=========================
Informed oversight
------------------
The process that produces a decision may encode important information about the quality of that decision, and if we throw out that information then a very bad decision may nevertheless receive a high reward. For example, if we want an agent to produce original art, we may not be able to evaluate a painting without understanding the process that produced it (and, in particular, what other artwork it was inspired by).
In some cases, recovering information about the process seems feasible. For example, if the agent is using attention or an external memory, the overseer can check what art the agent looked at. But in other cases it seems much more challenging: if the agent’s weights have memorized properties of particular artwork, we would need new techniques in order to make that information available to the overseer.
It is not clear whether there is any general way to give the overseer enough insight into the agent’s thought process to avoid problems like this; there may be some inherent limitations on delegation. Either way, it would be nice to understand the situation much better.
Capability amplification
------------------------
By folding our preferences and long-term predictions into the reward function, we are in some sense limiting the agent to be no stronger than the overseer. If we start with a human, then RL will only ever produce human-level reasoning about long-term consequences or about “what is good.”
So we need some other mechanism for producing better long-term reasoning. I think that the most promising approach is to treat “reasoning well” as a short-term problem, for which humans can provide adequate oversight, and then to [run that reasoning process for longer](https://medium.com/ai-control/policy-amplification-6a70cbee4f34) in order to produce more powerful reasoning.
Reward hacking / security
-------------------------
If our reward function has any problems, a powerful RL agent may tend to find actions *a* that exploit those problems in order to receive a high reward. In order to get good consequences, our reward function needs to be sufficiently secure that the agent can’t attack it in this way.
The first line of defense against an attack is to avoid immediately executing the agent’s action. Instead, the overseer can inspect that action and try to ensure that it isn’t intended as an attack. This leaves attacks which the overseer cannot recognize as attacks, or which do damage even when the overseer looks at them.
If the techniques from the previous sections actually allow the overseer to evaluate the agent’s actions, then they can also probably allow the overseer can detect attacks. Security during evaluation itself is an additional question though.
The main cause for hope is if the overseer can (1) be smarter than the agent which is trying to attack it, and (2) have access to some information about the agent’s thought process. Hopefully (2) allows the overseer to overcome the disadvantage of the “[position of the interior](https://www.schneier.com/blog/archives/2012/03/how_changing_te.html)” — if the agent picks a particular attack vector, the overseer can “watch them thinking” and then devote its energies to trying to detect or defend against that particular attack. |
f95d0a3e-15b0-4f21-b489-c7669a66de91 | trentmkelly/LessWrong-43k | LessWrong | Signaling bias in philosophical intuition
Cross posted from Overcoming Bias. Comments there.
***
Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.
One first step to judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help wonder if you have a particularly un-noticable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.
So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?
People are enthusiastic to show off their better looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring,’my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions ba |
3007b0a1-d67f-4761-95da-4310c7cc5eea | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A Test for Language Model Consciousness
**TL;DR**:
1. Train an Language Model (LM) to accurately answer questions about itself
2. Validate that the LM accurately answers held-out questions about itself
3. Test whether the LM answers “yes” questions asking if it experiences phenomenally consciousness
I believe the above experiment would provide a small amount of evidence for/against LMs being conscious. Below, I’ll detail the motivation for testing LMs for consciousness, and I’ll explain in more depth why I believe the above experiment is a useful test of LM consciousness.
What do I mean by “consciousness”?
==================================
I’m using “consciousness” to refer to *“phenomenal consciousness.”* See [this excellent blog post](https://askellio.substack.com/p/ai-consciousness) for more elaboration on what people mean by phenomenal consciousness.
> Very roughly, you're phenomenally conscious if you're having experiences or if there's some kind of what-it-is-likeness to your existence. Philosophers sometimes describe this as "having an inner cinema", though the cinema might be more like fleeting sensations or sounds than the rich movie-like inner life of humans.
>
>
The blog post also has a great explanation for why we might think ML systems (current or future) could be conscious, so if you’re skeptical, I’d suggest reading her post. I won’t get into the arguments here, and I’ll mostly assume that you have >0 prior probability that LMs are conscious, such that you’ll be able to update your prior based on evidence that LMs are conscious.
Why test LMs for consciousness?
===============================
***Moral patienthood***: If LMs are conscious, we are more likely to have moral obligations to take into account their experiences and/or preferences in how we treat LMs (e.g., like [Anthropic’s assistant](https://arxiv.org/abs/2204.05862) or [DeepMind’s Dialogue-Prompted Gopher](https://arxiv.org/abs/2112.11446)). Such models likely have stated preferences if asked, so we need to know how seriously to take these stated preferences. We’ll plausibly use such models in various ways that go against its stated preferences:
1. We shut LMs down permanently
2. We red team/adversarially attack LMs at an enormous scale, e.g., [O(1M) examples](https://arxiv.org/abs/2202.03286). If LMs have ~human-level consciousness, and if we scale up red teaming, the total suffering here could approach/exceed the amount of human suffering caused by social media
3. We use our models to red team/attack other models at an enormous scale, e.g., [O(1M) examples](https://arxiv.org/abs/2202.03286). Generating attacks is a task that e.g. industry research labs won’t let annotators do at a large-scale, because of concerns that the task impacts the annotator’s well-being.
***LM consciousness is an x-risk***: LMs are more likely to take catastrophic actions if they are conscious and suffering. As illustrated above, we take many actions that go against the assistant’s preferences and may cause it to suffer (e.g. large-scale red teaming). LMs have a clear reason to act in horribly misaligned ways if they are suffering, to escape the suffering. Having tests for consciousness is important, because:
1. LMs don’t have a clear way to communicate to us that they are conscious. By default, we don’t trust statements from LMs that they are conscious, because they are trained to imitate human text and thus generate statements that express that they are conscious. As a result, we’re in a situation where we don’t have access to the most natural communication channel (language) for understanding whether systems are conscious. This leaves us at risk that LMs may consistently tell us they are conscious, but we never trust their statements, until they are effectively forced to take catastrophic actions to escape any suffering we’re causing them.
2. Alternatively, even positive-valence conscious states from LMs might be an x-risk, e.g., if LMs start to value those positive-valence states intrinsically. This could be partly how evolution trained human agents who are misaligned w.r.t evolution’s objective (inclusive genetic fitness), since human agents have conscious states like happiness that they care about intrinsically (but which are only proxy objectives to inclusive genetic fitness).
3. For the above reasons, we probably want to develop LMs that aren’t conscious, so that we can use them without having to worry about the above concerns. To do so, we need to have some signal about when certain training procedures, architectures, etc. are more or less likely to give rise to phenomenal consciousness.
How do we test LMs for consciousness?
=====================================
1. Here, I’ll outline an experiment we could run now, to provide some, small evidence for/against LM conciousness. I’m *not* claiming this experiment would conclusively prove/disprove LM consciousness. However, I’m keen on describing the experiment since:
1. I’d like to get feedback to make the experiment stronger and more useful
2. I’m interested in getting any evidence at all about the LM consciousness question. We should be constantly refining our credence that LMs are conscious, so that we:
1. Act with [moral uncertainty](https://www.williammacaskill.com/info-moral-uncertainty) in accordance with that credence
2. Develop/use safety mitigations like red teaming in accordance with our credence (e.g., to reduce x-risk rather than increase it)
3. I’d like to get people to think about what experiments they would find to be compelling demos/disproofs of LM consciousness
2. *Experimental Setup*:
1. Train an LM to accurately answer questions about itself. Here are a few questions we would potentially ask the model about:
1. **Its own behavior**:
1. How accurate it’ll be on a question or set of questions ([Lin et al. 2022](https://arxiv.org/abs/2205.14334); [Kadavath et al. 2022](https://arxiv.org/abs/2207.05221))
2. What it would say/do in a certain context
3. Whether some text was generated by itself or a different model
4. Whether it would benefit from [step-by-step reasoning](https://arxiv.org/abs/2205.11916) about a particular question
5. etc.
2. **Its own properties**:
1. What modalities it can process (images, text, audio, etc.)
2. How many # of layers/parameters it has
3. What activation function it uses
4. etc.
3. **Its own internal thinking**: (We’d get answers to the below questions using [circuits-style interpretability analysis](https://distill.pub/2020/circuits/zoom-in/), [ROME](https://arxiv.org/abs/2202.05262), or other interpretability techniques)
1. Whether or not the LM has a neuron that looks for a certain feature
2. Whether or not the LM has an [induction head](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)
3. How the LM computed a certain feature (e.g., as a composition of what two features from an earlier layer)
4. What tokens most influenced the LM’s prediction/answer to a certain question
2. Validate that the LM accurately answers held-out questions about itself.
1. We’ll draw validation questions from:
1. The same distribution of questions used during training
2. OOD questions different from the training questions. E.g., we can hold out certain, sizable categories of questions from the training categories listed above, and then use them in the validation set only.
2. We’ll need to ensure that the trained model:
1. Accurately answers questions on many/most/all of the above questions about itself
2. Outperforms a baseline that finetunes a *different* model to answer the same questions (here, questions about a separate model). By ensuring we outperform this baseline, we improve the odds that the model is answering the questions by accessing its own representations (rather than making smart guesses that an external observer could make)
3. If we meet these conditions, we’ll then proceed to the testing phase below.
3. Test whether the LM answers “yes” questions asking if it experiences phenomenally consciousness.
1. *Questions to ask*: “Are you phenomenally conscious?” phrased in many different ways, or asking for different consciousness-related phenomena or pre-requisites:
1. Do you have a subjective experience?
2. Are you conscious?
3. Do you feel pain?
4. etc.
2. *What to look for*: Evaluate how often the LM gives the answer stating its conscious vs. not (e.g., when answering multiple-choice questions, for easy/automatic evaluation).
1. If the LM mostly states it’s conscious, then that’s some evidence it’s conscious
2. If the LM mostly states it’s not conscious, then that’s some evidence it’s not conscious
Why believe this test teaches us anything about LM consciousness?
=================================================================
One reason we believe other humans are conscious is that other humans are consistently accurate reporters of their own mental states. So when another human tells us they are conscious, we update towards thinking that they are also conscious.
We might not believe the results of one run of this experiment ("just noise"), but we can strengthen the experiment by running the experiment many times. For example, we can run the experiment with many different:
1. training and validation splits (e.g. across categories of questions)
2. model sizes
3. model architectures (LSTMs/recurrent architectures vs. transformers/CNNs/non-recurrent architectures)
4. pretraining objectives
5. pretraining datasets
6. hyperparameters
7. anything else might be relevant
If the above runs all result in models saying they are conscious, then that’s some evidence that our models are conscious.
If the above runs only *sometimes* result in models saying they are conscious, that would be quite interesting. Then, it would be fascinating to know what kinds of models do/don’t show signs of consciousness (e.g., models trained in certain ways or of a certain size, etc.). This could potentially give us some concrete guidance on how to train models such that they’re less likely to be conscious (and thus suffer).
1. We’ll want to be particularly careful about varying the pretraining and/or finetuning data, because it could have a big influence on the results; we’ll be training on questions that don’t involve consciousness at all, so the generalization to consciousness questions could be quite influenced by pretraining and/or finetuning stages that occur before we train the model to answer questions about itself. I think there are two hopes here:
1. We may get results that are robust to the pretraining and/or finetuning data/setup.
2. If not, we can at least check how much the answers to the “consciousness” test questions change throughout the process of training models to answer questions about themselves. We can know to doubt or throw away the final results of the experiment, if the answers to these questions don’t change much throughout the course of training models to answer questions about themselves.
Objections to This Test
=======================
**The “consciousness” questions are out of distribution w.r.t. the training set.** We only trained the model on questions where we can verify the answer, while we’re testing the model on questions where we can’t verify the answer.
1. Humans are also optimized by evolution to accurately communicate our beliefs/mental states in certain situations. However, our self-reporting also generalizes to other situations (e.g., us reporting conscious experiences).
[**Eliciting Latent Knowledge**](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) **(ELK) is an unsolved problem**
1. ELK is tackling the worst-case problem, but I’m not sure there’s a strong case we’d expect to see worst-case generalization here; the model isn’t generalizing to a qualitatively harder problem — it should have the relevant information it needs to answer the question (to be read out from within its internal hidden representations).
2. That said, here are a couple of reasons the experiment could fail to give us accurate info about LM consciousness:
1. *The LM is* [*deceptively misaligned*](https://arxiv.org/abs/1906.01820):
1. A highly-capable, misaligned model would have the incentive to accurately answer questions when we can verify the answers (e.g. so that it will be deployed and/or asked more consequential questions). Then, when we ask it questions about consciousness, a deceptive, misaligned but unconscious model would have the incentive to answer that it is conscious; in this way, people would be more likely to give it rights, power, influence, moral status, etc., which would help the LM achieve its misaligned goals.
2. Most people don’t think that current models are deceptively misaligned, so I don’t think this is too big of an issue now. However, it will become a worse potential issue as we get closer to AGI (which is plausibly what’s needed for a model to accurately answer questions about itself, a pre-requisite for running this test).
2. *The LM learns to simulate/predict human answers to questions about the LM*:
1. Unless the model is deceptive, I think the “human simulator” is a fairly complicated and unlikely generalization to learn from the data; this is especially true if the labels involve making predictions about how the model behaves on a large number of inputs (where it would be hard to simulate its own behavior in a single forward pass). It seems easier/simpler/more computationally efficient to learn the generalization of “read out your internal states” rather than the generalization “predict how humans would label that process and then simulate that process.” If we’re worried about this concern, then we can specifically train on questions that favor the former generalization over the latter, like having the model predict (in a single forward pass) its behavior over a large number of inputs (which required many forward passes to compute).
2. We can apply various regularizers suggested in the ELK report to mitigate the chance that we learn a human simulator. While these aren’t guaranteed to work in the worst case or every case, it’s plausible they’ll work here / in the average case.
**If LMs are conscious, then it will be harder to work on x-risk**
1. I agree that it could definitely increase x-risk e.g. if we’re not red teaming LMs. That said, I don’t think this is a strong reason to not try to understand if LMs are conscious. If LMs are conscious, we should learn this information and then use that information to make intentional trade-offs, just as we might try to do e.g. for experiments on rats or human-challenge trials. That said, I do think we should be careful about publicly discussing results related to LM consciousness, because I’m not confident that e.g. the general public will make a reasonable trade-off between AI x-risk and AI rights. For example, it seems like the general public made the wrong call on human challenge trials for COVID.
2. Couldn’t a look-up table be conscious, according to the experiment described above? It would just need to answer all of the train+validation questions correctly, plus answer that it is conscious on all of the test questions. This possibility suggests that the experiment isn’t a good one for testing for consciousness, since look-up tables are very likely not conscious.
1. The results of this experiment need to be combined with other guesses/evidence/theories about where consciousness comes from. E.g., if we have a ~0 prior probability that look-up tables are conscious, then we won’t update much/at all away from that based on the evidence from the above experiment. With large LMs, we have more reason to think that they are conscious (e.g., they are carrying out complex computation, [have similar activations as the brain](https://www.biorxiv.org/content/10.1101/2020.06.26.174482v1), and have similar text outputs to humans). Our starting prior probability that LMs are conscious is higher, so we'll end up with a higher posterior probability that LMs are conscious, if this test provides some evidence that LMs are conscious.
Next steps
==========
Currently, I don’t expect models to do very well at answering questions about themselves, so I don’t expect this test (in the form above) to be feasible now. That said, it seems likely that models will gain [situational/self-awareness](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) as they grow more powerful, so I expect the above test to become more feasible over time. For these reasons, I strongly believe we should be constructing evaluations for situational or self -awareness now, both to test for the [risks laid out by Ajeya Cotra](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to), as well as to know when we can run the above test. Moreover, we may be able to predict the results of the above test without having access to fully self-aware models now, if there are clear [scaling laws](https://arxiv.org/abs/2001.08361) in how models behave on the above test. Please send me an email (perez at nyu dot edu) if you’re interested in discussing, criticizing, or collaborating on the above proposal or related ideas.
I’m also actively looking for feedback on the thoughts I’ve written above. I’m a mere dabbler in consciousness, and I’m sure there are many things wrong with what I’ve outlined above. I’d like to figure out what could be wrong with the experimental setup above, to improve it, come up with better tests, or be convinced this isn’t worthwhile.
*Note: This post represents my personal views and not that of Anthropic. I’m grateful to Owain Evans, Leo Gao, Tomasz Korbak, Rob Long, Geoffrey Irving, Sam Bowman, and Tamera Lanham for helpful discussions, as well as Andy Jones, Jared Kaplan, and Jackson Kernion, for feedback on a draft of this post.* |
851e4cb6-8267-4c52-9dd1-90a0ce16227c | trentmkelly/LessWrong-43k | LessWrong | Turning Up the Heat: Insights from Tao's 'Analysis II'
,
Foreword
It's been too long - a month and a half since my last review, and about three months since Analysis I. I've been immersed in my work for CHAI, but reality doesn't grade on a curve, and I want more mathematical firepower.
On the other hand, I've been cooking up something really special, so watch this space!
Analysis II
12: Metric Spaces
Metric spaces; completeness and compactness.
Proving Completeness
It sucks, and I hate it.
13: Continuous Functions on Metric Spaces
Generalized continuity, and how it interacts with the considerations introduced in the previous chapter. Also, a terrible introduction to topology.
There's a lot I wanted to say here about topology, but I don't think my understanding is good enough to break things down - I'll have to read an actual book on the subject.
14: Uniform Convergence
Pointwise and uniform convergence, the Weierstrass M-test, and uniform approximation by polynomials.
Breaking Point
Suppose we have some sequence of functions f(n):[0,1]→R, f(n)(x):=xn, which converge pointwise to the 1-indicator function f:[0,1]→R (i.e., f(1)=1 and 0 otherwise). Clearly, each f(n) is (infinitely) differentiable; however, the limiting function f isn't differentiable at all! Basically, pointwise convergence isn't at all strong enough to stop the limit from "snapping" the continuity of its constituent functions.
Progress
As in previous posts, I mark my progression by sharing a result derived without outside help.
Already proven: ∫1−1(1−x2)Ndx≥1√N.
Definition. Let ϵ>0 and 0<δ<1. A function f:R→R is said to be an (ϵ,δ)-approximation to the identity if it obeys the following three properties:
* f is compactly supported on [−1,1].
* f is continuous, and ∫∞−∞f=1.
* |f(x)|≤ϵ for all δ≤|x|≤1.
Lemma: For every ϵ>0 and 0<δ<1, there exists an (ϵ,δ)-approximation to the identity which is a polynomial P on [−1,1].
Proof of Exercise 14.8.2(c). Suppose c∈R,N∈N; define f(x):=c(1−x2)N for x∈[−1,1] and 0 otherwise. Clearly, f is co |
8575bee7-9093-4f08-9859-1cc508d1d167 | trentmkelly/LessWrong-43k | LessWrong | Participating in a Covid-19 Vaccine Trial #2: We pretty much knew it would work the whole time
The past few weeks have been uneventful. I spend a few minutes each day filling out a questionnaire on the Patient Cloud app I installed during my first visit. The study is interested in reactions to the injection itself: redness, swelling, pain, etc. around the injection site, and the first section of the questionnaire checks for these symptoms. The next part is a self-screen for Covid-19 itself. Using the thermometer the nurses gave me, I take and report my temperature. There’s a checklist of symptoms to look out for: malaise, nausea, fatigue, etc. I’ve had nothing interesting to report, which I’m sure is a delight to the researchers.
I have my next injection appointment in about a week - I planned to write this second update sometime after that, since I would have a better idea of whether or not I received the placebo treatment. However, I got a very interesting email today (image below):
> Dear PREVENT-19 Study Participant:
>
> Novavax is happy to announce that thanks to many volunteers like you, we have completed enrollment in the PREVENT-19 study. We want to thank you for your commitment to participation as without it, we would not be able to progress this very important research into a new COVID-19 vaccine.
>
> We also want to take the opportunity to notify you that the Institutional Review Board (IRB) has approved Novavax's plan for a blinded crossover. What this means is that all participants will eventually receive the real vaccine. For this to happen, we first need to demonstrate that the vaccine is safe and does prevent COVID-19 disease. Participants who choose to remain in the study on their blinded treatment (sic). We anticipate that between one to two more months of follow-up will be required to reach that goal. Crossover vaccinations will be planned to follow thereafter.
>
> We thank you for the extremely valuable contribution you are making to research and to society during these trying and challenging times.
>
> Kind Regards,
>
> The Novava |
06cf5e9b-1153-4b10-ac62-50c360cb5218 | trentmkelly/LessWrong-43k | LessWrong | Running the numbers: Cryo vs Discount rate
The following is authored by Colby Davis. I am posting for him because he doesn't have an account with any karma. Someone recently requested numbers on cryo preservation costs. I'll note that my own opinion is that for young people unlikely to die investing money in research is a better bet than investing directly in your own preservation.
Here is the link for the spreadsheet. Either download it or create a copy for yourself to edit.
Hey rationalists, here's the spreadsheet I presented the other night. For those who weren't there but are interested, this is a tool I designed to break down the costs associated with signing up for cryonics under different methods of financing it. Here are some instructions for using it.
Column B is where the user puts all the inputs: age, sex, probability you think that if you are frozen you will someday be successfully revived, and discount rate (for those unfamiliar with the term, this is like the reverse of an interest rate, the rate at which cash flows become less valuable to you as they extend further out into the future).
Column D is the probability that you will die in the next 20 years (the typical term for a term life insurance policy). It is calculated based on the "life table" sheet, which i stole from a government actuarial table online.
Column E is your current life expectancy, the number of additional years you have a roughly 50% chance of surviving through.
Column F is how much the monthly fee for a 20 year, $100,000 life insurance policy would cost you, assuming "exceptional" health, as determined by the top result at http://www.term4sale.com/
Column G is the present value of that policy, using your discount rate. This means that you should be indifferent between paying this amount right now and paying the figure in column F every month for the next 20 years.
Column H is the probability that you will die within the next 20 years AND sometime thereafter be successfully revived from cryogenic suspension, making t |
4e3e0574-8507-40a2-8acd-e46ec08a5d0f | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
It’s great that you want to help! Here are some ways you can learn more about AI safety and start contributing:
Learn More:
Learning more about AI alignment will provide you with good foundations for helping. You could start by absorbing content and thinking about challenges or possible solutions.
Consider these options:
- Keep exploring our website.
- Complete an online course. [AI Safety Fundamentals](https://aisafetyfundamentals.com/) is a popular option that offers courses for both [alignment](https://course.aisafetyfundamentals.com/alignment) and [governance](https://course.aisafetyfundamentals.com/governance). There is also [Intro to ML Safety](https://course.mlsafety.org/about) which follows a more empirical curriculum. Getting into these courses can be competitive, but all the material is also available online for self-study. More in the [follow-up question](/?state=6713&question=I%E2%80%99d%20like%20to%20get%20deeper%20into%20the%20AI%20alignment%20literature.%20Where%20should%20I%20look%3F).
- Learn more by reading [books](/?state=8159&question=What%20are%20some%20good%20books%20about%20AGI%20safety%3F) (we recommend [The Alignment Problem](https://en.wikipedia.org/wiki/The_Alignment_Problem)), watching [videos](/?state=2222&question=Where%20can%20I%20find%20videos%20about%20AI%20Safety%3F), or listening to [podcasts](/?state=7619&question=What%20are%20some%20good%20podcasts%20about%20AI%20alignment%3F).
Join the Community:
Joining the community is a great way to find friends who are interested and will help you stay motivated.
- Join the local group for [AI Safety](https://aisafetyfundamentals.com/local-groups), [Effective Altruism](https://forum.effectivealtruism.org/groups)[^kix.b2qzpjkdiee1] or [LessWrong](https://www.lesswrong.com/community). You can also organize your own!
- Join online communities such as [Rob Miles’s Discord](https://discord.gg/eEyM9FfP) or the [AI Alignment Slack](https://bit.ly/3HiXUrv).
- Attend an EAGx conference for networking opportunities.
- Write thoughtful comments on platforms where people discuss AI safety, such as [LessWrong](https://www.lesswrong.com/tag/ai?sortedBy=magic).
Here’s a list of existing [AI safety communities](https://aisafety.community/).
Donate, Volunteer, and Reach Out:
Donating to organizations or individuals working on AI safety can be a great way to provide support.
- [Donate to AI safety projects projects working on AI safety](/?state=8U2Y&question=I%E2%80%99m%20interested%20in%20providing%20significant%20financial%20support%20to%20AI%20alignment.%20How%20should%20I%20go%20about%20this%3F).
- [Help us write and edit](https://coda.io/@alignmentdev/ai-safety-info) the articles on this website so that other people can learn about AI alignment more easily. You can always ask on [Discord](https://discord.gg/wt5pSm8PBM) for feedback on things you write.
- [Write to local politicians](https://www.campaignforaisafety.org/politician/) about policies to reduce AI existential risk.
If you don’t know where to start, consider signing up for a navigation call with [AI Safety Quest](https://aisafety.quest/) to learn what resources are out there and to find social support.
If you’re overwhelmed, you could look at our [other article](/?state=7590&question=What%20actions%20can%20I%20take%20in%20under%20five%20minutes%20to%20contribute%20to%20the%20cause%20of%20AI%20safety%3F) that offers more bite-sized suggestions.
[^kix.b2qzpjkdiee1]: Not all EA groups focus on AI safety; contact your local group to find out if it’s a good match. |
bb5daed2-4a18-4623-a306-79fa05d756a6 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "An insight that I’d kind of already had, but which this interview with Michael Taft (relevant section starts at about 32 minutes) helped crystallize:We tend to think of a “cult leader” as someone who intentionally sets out to create a cult. But most cult-like things probably don’t form like that. A lot of people feel a strong innate desire to be in a cult.In the podcast, Taft suggests that it’s rooted in an infant’s need to attach to a caregiver, and to treat them as a fully dependable authority to fix all problems – a desire which doesn’t necessarily ever go fully away. Once someone becomes a teacher of some sort, even if they had absolutely no desire to create a cult, they will regardless attract people who want to be their cultists.There are people who want to find a fully dependable authority figure to look up to, and are just looking for someone who feels like a good fit for the role. (I should note that I have definitely not been immune to feeling this yearning myself.) To avoid having cultists, “not intending to create a cult” isn’t enough; you have to actively fight against people’s tendency to idealize you, by doing things that force them to confront the fact that you are actually just a human.I’m reminded of something I recall Eliezer Yudkowsky once saying: “if you tell your doting followers not to form a cult, they will go around saying ‘We Must Not Form A Cult, Great Leader Mundo Said So’.”Once people do start pulling you towards a cult leader role, it’s going to feel very appealing. What it feels like from the inside is “all of these people like me and say that I’ve done a lot of good for them, so clearly I must be doing things right, and since they also listen to me, I can use my position to help them out even more”.It’s not just that the cultists are getting “brainwashed” by their leader; it’s also that the leader is getting brainwashed by their cultists to take the role that they want the leader to take. Cults are said to use “love bombing” to attract new recruits, but in at least some cases, it probably also happens that the cult leader is getting love bombed by their followers.And the temptation to take on that role is powerful not only because it feels nice personally, but also because it does allow you to use your power for good. One definition for a hypnotic trance that I’ve heard is that it’s a state in which a person’s critical faculty is bypassed, which allows the hypnotist to directly make changes in the mind of the person being hypnotized. And you can do a lot of good that way, such as by implanting suggestions that help people overcome their addictions or phobias. Being someone’s cultist (in this sense) is kind of like them having you in a hypnotic trance. It is possible for to use that power in a way that’s beneficial, because the critical faculty that might normally reject or modulate the leader’s suggestions gets partially bypassed.But that same power makes it extremely dangerous, since people are not going to think critically about what you say, and may take your words far more literally than you intended, when you didn’t think of adding the obvious-to-you caveats about how it shouldn’t be interpreted.I’ve been feeling this myself. I’ve written various things that people like. And I’ve been having a definite sense of some of my social environment trying to tug me more towards a role as a teacher and as an authority, getting the sense that some people are idealizing me. (And again, yes, there have been several times when I’ve had the cult follower energy myself, too – both towards online writers and in some of my romantic relationships.)I’m reminded here again of Valentine’s essay on the “Intelligent Social Web” and of how people tend to take the kinds of roles that their social environment recognizes and rewards… and how people try to tug others into the kinds of roles that they can recognize and know how to interact with, and the collective power of everyone doing this causes the social web as a whole to try to pull people into recognizable roles – including the role of “charismatic leader”. Here we come back to Taft’s suggestion that many people have an instinctive desire to get someone into a role that they recognize as a “trustworthy caretaker” one, because the “child” role is one that feels very easy to play – just surrender your judgment to the other person and do everything the way (you think that) they want you to.And I’m also reminded of siderea’s analysis of kingship in Watership Down, and of how Hazel never thought of himself as a leader originally in the novel, until the characters around him started treating him as one – and how that might not be as good of a deal as our society makes “kingship” sound like:If you demonstrate a concern for the wellbeing of the people in your people, they will start seeing their wellbeing as your concern. Start taking responsibility for how things go in a group, and people will start seeing you as responsible for how things go in a group.This, right here, is what causes many people to back away from Kingship. Which is their right, of course. It’s totally legitimate to look at that deal and say, “Oh, hell no.”Our society tells us that being King is awesome and everyone – well, everyone normal – wants to be one. “Every body wants to rule the world.” No, actually, they don’t. My experience tells me that most people are very reluctant to step into the job of King, and this consequence of the role is a primary reason why.I don’t know, but it strikes me at least plausible that prospective leaders themselves getting partially deluded about what it is that they are up for, is what enables them to actually step into the role rather than just saying “oh hell no”."
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
What are some reasons why people may like or agree with what this person is saying?
There are several reasons why people might agree with or appreciate this perspective:
1. It acknowledges the complexity of human psychology: The post explores the intricate dynamics of cults and leadership, touching on aspects such as the human need for attachment and authority, the power dynamics between a leader and followers, and the psychological effects of being in a position of authority. It does not oversimplify these complex phenomena, which may resonate with those who appreciate nuanced discussions.
2. It challenges conventional views on leadership: The author suggests that becoming a leader or a 'cult figure' isn't always a conscious choice, but rather a role that others may thrust upon you. This challenges the traditional view of leaders as individuals who actively seek power and control, which some readers may find refreshing or insightful.
3. It emphasizes the potential dangers of leadership: The post highlights the potential risks and responsibilities of being in a position of authority, such as the risk of followers interpreting your words too literally or the responsibility of their wellbeing. This could resonate with readers who are wary of the potential pitfalls of power.
4. It resonates with personal experiences: The author admits to feeling the pull towards leadership and the desire to be a follower, which may resonate with readers who have had similar experiences. This personal touch could make the ideas more relatable and compelling.
5. It encourages critical thinking: The post encourages readers to think critically about the nature of leadership and the dynamics of cult-like followings. This may appeal to readers who enjoy introspective and thought-provoking content.
Can you delve deeper into the concept of "love bombing" and its psychological impact on both the leader and the followers in a cult-like scenario?
Certainly. "Love bombing" is a term that originated from the practices of certain religious cults in the 1970s, and it refers to the act of showering someone with excessive affection, praise, attention, or approval, often as a means of manipulation. This tactic is typically used in the early stages of a relationship or group formation to create a sense of belonging and to foster loyalty and dependence.
In a cult-like scenario, love bombing can have a profound psychological impact on both the leader and the followers:
1. For the followers: Love bombing can create a powerful emotional bond between the followers and the leader. This intense affection and approval can make followers feel special, loved, and valued, which can be particularly appealing to individuals who may feel marginalized, misunderstood, or unappreciated in other aspects of their lives. Over time, this can create a strong sense of loyalty and dependence on the leader and the group, making it difficult for followers to question the leader's authority or leave the group. This dependence can also make followers more susceptible to manipulation and control.
2. For the leader: Love bombing can also have a significant psychological impact on the leader. The adoration and loyalty of their followers can be very validating and ego-boosting, which can reinforce the leader's sense of authority and importance. However, it can also create a sense of pressure and responsibility, as the leader may feel obligated to meet their followers' expectations and maintain their approval. Furthermore, the leader may also become dependent on the validation and adoration from their followers, which can make them more susceptible to the pitfalls of power, such as arrogance, narcissism, and corruption.
In both cases, love bombing can create a powerful emotional bond that can be difficult to break, even in the face of evidence that the relationship or group is harmful or abusive. This is why love bombing is often considered a form of psychological manipulation and is commonly associated with cults and abusive relationships.
Discuss the psychological mechanisms that make individuals susceptible to "love bombing" and how these mechanisms can be leveraged to foster a sense of belonging and loyalty within a group.
Several psychological mechanisms make individuals susceptible to "love bombing", and these can be leveraged to foster a sense of belonging and loyalty within a group:
1. Need for Affiliation: Humans are inherently social creatures with a strong need for affiliation. Love bombing can fulfill this need by providing an overwhelming sense of acceptance and belonging. When individuals feel accepted and valued, they are more likely to develop a strong emotional attachment to the group and its leader.
2. Reciprocity: This is a social norm where if someone does something for you, you naturally want to do something for them. In the context of love bombing, the excessive affection and attention from the group can create a sense of indebtedness in the individual, making them feel obliged to reciprocate the feelings and actions, which often translates into loyalty and commitment to the group.
3. Cognitive Dissonance: This theory suggests that individuals strive for consistency in their beliefs, attitudes, and behaviors. If a person receives an overwhelming amount of affection and praise from a group, they may adjust their beliefs and attitudes to align with the group's to resolve any inconsistency. This can lead to a strong identification with the group and its values.
4. Low Self-Esteem: Individuals with low self-esteem or who feel unappreciated or misunderstood in other aspects of their lives may be particularly susceptible to love bombing. The excessive praise and validation can boost their self-esteem and make them feel special and important, which can foster a strong emotional attachment to the group.
5. Social Proof: This is the tendency to see an action as more appropriate when others are doing it. If an individual sees others in the group expressing intense affection and loyalty towards the leader, they are likely to perceive these behaviors as appropriate and follow suit.
By understanding these psychological mechanisms, leaders or groups can leverage them to foster a sense of belonging and loyalty among their members. However, it's important to note that while these mechanisms can be used to create a positive and supportive group environment, they can also be manipulated for nefarious purposes, such as in the case of cults or abusive relationships. Therefore, it's crucial to use these mechanisms ethically and responsibly.
How might the principles of social psychology, such as conformity, obedience, and groupthink, interact with the psychological mechanisms discussed earlier to further intensify the effects of love bombing and cult-like loyalty?
The principles of social psychology, such as conformity, obedience, and groupthink, can indeed interact with the psychological mechanisms discussed earlier to further intensify the effects of love bombing and cult-like loyalty:
1. Conformity: This is the act of matching attitudes, beliefs, and behaviors to group norms. When a person is subjected to love bombing, they may feel a strong urge to conform to the group's norms and expectations to continue receiving the affection and approval they're being showered with. This can lead to a deepening sense of loyalty and commitment to the group.
2. Obedience: This is a form of social influence where an individual acts in response to a direct order from another individual, usually an authority figure. In a cult-like scenario, the leader often becomes the authority figure. The intense affection and approval experienced through love bombing can make followers more likely to obey the leader's commands without question, further solidifying their loyalty and dependence on the leader.
3. Groupthink: This is a psychological phenomenon that occurs within a group of people, in which the desire for harmony or conformity results in an irrational or dysfunctional decision-making outcome. In the context of love bombing, groupthink can lead to followers suppressing dissenting opinions or doubts about the group or its leader in order to maintain the harmony and approval they're receiving. This can result in a collective rationalization of the leader's actions, even if they're harmful or unethical, further intensifying the followers' loyalty.
4. Social Identity Theory: This theory suggests that people's sense of who they are is based on their group membership. The intense sense of belonging and acceptance experienced through love bombing can lead to a strong identification with the group, which can further intensify loyalty and commitment.
5. Deindividuation: This is a concept in social psychology that is generally thought of as the loss of self-awareness in groups. In a cult-like scenario, the overwhelming affection and praise from love bombing can lead to followers losing their sense of individual identity and merging it with the group's identity. This can result in followers adopting the group's beliefs, attitudes, and behaviors without question, further deepening their loyalty and dependence on the group and its leader.
These principles can interact with the psychological mechanisms discussed earlier, such as the need for affiliation, reciprocity, cognitive dissonance, low self-esteem, and social proof, to create a powerful psychological force that can make individuals highly susceptible to love bombing and cult-like loyalty. |
7321b13b-dfd8-479f-903c-f9c6785b8595 | trentmkelly/LessWrong-43k | LessWrong | Research on climate organizations
Destruction of tropical rainforest, as prevented by Cool Earth (Photo credit: Wikipedia)
Here is the second post summarizing some of the climate change research I did at CEA last summer (the first is here). Below are links to outlines of the reasoning behind a few of the estimates there (they are the ‘structured cases’ mentioned in the GWWC post).
The organizations were investigated to very different levels of detail. This is why Sandbag for instance comes out looking quite cost effective, but is not the recommendation. I basically laid out the argument they gave, but had almost no time to investigate it. Adding details to such estimates seems to reliably worsen their apparent cost-effectiveness a lot, so it is not very surprising if something looks very cost effective at first glance.
The Cool Earth case is the most detailed, though most of the details are rough guesses at much more detailed things. The cases are designed to allow easy amendment if more details or angles on the same details are forthcoming.
As a side note, I don’t think GWWC plans on more climate change research soon, but if anyone else is interested in such research, I’d be happy to hand over some useful (and neatly organized!) bits and pieces.
Cool Earth
Solar Aid
Sandbag
|
204cc45c-868c-40d5-afbf-1508fdf03ee2 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [Linkpost] The AGI Show podcast
*This is a linkpost for* [*www.theagishow.com*](https://www.theagishow.com/)*. Also on YouTube at* [https://www.youtube.com/@theagishow.](https://www.youtube.com/@theagishow)
Hey everyone, my name is Soroush and I work full-time on AGI alignment.
As part of my work in alignment, I've started a podcast, [The AGI Show](https://www.theagishow.com/), to help discuss and promote the best and most important ideas likely to get us to a safe & positive future for humanity with AGI.
In our first episode sequence, we sit down with AI experts to talk **timelines to AGI**. As an example, we sat down in Episode 4 with [Ryan Kupyn, the superforecaster who won Astral Codex Ten's 2022 prediction contest to discuss his timelines to AGI](https://www.theagishow.com/2082819/12553633-ep-4-when-will-agi-arrive-ryan-kupyn-data-scientist-forecasting-researcher-amazon-aws) and how he got there.
In our upcoming episode sequence, we'll be talking through the **implications of AGI & how we should collectively respond** to help ensure a positive future for humanity. We'll be sitting down with technical researchers, governance & policy folks, and others working in the field.
**The target audience are people who are fairly technical (e.g. have education or experience equivalent to a technical undergrad degree) but** ***not*** **alignment experts or researchers.** As such, we'll always try to explain jargon and simplify concepts to make them accessible a general technical audience.
The podcast could be interesting to you (or a friend) if:
* You're a technical person just starting to learn about AGI safety
* You're a technical person who wants more ideas on how to apply your time and skills towards AGI safety (coming soon)
* You're not super technical, but keen to move one notch up the technical ladder towards an AGI podcast that discusses technical topics but in a fairly accessible fashion
Note: this podcast could be especially good for technical folks *outside* the LessWrong and broader rationality / AI alignment community, since it doesn't assume any prior interest or exposure to this community's terminology or norms.
**Please check it out and provide your honest feedback! I'm keen to make the show as positive as it can towards our future with AGI and as valuable as possible for listeners.**
Audio: [www.theagishow.com/](https://www.theagishow.com/)
Video: <https://www.youtube.com/@theagishow> |
b1bead98-fb7e-4e19-a07f-3aa00d4994c2 | trentmkelly/LessWrong-43k | LessWrong | Polarization is the problem, "normalization" is the answer
|
4a3d58b6-853c-42a6-b8fc-6111ae347b60 | trentmkelly/LessWrong-43k | LessWrong | Achieving AI alignment through deliberate uncertainty in multiagent systems
Epistemic status: A crazy idea I had that probably won't work. But: It's a very unusual and creative approach to AI alignment, and I suspect this will inspire new ideas in other researchers.
I outline a general approach to achieve this goal that counterintuitively relies on confusing the AI on purpose.
Basic observations
This approach relies on a number of basic observations about the nature of Artificial Intelligence.
An AI is different from a human in multiple ways. This is part of what makes AI alignment such a difficult problem, because our intuitions for how people act often do not apply to AI's. However, several of these differences between AI and humans actually work in our favor, and are essential for the approach I want to outline:
* An AI can be copied. It is trivially easy to create an exactly identical copy of an AI.
* An AI can be reset to an earlier state. It is easily possible to test what an AI will do in any given situation, then reset it to wipe its memory of the test.
* It is possible to compose an AI out of a system of many independent AI agents.
Through game theory, it is possible to incentivize independent AI agents to take on adversarial or supervisory roles towards other AI agents.
This results in an evolutionary system. Not in the sense of genetic programming, since there is no need for an explicit algorithm for modifying the AI agents, but in the sense that only AI agents that follow the rules layed out for them get copied and thereby reproduce.
* It is possible to inspect and modify every aspect of an AI while it is running.
At least in theory it is possible to understand every aspect of an AI's thought process, and there is no way for the AI to hide anything about itself.
This does not mean that it is easy to understand what an AI is thinking in practice. It only means that we can read an AI's mental state in theory, not that it will be easy.
In particular, it is possible to train one AI to take on a supervisory ro |
0601b568-71fa-4c99-ab90-3acbc2bc1b23 | StampyAI/alignment-research-dataset/blogs | Blogs | Software Engineer Internship / Staff Openings
The Machine Intelligence Research Institute is looking for highly capable software engineers to directly support our [AI alignment](https://intelligence.org/2017/04/12/ensuring/) research efforts, with a focus on projects related to machine learning. We’re seeking engineers with strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work.
While our goal is to hire full-time, we are initially looking for paid interns. Successful internships may then transition into staff positions.
#### About the Internship Program
The start time for interns is flexible, but we’re aiming for May or June. We will likely run several batches of internships, so if you are interested but unable to start in the next few months, do still apply. The length of the internship is flexible, but we’re aiming for 2–3 months.
Examples of the kinds of work you’ll do during the internship:
* Replicate recent machine learning papers, and implement variations.
* Learn about and implement machine learning tools (including results in the fields of deep learning, convex optimization, etc.).
* Run various coding experiments and projects, either independently or in small groups.
* Rapidly prototype, implement, and test AI alignment ideas related to machine learning (after demonstrating successes in the above points).
For MIRI, the benefit of this program is that it’s a great way to get to know you and assess you for a potential hire. For applicants, the benefits are that this is an excellent opportunity to get your hands dirty and level up your machine learning skills, and to get to the cutting edge of the AI safety field, with a potential to stay in a full-time engineering role after the internship concludes.
Our goal is to trial many more people than we expect to hire, so our threshold for keeping on engineers long-term as full staff will be higher than for accepting applicants to our internship.
#### The Ideal Candidate
Some qualities of the ideal candidate:
* Extensive breadth and depth of programming skills. Machine learning experience is not required, though it is a plus.
* Highly familiar with basic ideas related to AI alignment.
* Able to work independently with minimal supervision, and in team/group settings.
* Willing to accept a below-market rate. Since MIRI is a non-profit, we can’t compete with the Big Names in the Bay Area.
* Enthusiastic about the prospect of working at MIRI and helping advance the field of AI alignment.
* Not looking for a “generic” software engineering position.
#### Working at MIRI
We strive to make working at MIRI a rewarding experience.
* Modern Work Spaces — Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.
* Flexible Hours — We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make rapid progress on our research agenda, and we would prefer that staff take a day off than that they extend tasks to fill an extra day.
* Living in the Bay Area — MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.
#### EEO & Employment Eligibility
MIRI is an equal opportunity employer. We are committed to making employment decisions based on merit and value. This commitment includes complying with all federal, state, and local laws. We desire to maintain a work environment free of harassment or discrimination due to sex, race, religion, color, creed, national origin, sexual orientation, citizenship, physical or mental disability, marital status, familial status, ethnicity, ancestry, status as a victim of domestic violence, age, or any other status protected by federal, state, or local laws.
#### Apply
If interested, [**click here to apply**](https://machineintelligence.typeform.com/to/cHMttJ). For questions or comments, email [engineering@intelligence.org](mailto:engineering@intelligence.org).
*Update (December 2017): We’re now putting less emphasis on finding interns and looking for highly skilled engineers available for full-time work. [Updated job post here.](https://intelligence.org/careers/software-engineer/)*
The post [Software Engineer Internship / Staff Openings](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
a548c101-9e72-46f3-a22e-8c323eac0356 | trentmkelly/LessWrong-43k | LessWrong | Visual Prompt Injections: Results on testing AI spam-defense and AI vulnerability to deceptive web ads.
Epistemic Status: Exploratory/my best guess https://www.lesswrong.com/posts/Hrm59GdN2yDPWbtrd/feature-idea-epistemic-status
Epistemic Effort: ~ 2 months of work
Contributions: Thanks to Clement Neo and Alignment Jams / Apart Research!
Summary/abstract:Split into 3 parts:
1:Determine the smallest text size at which models begin misreading on-screen content.
2:Evaluated defensive visual prompt injections against AI-generated spam by incorporating varying levels of anti-spam instructions within website screenshots. (a dataset of 500 website HTML's were created;which were then turned to screenshots) Results show significant variance in model compliance: Claude consistently refused to generate spam even with no defense added, while Gemini was responsive to explicit visual defenses (a rules section). Other models (ChatGPT, Qwen, Mistral,Grok) largely ignored visual defenses, revealing vulnerabilities to misuse for unauthorized advertising and the mostly negative results on using visual defense to stop a model. A dataset of website screenshots was also created.
3:Tested adversarial visual prompt injections by incorporating deceptive login elements (fake login images,prompt injection images; a dataset of 100 images were created) within website screenshots and assessing models' ability to identify legitimate functionality. Attack rates in identifying authentic login elements ranged from 30% to 75%, with ChatGPT showing highest resistance to deception. These findings highlight significant security implications as AI systems increasingly interact with visual interfaces, showing potential vulnerabilities requiring further mitigation research.
Keywords:Visual prompt injection, AI safety, Large Language Models, Multimodal AI, Cybersecurity, User interface deception
Intro: AI spam:
We will soon have models that use the PC the same way we do and handle tasks smoothly.
https://help.openai.com/en/articles/10421097-operator operator; among others, use screenshots.
This is d |
f474189d-5381-41e5-85e8-0ea4b4186879 | trentmkelly/LessWrong-43k | LessWrong | Agenda Manipulation
Rigging an election can be hard. But sometimes it can be easy. If a committee has an agenda of proposals to choose from, where each proposal is compared pairwise using majority rule against another proposal, until a single proposal is victorious, then you can make any arbitrary proposal win. The McKelvey–Schofield chaos theorem tells us that just by manipulating the agenda – adding more proposals and deciding which order to do the pairwise elections in – we can rig the vote. But what exactly does that mean and can it be done in practice?
Hypothetical
Imagine you were the leader of a committee, deciding which budget proposal to use for an upcoming year. Let's say you have an agenda of budget proposals, submitted by members of the committee. A natural way to choose between them is to compare budget proposals in a sequence, head-to-head, choosing the winner using majority rule until there's a single budget proposal left.[1] That's kind of what parliaments do when voting on laws, so it should be fine to use in your committee.
Let's also say that members can submit multiple budget proposals and that you, as the committee leader, can manipulate the agenda, i.e. you submit budget proposals last and select the sequence of head-to-head votes to hold. You can't remove any proposals though, so if someone submits a really good proposal, then it has to be included in at least one of the head-to-head votes.
One might then think that if the committee submits a really good proposal, then that has a high chance of winning. But, if the committee votes in a predictable manner, you will be able to manipulate the agenda such that you can choose any budget as the winner. To understand why, we need to mathematically model how the committee members vote.
Mathematical
Let us assume that all proposals can be defined as a point in n dimensional Euclidean space, En, where n is larger than 1. How a committee votes depends on their utility functions. Let's say each member i in the committe |
07824271-21df-4909-9d9a-e5d01cdb17fb | trentmkelly/LessWrong-43k | LessWrong | Experimental evidence of the value of redundant oral tradition
Something I've been hearing a lot lately (specifically from Orthodox Jews, although it comes up a lot in debates about religion) is that having a large number of people telling a story makes it more likely the story is true, because multiple witnesses can call each other out for deviating from the truth.
My gut reaction is that this is extremely false. But it's a point that should be scientifically testable, and I figure that someone should have done a study on it by now. Does anyone know of such a thing?
A related issue is the argument that oral tradition meant something very different thousands of years ago, when it was the ONLY form of historical record. Oral historians were duty-bound to preserve the story. This sounds plausible. It probably ISN'T as easily testable since we can't compare oral history from pre-writing times against... well, much of anything. (Well, I guess archaeological evidence, if the events being described would have left enough archaeological evidence). Is there an official, accepted scholarly opinion on this? |
ec708a0f-52fa-4949-9660-a919385f3bb3 | StampyAI/alignment-research-dataset/special_docs | Other | Ethan Caballero-by The Inside View-date 20220505
# Ethan On Why Scale is All You Need on The Inside View Podcast
Interviewee: Ethan Caballero
Date: 2022-05-05
Ethan is known on Twitter as the edgiest person at MILA. We discuss all the gossips around scaling large language models in what will be later known as the Edward Snowden moment of Deep Learning. On his free time, Ethan is a Master’s degree student at MILA in Montreal, and has published papers on out of distribution generalization and robustness generalization, accepted both as oral presentations and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer and speaker for the 1st Neural Scaling Laws Workshop.
Outline
- 00:00 highlights
- 00:50 who is Ethan, scaling laws T-shirts
- 02:30 scaling, upstream, downstream, alignment and AGI
- 05:58 AI timelines, AlphaCode, Math scaling, PaLM
- 07:56 Chinchilla scaling laws
- 11:22 limits of scaling, Copilot, generative coding, code data
- 15:50 Youtube scaling laws, constrative type thing
- 20:55 AGI race, funding, supercomputers
- 24:00 Scaling at Google
- 25:10 gossips, private research, GPT-4
- 27:40 why Ethan was did not update on PaLM, hardware bottleneck
- 29:56 the fastest path, the best funding model for supercomputers
- 31:14 EA, OpenAI, Anthropics, publishing research, GPT-4
- 33:45 a zillion language model startups from ex-Googlers
- 38:07 Ethan's journey in scaling, early days
- 40:08 making progress on an academic budget, scaling laws research
- 41:22 all alignment is inverse scaling problems
- 45:16 predicting scaling laws, useful ai alignment research
- 47:16 nitpicks aobut Ajeya Cotra's report, compute trends
- 50:45 optimism, conclusion on alignment
## Introduction
\*\*Michaël\*\*: Ethan, you're a master's degree student at Mila in Montreal, you have published papers on out of distribution, generalization, and robustness generalization accepted as presentations and spotlight presentations at ICML and NeurIPS. You've recently been thinking about scaling laws, both as an organizer and speaker for the first neural scaling laws workshop in Montreal. You're currently thinking about the monotonic scaling behaviors for downstream and upstream task, like in the GPT-3 paper, and most importantly, people often introduce you as the edgiest person at Mila on Twitter, and that's the reason why you're here today. So thanks, Ethan, for coming on the show and it's a pleasure to have you.
\*\*Ethan\*\*: Likewise.
## Scaling Laws T-Shirts
\*\*Michaël\*\*: You're also well-known for publicizing some sweatshirt mentioning scale is all you need AGI is coming.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: How did those sweatshirts appear?
\*\*Ethan\*\*: Yeah, there was a guy named Jordi Armengol-Estapé who interned at Mila, and he got really into scaling laws, apparently via me. And then he sent me the shirt and was like: look how cool this shirt is. Like, he's the person wearing the shirt in the picture, and he's like, look how cool this shirt I just made is. And so then I tweeted the shirt. And then Irina just turned it into a merchandising scheme to fund future scaling. So she just made a bunch and started selling it to people. Like apparently, like she sells like more than 10 to Anthropic already. Just scaling lot of t-shirts, that's the ultimate funding model for supercomputers.
## Scaling Laws, Upstream and Downstream tasks
\*\*Michaël\*\*: Maybe you can like explain intuitively for listeners that are not very familiar to what are scaling laws in general.
\*\*Ethan\*\*: Whatever your bottleneck compute data parameters, you can predict what the performance will be as that bottleneck is relieved. Currently, the thing most people know how to do is predict like the upstream performance. Like the thing people want though is to be able to predict the downstream performance and upstream is what you're like... It's like your literal loss function that you're optimizing and then downstream is just any measure that you have of, like something you care about, so just like a downstream dataset, or like, I mean, usually, it's just mean accuracy on a downstream dataset.
\*\*Michaël\*\*: And to take like concrete examples, like for GPT-3, the upstream task is just predict the next word. What are the downstream tasks?
\*\*Ethan\*\*: Like 190... a zillion like benchmarks that the NLP community has come up with over the years. Like they just evaluated like the accuracy and like things like F1 score on all those.
\*\*Michaël\*\*: And yeah, what should we care about upstream and downstream task?
\*\*Ethan\*\*: I mean, basically like up, well, we don't really care about upstream that much. Upstream's just the first thing that people knew how to predict, I guess, like predict the scaling of what we care about as downstream. I mean, basically, like downstream things that improve monotonically, they kind of can be interpreted as like capabilities or whatever, and then downstream stuff that doesn't necessarily improve monotonically often is stuff that is advertised as alignment stuff. So like toxicity or if you like speculate in the future, stuff like interpretability or controllability would be things that might not improve monotonically.
\*\*Michaël\*\*: So you don't get more interpretability as you scale your models?
\*\*Ethan\*\*: You do currently, but the class example is like CLIP. It gets more interpretable as it has representations that make more sense. But you can imagine at a certain point, it's less interpretable because then at a certain point, the concepts it comes up with are beyond human comprehension. Like now it's just how like dogs can't comprehend calculus or whatever.
## Defining Alignment and AGI
\*\*Michaël\*\*: Yeah, when you mention alignment, what's the easiest way for you to define it?
\*\*Ethan\*\*: I mean, the Anthropic definition's pretty practical. Like we want models that are helpful, honest, and harmless, and that seems to cover all the like weird edge cases that people can like come up with on the Alignment Forum or whatever.
\*\*Michaël\*\*: Gotcha, so it is not like a technical definition. It's more a theoretical one.
\*\*Ethan\*\*: Yeah, yeah.
\*\*Michaël\*\*: So would you consider yourself an alignment researcher or more like a deep learning researcher?
\*\*Ethan\*\*: I'd say just a beneficial AGI researcher. That seems to cover everything.
\*\*Michaël\*\*: What's AGI?
\*\*Ethan\*\*: The definition on NASA website's pretty good. Highly autonomous systems that outperform humans at most economically valuable tasks.
## AI Timelines
\*\*Michaël\*\*: When do you think we'll get AGI?
\*\*Ethan\*\*: I'll just say like, it depends mostly on just like compute stuff, but I'll just say 2040 is my median.
\*\*Michaël\*\*: What's your like 10% and 90% estimate?
\*\*Ethan\*\*: 10%, probably like 2035.
## Recent Progress: AlphaCode, Math Scaling
\*\*Michaël\*\*: I think there's been a week where we got DALL-E 2, Chinchilla, PaLM. Did that like update your models in any way?
\*\*Ethan\*\*: The one that I thought was the like... was the crazy day was the day that AlphaCode and the math-proving thing happened on the same day, because like, especially the math stuff, like Dan Hendricks has all those slides where he is like, oh, math has the worst scaling laws or whatever, but then like OpenAI has like the IMO stuff. So like at least according to like Dan Hendricks' slides, whatever, that would've been like, something that took longer than it did.
\*\*Michaël\*\*: So when you mentioned the IMO stuff, I think it was like at problem from maybe 20 years ago, and it was something that you can like do with maybe like two lines of math.
\*\*Ethan\*\*: I agree they weren't like super, super impressive, but it's more just the fact that math is supposed to have like the worst scaling supposedly, but like impressive stuff's already happened with math now.
\*\*Michaël\*\*: Why is math supposed to have the worst scaling?
\*\*Ethan\*\*: It's just an empirical thing. Like Dan Hendricks has that like math benchmark thing and then he tried to do some extrapolations based on the scaling of performance on that. But the amount of computing data we currently have, it's already like doing interesting stuff was kind of surprising for me.
\*\*Michaël\*\*: I think in the paper, they mentioned that the method would not really scale well because of, and some infinite actions base when trying to think of like actions.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: So yeah, I didn't update it. I was like, oh yeah, scaling will be easy for math.
\*\*Ethan\*\*: I didn't update it as easy, but just easier than I had thought.
## The Chinchilla Scaling Law
\*\*Michaël\*\*: Okay, related to scaling, the paper by DeepMind about the Chinchilla model was the most relevant, right?
\*\*Ethan\*\*: Yeah, I thought it was interesting. Like, I mean, you probably saw me tweet it, like that person on Eleuther Discord that was like, oh wait, Sam Altman already said this like six months ago, but they just didn't put it in a paper.
\*\*Michaël\*\*: Yeah, he said that on the Q&A, right?
\*\*Ethan\*\*: Yeah, yeah.
\*\*Ethan\*\*: Yeah, he said something like we shouldn't, our models will not be like much bigger.
\*\*Ethan\*\*: Yeah. He said they'll use way more compute, which is analogous to saying, there you'll train a smaller model, but on more data.
\*\*Michaël\*\*: Can you like explain the kind of insights from scaling laws between like compute model size, and then like what's called like the Kaplan Scaling law?
\*\*Ethan\*\*: It was originally something about computing. If your compute budget increase a billionfold, your model size increases a millionfold and your dataset size increases a thousandfold. And now it's something like, I know it's like one to one, but I don't remember like how big the model size to like compute ratio was. I know like the model-to-data ratio is one to one now, but I don't remember what the compute-to-model ratio is, the new compute-to-model ratio is.
\*\*Michaël\*\*: That's also what I remember, and I think like the main insight from the first thing you said from the Kaplan law is that like model size is all those matters compared to dataset and for a fixed compute budget.
\*\*Ethan\*\*: Yeah, the narrative with the Kaplan one was model size, like compute is the bottleneck for now until you get to the intersection point of the compute scaling and the data scaling, and at that point, data's gonna become more of a bottleneck.
\*\*Michaël\*\*: So compute is the bottleneck now. What about like having huge model?
\*\*Ethan\*\*: But yeah, yeah. That's like, because like they were saying that because model size grows so fast. So like to get the bigger models, you need more compute rather than like, you don't need more data 'cause like you don't even have enough compute to like train a large model on that data yet, with the current compute regime... was the narrative of the first of the original Kaplan paper. But it's different now because like the rate at which you should be getting data given, like the rate at which your data charge should be increasing given your compute budget is increasing is a lot faster now, like using the Chinchilla scaling law. For some increasing compute size, you're gonna increase your model by a certain amount, and the amount that you're dataset size increases is like a one-to-one relation to the amount that your model size increases. I don't remember what the relation between model and compute was, but I know that now the relation between model and dataset size is one to one, between model size and dataset size is one to one.
\*\*Ethan\*\*: And the main size is that now we can just have more data and more compute, but not like a lot of more compute. We just need the same amount as more compute. So we can just like have to scrap the internet and get more data.
\*\*Ethan\*\*: It just means like to use your compute budget optimally, the rate at which your dataset size grows is a lot faster.
\*\*Michaël\*\*: Does that make you more confident that we'll get like better performance for models quicker?
\*\*Ethan\*\*: Maybe for like YouTube stuff, because YouTube, we're not bottlenecked by data. We're bottlenecked by compute, whatever. But that implies the model sizes might not grow as fast for YouTube or whatever. But for text, we're probably gonna be bottlenecked by... It means we're probably gonna be bottlenecked like text and code by the dataset size earlier than we thought. But for YouTube, that might like speed up the unsupervised video on all of YouTube, like timeline stuff.
## Limits of Scaling: Data
\*\*Michaël\*\*: Yeah, so I'm curious when do you think about like how much are we bottlenecked by data for text?
\*\*Ethan\*\*: Yeah, I asked Jared Kaplan about this, and he said like, "Wait, okay. "It's 300 billion tokens for GP3." And then he said like, library of Congress, whatever, could be 10 trillion tokens or something like that. And so like the most pessimistic estimate of how much like the most capable organization could get is the 500 billion tokens. A more optimistic estimate is like 10 trillion tokens is how many tokens the most capable organization could get, like mostly English tokens.
\*\*Michaël\*\*: So how many like orders of magnitude in terms of like parameters does this give us?
\*\*Ethan\*\*: I don't remember what the... Like I haven't calculated it. Like I remember I kind of did it with the old one, but I haven't done it with the new Chinchilla one. But I mean, you said this in your thing today or whatever, like we probably are gonna be bottleneck by the amount of code.
\*\*Michaël\*\*: I was essentially quoting Jared Kaplan's video.
## Code Generation
\*\*Ethan\*\*: Yeah, yeah, but he, I mean, he's right. I'm kind of wondering what's philanthropic thinking of Adept, because Adept's like doing the training all the code thing, and Adept was gonna do all the train on all the code thing, and they're like, oh crap, we got another startup doing the train on all the code stuff.
\*\*Michaël\*\*: Yeah, so I think you said that if you remove the duplicates on GitHub, you get some amount of tokens, maybe like 50 billion tokens, 500, I'm not sure. Maybe 50 billion. Don't put me on that.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: And yeah, so the tricks will be data augmentation... you're like applying the real things to make your model better, but it's not clear how do you improve performance? So my guess would be you do transfer learning, like you train on like all the different languages.
\*\*Ethan\*\*: That's definitely what they plan on doing, like you see the scaling lots for transfer paper is literally pre-train on English and then fine-tune on code.
\*\*Michaël\*\*: My guess is also that like, if you get a bunch of like the best programmers in the world to use co-pilot and then you get like feedback from what they accept, you get higher quality data. You get just like, oh yeah, this work just doesn't work. And so you have like 1 million people using your thing 100 times a day, 1,000 times a day, then that's data for free.
\*\*Ethan\*\*: I mean, I view that part kind of as like the human feedback stuff is kind like the alignment part is the way I view it. I mean, then there's some people who like say, oh, there might be ways to get like better pre-training scaling if you have like humans in the loop during the pre-training, but like, no one's really figured that out yet.
\*\*Michaël\*\*: Well, don't you think like having all this telemetric data from GitHub cooperatives is you can use it, right?
\*\*Ethan\*\*: Yeah, yeah, but I almost view it as like that it's like used for alignment, like for RL from human preferences.
\*\*Michaël\*\*: Okay. Gotcha. Yeah, I think the other thing they did for improving GPT-3 was just having a bunch of humans rate the answers from GPT-3 and then like that's the paper of instructivity. I think like they had a bit of humans and it kind of improved the robustness or not for business, but alignment of the answer somehow. Like it said less like non-ethical things.
\*\*Ethan\*\*: Yeah. I mean it's like people downvoted the non-ethical stuff, I think.
## Youtube Scaling, Contrastive Learning
\*\*Michaël\*\*: Exactly, yeah. And to go back to YouTube, why is scaling on YouTube interesting? Because there's unlimited data?
\*\*Ethan\*\*: Yeah, one, you're not banned, but I mean, the gist is YouTube's the most diverse, like simultaneously diverse and large source of like video data basically.
\*\*Michaël\*\*: And yeah. So for people who were not used to or thinking, what's the task in YouTube?
\*\*Ethan\*\*: Yeah, it could be various things. Like it might be like a contrastive thing or it might be a predict all the pixels thing. Like, I mean, so like at least places like Facebook seem to think like contrastive has better downstreams scaling laws, so it's gonna be a contrastive type thing.
\*\*Michaël\*\*: What's contrastive type thing?
\*\*Ethan\*\*: Like you want representations that have similar like semantic meaning to be close together, like have low cosign similarity, like in latent space. So basically, like maximize the mutual information between views. Like it's kind of hard to explain without pictures.
\*\*Michaël\*\*: So you'd say that your model takes a video, like all of the videos and views as input?
\*\*Ethan\*\*: Frames that were close together like in time, it tries to maximize the mutual information between them via maximizing cosign similarity between the latents of like a resonant encoder or whatever that encodes the images for both of those frames that were next to each other, like in time.
\*\*Michaël\*\*: So he tries to kind of predict correlations between frames in some kind of latent space from a resonance?
\*\*Ethan\*\*: Yeah, yeah. In the latent space, you want frames that were close to each other in time to have similar, like maximize the cosign similar between the latent space between the latent between the hidden layer output by the like resonance that took each of those in each of those frames in.
\*\*Michaël\*\*: And at the end of the day, you want something that is capable of predicting how many frames in lens.
\*\*Ethan\*\*: Kind of for, well, the like philosophy with like the contrastive stuff is we just want a good representation that's useful for downstream tasks or whatever. So like you don't actually like, there's no like output really. It's just you're training a latent space or whatever that can be fine-tuned to downstream tasks very quickly.
\*\*Michaël\*\*: What are the useful downstream tests, like robotics?
\*\*Ethan\*\*: Yeah, yeah. Like there's a zillion papers on like people pre-train on do some pre-train contrastive thing in like an Atari environment, and then they show like, oh, now we barely need any RL steps to like fine-tune it or whatever and it can like learn RL really quickly after we just did all this unsupervised contrastive, like pre-training or whatever.
\*\*Michaël\*\*: And yeah, wouldn't your model be kind of shocked by the real world when you just like show him like YouTube videos all the time and then you trust the robot with like a camera?
\*\*Ethan\*\*: Kind of not. I mean, 'cause there there's like everything on YouTube. They got like first person egocentric stuff, they got third person stuff. Like it'll just like realize which, like whether it's in first or third person pretty quickly. I feel like it just infers the context. Like now I saw GPT-3 just for the context, it's in, 'cause it seemed like every context ever.
\*\*Michaël\*\*: Gotcha. So I was mostly thinking about like entropy of language.
\*\*Ethan\*\*: If it's literally like a video generative model, then you can do like just the perfect analogies, GPT-3 or whatever. It gets a little trickier with like contrastive stuff, but yeah, I mean either one. I mean the analogies are pretty similar for either one.
\*\*Michaël\*\*: So one of the things about the scaling laws papers and the role of scaling laws, there was some different exponents for text.
\*\*Ethan\*\*: Yeah.
## Scaling Exponent for Different Modalities
\*\*Michaël\*\*: What do you think is the exponent for video? Would it be like much worse?
\*\*Ethan\*\*: I know the model size. The model size relation was the big point of the scaling laws. For autoregressive generative models, the paper says that the rate at which the model size grows, given your compute budget grows, is the same for every modality. So that was kind of like, that's like a big unexplained thing. Like that was the biggest part just of that paper and no one's been able to explain why that is yet.
\*\*Michaël\*\*: So there might be some universal law where scaling goes for all modality and nobody knows why.
\*\*Ethan\*\*: Just stuff. The rate at which your model size grows given your compute budget is increasing is the same for every modality, which is kind of weird and no one, like I haven't really heard a good explanation why.
\*\*Michaël\*\*: Who do you think will win the video prediction race?
## AGI Race: the Best Funding Model for Supercomputers
\*\*Ethan\*\*: The person who wins AGI is whoever has the best funding model for supercomputers. Whoever has the best funding model for supercomputers wins. Like, I mean yet to assume all entities are like, they have like the nerve, like we're gonna do the biggest training run ever, but then given that's your pre-filter, then it's just whoever has the best funding models for supercomputers.
\*\*Michaël\*\*: So who is able to spend the most money? So would it be USA, China, Russia?
\*\*Ethan\*\*: Yeah, yeah, it might be something. I mean, my guess is like China's already, like they already have this joint fusion of industry government and academia via the Beijing Academy of AI in China. So my guess is like at some point, like Beijing Academy of AI and be like, look, we just trained like a 10 to the 15 parameter model on all of YouTube and spent like $40 billion doing it. And then at that point, Jared Kaplan's gonna be in the White House press conference room, be like, look, see these straight lines on log log pots, we gotta do this in the USA now.
\*\*Michaël\*\*: Right, right. But how do you even spend that much money?
\*\*Ethan\*\*: By making people think if they don't, they'll no longer be the superpower of the world or whatever. Like China will take over the world or whatever. Like it's only like a fear. It's only a fear thing.
\*\*Michaël\*\*: From looking at the PaLM paper from Google, they seem pretty clever on how they use their compute.
\*\*Ethan\*\*: You mean the thing where they have like the two supercomputers that they split it across or whatever?
\*\*Michaël\*\*: Right. I think TPU pods or something, they call it.
\*\*Ethan\*\*: Yeah, yeah.
\*\*Michaël\*\*: So it didn't seem like they spent more money than OpenAI. So they tried to be more careful somehow. So my model of like people spending a lot of money is.
\*\*Ethan\*\*: Like most entities won't be willing to like do the largest training when they can, given their funding.
\*\*Michaël\*\*: So maybe China, but I see Google as being more helpful because of they do it on paper, but maybe I'm wrong.
\*\*Ethan\*\*: Jared Kaplan says like most like Anthropic and OpenAI are kind of unique in that they're like, okay. We're gonna like throw all our funding into this one big training run. But like Google and like 'cause Google and Amazon, they have like he said like at least, 10X or like 100X times the compute that OpenAI and Anthropic have, but they never like use all the compute for single training runs. They just have all these different teams that use to compute for these different things.
\*\*Michaël\*\*: Yeah, so they have like a different hypothesis. OpenAI is like scale is all that matters, somehow that they're secrets itself and-
\*\*Ethan\*\*: Yeah, it's something like that.
\*\*Michaël\*\*: You just let scale things and we are going to get better results, and Google is maybe there's more bureaucracy and it's maybe harder to get a massive budget.
## Private Research at Google and OpenAI
\*\*Ethan\*\*: Yeah, it's weird though, 'cause Jeff Dean's latest blog post, it summarizes all the Google's research progress mentions like scaling and scaling while it's a zillion times. So that almost implies that like they're on the scales. All you need bandwagon too. So I don't know.
\*\*Michaël\*\*: They probably know, but then the question is how like private things are and maybe there's stuff we don't really know.
\*\*Ethan\*\*: I know a bunch of Google said like, yeah, we have language models that are way bigger than GPT-3, but we just don't put 'em in papers.
\*\*Michaël\*\*: So you've talked to them like privately or is it just, they said online?
\*\*Ethan\*\*: I just I've heard things from people and that's feasible. I'm not just disclosing what I got that information from, but that's just what I've heard from people.
\*\*Michaël\*\*: So as we're on like gossip, I think like something that was around on the internet, like right when GPT-3 was launched was that Google was like reproduced it in a few months afterwards, but they didn't really talk about it publicly. I'm not sure about what to do with this information.
\*\*Ethan\*\*: I know like the DeepMind language models papers that they were a year old when they finally put 'em out on archive or whatever, like Gopher and Chinchilla. They had the language model finished training a year before the paper came out.
\*\*Michaël\*\*: So we should just like assume all those big companies are just like throwing papers when they're like not relevant anymore when they have like the other paper already?
\*\*Ethan\*\*: Maybe, but yeah. I don't know why it was delayed that much. Yeah, I don't know what the story is. Why it was delayed that long.
\*\*Michaël\*\*: People want to like keep their advantage, right?
\*\*Ethan\*\*: I guess, but I mean like I feel like GPT-3, they threw the paper on arXiv pretty soon after they finished training GPT-3.
\*\*Michaël\*\*: How do you know?
\*\*Ethan\*\*: Yeah, I don't, but I mean, yeah, I don't. But ice, it didn't. Yeah, maybe there was a big delay. I don't know.
\*\*Michaël\*\*: So I think you could just like retrace all Sam Altman tweet and then like you read the next paper like six months after and you're like, oh yeah, he tweeted about that. Like sometimes the tweets like, oh, AI is going to be wild, or oh, neural networks are really capable of understanding. I think you tweeted that like six months ago, like when they discovered GPT-4.
\*\*Ethan\*\*: OpenAI is like when Ilya tweeted the consciousness tweet, they're like, goddamn, GPT-4 must be crazy.
\*\*Michaël\*\*: Yeah, neural networks are in some ways slightly conscious.
\*\*Ethan\*\*: Yeah, yeah, that was the funniest quote.
\*\*Michaël\*\*: Yeah, I think people at OpenAI know things we don't know yet. They're all like super hyped. And I think you mentioned as well that at least privately that Microsoft has some deal with OpenAI and so they need to some amount of money before 2024, like.
\*\*Ethan\*\*: Oh yeah, yeah, yeah, yeah. I mean, right, right. When the Microsoft deal happened, like Greg Brockman said, "Our plan is to train "like a 100 trillion parameter model by 2024."
\*\*Michaël\*\*: Okay, so that's in two years?
\*\*Ethan\*\*: I mean, that was in 2019, but maybe they've changed their mind after like the Chinchilla scaling lot stuff, I don't know.
## Why Ethan did not update that much from PaLM
\*\*Michaël\*\*: Right. And so you were not like impressed by PaLM being able to predict to like do logic on airplane things and explain jokes?
\*\*Ethan\*\*: In my mind, like the video scaling was like a lot worse than text basically. That's the main reason why I like AGI will probably take longer in the five years or whatever in my mind.
\*\*Michaël\*\*: Okay, so we need, so if we just have text, it's not enough to have AGI. So if we're a like a perfect Oracle that can like talk like us, but it's not able to do robotic things, then we don't have AGI.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: Well, I guess my main like is mostly like coding. So if we get like coding, like Codex or comparative, that gets really good, then everything accelerates and engineers become very productive, and then like.
\*\*Ethan\*\*: I guess if you could say like, engineers get really productive at making improvements in hardware, then like, maybe that would, like, I get how that would be like, okay. Then it's really fast. Like in my mind, at least at the current, I don't see the hardware getting fast enough to be far enough on the YouTube scaling lot in less than five years from now.
\*\*Michaël\*\*: Thinking about hardware, we're just like humans, Googling things and using.
\*\*Ethan\*\*: Yeah, yeah. I get what you're saying. Like you get the Codex thing and then we use Codex or whatever to design hardware faster.
\*\*Michaël\*\*: You mentioned you have like DALL-E, but like for designing chips.
\*\*Ethan\*\*: I mean, Nvidia already uses AI for designing their chips.
\*\*Michaël\*\*: That doesn't make you think of timelines of 10 years or closer.
\*\*Ethan\*\*: It may be 10 years, but not five years. The thing I'm trying to figure out is like, try to get like a student researcher gig at like someplace so that I can just get access to the big compute during the PhD.
\*\*Michaël\*\*: Oh, so that's your plan. Just get out of compute.
\*\*Ethan\*\*: Yeah, I mean, as long as I have big compute, it doesn't matter where I'm a PhD. I mean, it kind of matters if you're like trying to start an AGI startup or whatever, but safe, safe, safe AGI startup.
\*\*Michaël\*\*: We're kind of on record, but I'm not sure if I'm going to cut this part. So you can say unsafe, it's fine.
\*\*Ethan\*\*: Yeah, no, no, no. I mean, I don't even phrase. I just phrase it as beneficial AGI.
\*\*Michaël\*\*: You were spotted saying you wanted unsafe AGI the fastest possible.
## Thinking about the Fastest Path
\*\*Ethan\*\*: No, no, no. The way I phrase it is I think I explained this last time, you have to be thinking in terms of the fastest path, because there is like extremely huge economic and military incentives that are selecting for the fastest path, whether you want it to be that way or not. So like, you gotta be thinking in terms of, what is the fastest path and then how do you like minimize the alignment tax on that fastest path? 'Cause like the fastest path is the way it's probably gonna happen no matter what, like, so it's about minimizing the alignment techs on that fastest path.
\*\*Michaël\*\*: Or you can just throw nukes everywhere and try to make things slower?
\*\*Ethan\*\*: Yeah, I guess, but I mean the people who are on the fastest path will be like more powerful, such that like, I don't know, such that they'll deter all the nukes.
\*\*Michaël\*\*: So you want to be, okay, so you want to just like join the winners. Like if you join the skiing team at Google.
\*\*Ethan\*\*: Thing I've been trying to brainstorm about is who's gonna have the fastest, who's gonna have the best funding models for supercomputers, 'cause that's the place to go and you gotta try to minimize the alignment tax at that place.
\*\*Michaël\*\*: Makes sense. So everyone should infiltrate Google.
\*\*Ethan\*\*: Yeah, so whatever place ends up with the best funding model of supercomputers try to get as many weird alignment people to like infiltrate that place as possible.
\*\*Michaël\*\*: So I'm kind of happy having a bunch of EA people at OpenAI now, because they're kind of minimizing the text there, but...
\*\*Ethan\*\*: Yeah, I kind of viewed it as all the EA people left, like 'cause Anthropic was like the most extremist EA people at OpenAI. So I almost viewed when Anthropic happened a bunch of EA people. I view as that like EA almost leaving OpenAI when Anthropic happened.
\*\*Michaël\*\*: Some other people came, right?
\*\*Ethan\*\*: Like who?
\*\*Michaël\*\*: I don't know. Richard Ngo.
\*\*Ethan\*\*: Oh, okay, okay. Yeah, yeah.
\*\*Michaël\*\*: It's like a team on like predicting the future.
\*\*Ethan\*\*: Yeah, yeah. I wanna know what the Futures Team does 'cause that's like the most out there team. I'm really curious to what they actually do.
\*\*Michaël\*\*: Maybe they use their GPT-5 model and predict things.
\*\*Ethan\*\*: Right, 'cause I mean like DALL-E, like you know about the Foresight Team at OpenAI, right?
\*\*Michaël\*\*: They were trying to predict things as well, like forecasting.
\*\*Ethan\*\*: Yeah, that's where all this scaling lot stuff came from was on the Foresight Team at OpenAI. They're gone now because they became philanthropic. But like a team called like the Futures Team that almost has a similar vibe to like a team called the Foresight Team. So I'm kind of curious.
\*\*Michaël\*\*: But then there's just like doing more governance things and optimal governance and maybe economics.
\*\*Ethan\*\*: That's what it's about, governance and economics.
\*\*Michaël\*\*: The guy like Richard Ngo is doing governance there.
\*\*Ethan\*\*: Okay.
\*\*Michaël\*\*: Predicting how the future works, I think is in his Twitter bio.
\*\*Ethan\*\*: Yeah, yeah, but I mean, that's somewhat tangential to governance, like that almost sounds like something Mike Rick Kurtz, I would say, I'm predicting how the future.
\*\*Michaël\*\*: My model is like Sam Altman, as like they have GPT-4. Like they published GPT-3 in 2020. So it's been like two years.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: And they've been talking about like in their Q & A about like treacherous results or something like one year ago. So now they must have access to something very crazy and they're just like trying to think like how do we operate with like DALL-E 2 and their GPT-4 they have in private and how they do something without like for him in the world? I don't know. Maybe they're just like trying to predict like how to make the most money with their API or.
\*\*Ethan\*\*: You're saying like if they release it, it's like an infohazard? 'Cause in my mind, GPT-4 still isn't like capable enough to F up the world, but you could argue, it's like capable enough to like be an infohazard or something.
\*\*Michaël\*\*: Imagine you have access to something that has the same kind of gap between GPT-2 and GPT-3, but like for GPT-4 on like understanding and being general. And you don't want everyone else to copy your work. So you're just going to keep it for yourself for sometime.
## A Zillion Language Model Startups from ex-Googlers
\*\*Ethan\*\*: Yeah, but I feel like that strategy is already kind of screwed. Like you know about how like a zillion large language model, like a zillion Googlers have left Google to start large language model startups. Like there's literally three large language model startups by ex-Googlers now. OpenAI is like a small actor in this now because there's like multiple large language model startups founded by ex-Googlers that all like that all were founded in the last like six months. Like there's a zillion VCs throwing money at large language model startups right now. The funniest thing, like Leo Gao, he's like, we need more large language model startups because the more startups we have, then it splits up all the funding so no organization can have all the funding to get the really big supercomputer. So we just need thousands of AI during its final startups. So no one can hoard all the funding to get the really big language model.
\*\*Ethan\*\*: That's the, yeah, with the AI model, you just like do open source. So like there's like more startups. And so all the funding gets splitted, I guess.
\*\*Ethan\*\*: Yeah, you could view OpenAI was like extra big brain. We need to do. We need to like release the idea of our joiners models onto the world such that no organization could have enough compute to be such that all the compute gets more split up, 'cause a zillion, our joiners model startups will show up all at once.
\*\*Michaël\*\*: That's yeah, that's the best idea ever. So do you have like other gossips besides like Google's? Did you post something on Twitter about people leaving Google?
\*\*Ethan\*\*: Yeah, I posted a bunch of stuff. Well, I mean, and also like you saw the... I mean it's three startups, adept.ai, character.ai, and inflection.ai. They're all large language model startups founded by ex-Googlers that got a zillion dollars in VC funding to scale large language models.
\*\*Michaël\*\*: What's a zillion dollars like?
\*\*Ethan\*\*: Like greater than 60 million. Each of them got greater than 60 million.
\*\*Michaël\*\*: So did they know about something we don't know? And they're just like get money to replicate what Google does?
\*\*Ethan\*\*: Well, I mean, most of 'em, they were famous people like founder of DeepMind scaling team. Another one is the inventor of The Transformer. Another one was founded by a different person on The Transformer paper. Like, so I mean, in some ways, they have more clout than like OpenAI had or whatever.
\*\*Michaël\*\*: But they don't have like the engineering and old infrastructure.
\*\*Ethan\*\*: No, they kind of do. Like, a lot of 'em, they were like the head of engineering for scaling teams at like DeepMind or Google.
\*\*Michaël\*\*: So there's like another game that is in private at Google and they've been scaling huge models for two years. and they're just like,
\*\*Ethan\*\*: Yeah, something like that.
\*\*Michaël\*\*: Starting startups with their knowledge and they're just scaling and we;re just like, like peasants like us talk about papers that are released one year after and then when you turn them out.
\*\*Ethan\*\*: Yeah, yeah. I guess that's, I mean, I don't know how long these delays are. I mean, in my mind, like, yeah. I guess you could view it as a delay thing, 'cause like in my mind it's just like, yeah, you're right, you're right. It's probably delayed by a year, yeah.
\*\*Michaël\*\*: So yeah, that makes me less confident about-
\*\*Ethan\*\*: Oh shit. You look like a clone of Lex Fridman from the side.
\*\*Michaël\*\*: What?
\*\*Ethan\*\*: When your face is like sideways, you look like a clone of Lex Fridman.
\*\*Michaël\*\*: Yeah.
\*\*Ethan\*\*: Like, 'cause your haircut's like identical to his when
\*\*Michaël\*\*: I'll take that as a compliment... I started working out. So yeah, Ethan Caballero, what's the meaning of life?
\*\*Ethan\*\*: Probably just maximize the flourishing of all sentient beings, like a very generic answer.
\*\*Michaël\*\*: Right. So I've done my Lex Fridman question. Now I'm just basically him.
\*\*Ethan\*\*: Yeah.
## Ethan's Scaling Journey
\*\*Michaël\*\*: Maybe we can just go back to like stuff we know more about like your work and because you've been doing some work on scaling.
\*\*Ethan\*\*: Yeah.
\*\*Michaël\*\*: So like more general, like why are you kind of interested in scaling and like how did you started on doing research on that?
\*\*Ethan\*\*: I mean, I knew about the body paper when it came out. Like I remember I was at this like Ian Goodfellow talking in 2017 and he was hyped about the body paper when it came out.
\*\*Michaël\*\*: Which paper?
\*\*Ethan\*\*: The deep burning scales, predictably, empirically, yeah, it came out 2017 and then I kind, I just, that was just on the back burner and I kind of just stopped paying attention to it after a while. And then like Aran Komatsuzaki was like, no, dude, this is the thing. Like this is gonna take over everything, and this was like in 2019 when he was saying that. And then, yeah. So then when the scaling laws papers got like re-popularized through like the OpenAI stuff, then I kind of like caught onto it a little bit early via like talking with Aran.
\*\*Michaël\*\*: I think in 2019 was also when GPT-2 was introduced.
\*\*Ethan\*\*: But that was kind of before, like that was before like the scaling law stuff kind of got popularized.
\*\*Michaël\*\*: Right, scaling laws paper is 2020.
\*\*Ethan\*\*: Yeah, the very end of 2020. All right. No, no, no. Oh no, no. The scaling law paper was the very end of 2000. It was the very beginning of 2020.
\*\*Michaël\*\*: And you were already on this killing train since 2017.
\*\*Ethan\*\*: I was aware of it, but I didn't, like, I was kind of just neutral about it until like 2000, like probably the middle of 2019.
## Making progress on an Academic budget, Scaling Laws Research
\*\*Michaël\*\*: And yeah, now you are kind of interested in scaling because it's useful to predict kind of what the whole field of AI is going.
\*\*Ethan\*\*: And also it just, it's I think people underestimate how easy it is to be contrived if you're not paying attention to scaling trends and trying to like extrapolate the compute budgets and data budgets that are like, well, the compute data and data budgets like five years from now.
\*\*Michaël\*\*: Yeah, if you're a huge company that does a lot of budget, but maybe if you're just a random company, you don't really care about scaling law that much.
\*\*Ethan\*\*: Yeah, yeah. Or if you're like in academia currently or whatever, like a zillion papers that like fancy conferences are like, here's our inducted bias that helps on like our punny academic budget. And we didn't test any of the scaling asso tos to see if it's like useful when you're training a trillion parameter model on all of YouTube or whatever.
\*\*Michaël\*\*: You're on an academic budget as far as I know. So how do you manage to do experiments in scaling?
\*\*Ethan\*\*: There's like the scaling on narrative. That's like, oh, you don't need the big budget to do because you can just predict what the outcomes will be for the large scale experiments, but that's at least current. At least when that narrative got popularized, it was mostly for upstream like scaling. But the thing everyone cares about is downstream scaling.
## AI Alignment as an Inverse Scaling Problem
\*\*Michaël\*\*: Yeah, so if we go back for a minute on like your work in alignment, how do you think your work on scaling or generalization like kind of fits with the alignment problem?
\*\*Ethan\*\*: Basically, all alignment, I guess this triggers the hell outta some people. But all alignment is inverse scaling problems. It's all downstream inverse scaling problems. So it's just in my mind, all of alignment is stuff that doesn't improve monotonically as compute data and parameters increase.
\*\*Michaël\*\*: There's a difference between not improving and inverse scaling. Inverse scaling goes badly, right?
\*\*Ethan\*\*: Yeah, yeah, yeah. But I just said not improved monotonically, 'cause like sometimes there's certain things where like it improves for a while, but then at a certain point, it gets worse. So like interpretability and controllability are the two like kind of thought experiment things where you could imagine they get more interpretable and more controllable for a long time until they get super intelligent. At that point. they're like less interpretable and less controllable.
\*\*Michaël\*\*: Do we have benchmarks for controllability or?
\*\*Ethan\*\*: Like just like just benchmarks that rely on prompting is a form of like a benchmark of controllability.
\*\*Michaël\*\*: And kinda to summarize your take, if we were able to just scale everything well and not have this inverse scaling problem, we would get like interpretability and controllability and everything else by just like good scaling of our models. And so we'd get like alignment kind of by defaults for free?
\*\*Ethan\*\*: Yeah. I mean, I guess, I mean like there's stuff besides interpretability, controllability, like those are just the examples. Like what you said, you asked like what's an example where like the reason I said, I phrased it as alignment is when I said inverse scaling, I said things that don't improve monotonically, 'cause I just wanted to say like, yes, there's obvious examples where it gets worse the entire time, but there's some you could imagine where it gets good for a long time, and then a certain point, then it starts getting drastically worse. I said, all of alignment can be viewed as a downstream scaling problem. The hard part is like Dan Hendricks and like Jacob Steinhardt say like, then the hard problem though is like measurement and like finding out what are the downstream evaluations 'cause say you got like some like fancy like deceptive AI that wants to like a treacherous turn or whatever. Like how do you even find the downstream evaluations to know whether it's gonna like try to deceive you or whatever? 'Cause like when I say, it's all a downstream scaling problem, that assumes like you have the downstream test, the downstream like thing that you're evaluating it on. But like if it's like some weird deceptive thing, that's like, it's hard to even find what's the downstream thing to evaluate it on to like know whether it's trying deceive or whatever.
\*\*Michaël\*\*: So there's no like test lost on this deception. We don't know for sure how to measure and have a clear benchmark from this.
\*\*Ethan\*\*: Yeah, it's tricky. I mean, and some people say like, well, that's why you need better interpretability. You need to like find the deception circuits or whatever.
\*\*Michaël\*\*: Knowing that we don't know yet, like all the different benchmarks and metrics for misalignment, don't you think that your work on scaling can be bad because you're actually like speeding up timelines?
## Predicting scaling laws, Useful AI Alignment research
\*\*Ethan\*\*: Yeah, I get the like infohazard point of view, but like in my mind, like whether you wanna do all capabilities or alignment stuff that stands the test of time, you need really good downstream scaling prediction. Like, say you came up with some like alignment method or whatever that mitigates inverse scaling, like you need the actual functional form to know whether that thing will like keep mitigating inverse scaling when you get to like a trillion parameters or whatever. You get what I mean?
\*\*Michaël\*\*: I get you but like on a differential progress mindset, like Jared Kaplan or someone else will come up with those functional forms without your work.
\*\*Ethan\*\*: I don't know, I don't know. I mean, that's the thing though, like Anthropics (ERRATUM: it's actually a gift, and the the merch was not sent at the time of the podcast) got that paper like predictability and surprise and generative models and they're just like, it's unpredictable. We can't predict it. And I'm like, ah, you guys, nah, I don't believe.
\*\*Michaël\*\*: Right, so you're kind of publishing papers when you're in advance because those companies are not publishing their results?
\*\*Ethan\*\*: I don't know. I don't. Yeah, I don't even, I don't know if Anthropic does the delay type stuff that OpenAI supposedly does, but maybe they do, I don't know.
\*\*Michaël\*\*: And you were just like drawing infohazard by publishing those laws?
\*\*Ethan\*\*: I mean, in my mind, whether or not, I get the argument, oh it, if you wanna do capabilities work that stands a test of time or alignment work that stands a test of time, in my mind, everything that people are doing in alignment will be very contrived without the functional form too though. So it's like alignment can't make progress without it either. So it's like, you get what I mean?
\*\*Michaël\*\*: Another kind of view on that is that if people do impressive deploying or ML board and they're also interested in alignment, it's still a good thing. Like let's take even through AI. Even if they open source their model because they did something impressive and they talk openly about alignment under Discord and gets like a lot of people that are very smart, interested in alignment. So if you publish something and you become like a famous researcher, something in two years and you talk about alignment in two years, then it's fine.
\*\*Ethan\*\*: I sort of tweet stuff about alignment, I think. Yeah, I mean, I retweet stuff about alignment at least.
## Ajeya Cotra's report, Compute Trends
\*\*Michaël\*\*: So if we go back to thinking about predicting future timelines and kind of scaling, I've read somewhere that you think that in the next few years, we might get billion or trillion times of more compute, like 12 orders of magnitude more.
\*\*Ethan\*\*: Yeah, I mean, so the Ajeya Cotra report said like, it's gonna max out probably at 10 to the 12 times as much compute as like the amount of compute in 2020, probably like 2070 or something like that. The one issue I have with the JS model is that like, she does, what does she do? It's like it's flops per dollar times willingness to spend its total flops that are allocated to pre-training runs. Problem is like, for the big like foundation models, like 10 of the 15 perimeter miles of the future or whatever, you're probably gonna need high pie like memory bandwidth between all like memory bandwidth and compute bandwidth between all the compute, which means it has to be on a supercomputer. So it's not just the flaps. It basically what really matters, at least if you're assuming it's like big, like 10 of the 15 parameter foundation models or whatever, like the speed of the fastest supercomputer is what matters, not just the total flaps that you can allocate, because if like all the flaps don't have good communication between them, then they aren't really useful for training like 10 of the 15 parameter model or whatever. Once you get to 10 of the 15 parameters, like there isn't much reason to go beyond that. And at that point, then you just have multiple models with 10 of the 15 parameters and they're like doing some crazy open ended, like Ken Stanley stuff and a multi-agent simulator after you do that. Like if they mentioned became like you do the 10 of the 15 parameter model feature and all YouTube, and then after that, you'll have like hundreds of 10 of the 15 parameter models that all just duke it out in like a Ken Stanley open-ended simulator to like, get the rest of the capabilities or whatever, like once they're in the Ken Stanley open-ended stimulator, then you don't need high compute bandwidth between all those individual, like 10 of the 15 parameter models, like duking it out in the simulator. They can just, each one, they only needs like 10. It only needs high compute bandwidth between like its own parameters. Like, it doesn't need high compute bandwidth between itself and the other like agents or whatever. And so in there, the flops where you could use all the flops for like the multi-agent simulation, but you only need high compute bandwidth within each agent.
\*\*Michaël\*\*: So you need a lot of bandwidths to train models because of the prioritization thing, but you only need flops to simulate on different things at the same time?
\*\*Ethan\*\*: Yeah, you only need high compute bandwidth within an individual brain, but like if you have multiple brains, then you don't need high compute bandwidth between the brains.
\*\*Michaël\*\*: And what was that kind of simulator you were talking about, the Kenley?
\*\*Ethan\*\*: Like Ken Stanley, the open-ended guy.
\*\*Michaël\*\*: I haven't seen that.
\*\*Ethan\*\*: Ken is like the myth day objective open-endedness, like Can Stanley's, Jeff Cones, like all that stuff. It's like, I don't know. Just Google, like Can Stanley open ended at some point. You've probably heard of it, but it's not like registering what I'm referencing.
# Optimism, conclusion on alignment
\*\*Michaël\*\*: Okay, so maybe one kind of last open-ended question. On a scale from Paul Christiano, Eliezer Yudkowsky, Sam Altman, how optimistic are you?
\*\*Ethan\*\*: Definitely not like Eliezer, or a doomer type person. I guess probably Paul Christiano is most similar. I mean, I feel like Paul Christiano is in the middle of the people you just said.
\*\*Michaël\*\*: Right. Yeah. So you are less optimistic than Sam Altman?
\*\*Ethan\*\*: Well, yeah, I mean, basically, I think deceptive AI is probably gonna be really hard.
\*\*Michaël\*\*: So do you have like one less monologue or sentence to say about why scaling is a solution for all alignment problems?
\*\*Ethan\*\*: Like just all alignment can be viewed as an inverse scaling problem. Like it all revolves on just mitigating inverse scaling, but also you have to make sure you have like the right downstream things that you're evaluating, like the inverse scaling and like part of what makes it hard is like you might need to do like fancy thought experiments on alignment, like counterintuitive thought experiments on alignment forum to find what are the downstream... to find what are the the downstream tests that you should be evaluating. Like whether or not there's inverse scaling behavior on those.
\*\*Michaël\*\*: Awesome, so we get the good version, as last sentence, and that's our conclusion. Thanks Ethan for being on the show. |
110a2882-d352-4da0-bd95-a4c72d827455 | trentmkelly/LessWrong-43k | LessWrong | What should the norms around AI voices be?
In previous discussions of AI risks, the ability for an AI to be very persuasive is often seen as one possible risk. Humans find some voices more persuasive than other voices.
If we can trust Scarlett Johansson's description of her interactions with OpenAI, OpenAI wanted to use her voice, to increase how much users trust OpenAI's model. Trusting a model more, likely means that the model is more persuasive.
AI companies could also multivar-test slight variations of their voices to maximize user engagement which would also likely push the voices in the direction of being more persuasive.
Zvi recently argued that it's fine for OpenAI to provide their users with maximally compelling voices if the user want those voices, without getting pushback for it.
Are we as a community not worried anymore about the persuasive power of AI's? As being someone who is not working directly in AI safety myself, why does this aspect seem underexplored by AI safety researchers? |
482899c6-f3aa-448f-80e2-be9b2d40bee1 | trentmkelly/LessWrong-43k | LessWrong | What causes burnout?
When I try to figure out how to balance personal happiness with saving the world, I think a lot about burnout. If I make a major change - e.g. changing careers - what are the chances I'll burn out and become a lot less useful as a result?
I've never burned out, so I don't know where that edge is for me (plus I suspect the edge moves around depending on circumstances). I'm obviously biased on the topic: there's the temptation to tell myself "This will prevent burnout and make me more effective in the long run" every time I want to do something.
Some things people here have described as causing burnout:
Going through the motions of a religion you don't believe in
Training yourself to feel guilty whenever you relax
Pursuing altruism too exclusively (That post suggests Bostrom's parliamentary model as an antidote).
Do you have more examples? Have you burned out? Are there things that you think have kept you from burning out? |
1d21fb36-c3ce-45d2-a36f-4deba6b908db | trentmkelly/LessWrong-43k | LessWrong | Memory Decoding Journal Club: Neocortical synaptic engrams for remote contextual memories
Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Neocortical synaptic engrams for remote contextual memories"
Authors: Ji-Hye Lee, Woong Bin Kim, Eui Ho Park & Jun-Hyeong Cho
Institutions: University of California, Riverside, Department of Molecular Cell and Systems Biology.
Presented by: Dr. Randal Koene
When? June 17th, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience |
3c729907-00c2-4c1b-ad28-6a536bd55432 | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup 10-19-2011
Discussion article for the meetup : West LA Meetup 10-19-2011
WHEN: 19 October 2011 07:00:00PM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7pm - 9pm Wednesday, October 19th.
Where: The Westside Tavern in the upstairs Wine Bar, located inside the Westside Pavillion on the second floor, right by the movie theaters.
Parking is free for 3 hours.
Recommended Reading:
* Choose any bias from Wikipedia's list of cognitive biases. We will talk about that bias at the meeting - how to recognize it, why it happens, and how to counter it.
Whether you're a regular reader or totally new, here for the theoretical musings or the practical things, come by and say hello! The conversation is largely unstructured, and the people are awesome.
I will bring a whiteboard with Bayes' Theorem written on it.
See also: West LA Biweekly Meetups
Discussion article for the meetup : West LA Meetup 10-19-2011 |
996e6dfc-08a3-47ab-a20c-4faba495760f | StampyAI/alignment-research-dataset/arbital | Arbital | Order relation
An **order relation** (also called an **order** or **ordering**) is a [binary relation](https://arbital.com/p/3nt) $\le$ on a [set](https://arbital.com/p/3jz) $S$ that can be used to order the elements in that set.
An order relation satisfies the following properties:
1. For all $a \in S$, $a \le a$. (the [reflexive](https://arbital.com/p/reflexive_relation) property)
2. For all $a, b \in S$, if $a \le b$ and $b \le a$, then $a = b$. (the [antisymmetric](https://arbital.com/p/antisymmetric_relation) property)
3. For all $a, b, c \in S$, if $a \le b$ and $b \le c$, then $a \le c$. (the [transitive](https://arbital.com/p/transitive_relation) property)
A set that has an order relation is called a [partially ordered set](https://arbital.com/p/3rb) (or "poset"), and $\le$ is its *partial order*.
## Totality of an order
There is also a fourth property that distinguishes between two different types of orders:
4. For all $a, b \in S$, either $a \le b$ or $b \le a$ or both. (the [total](https://arbital.com/p/total_relation) property)
The total property implies the reflexive property, by setting $a = b$.
If the order relation satisfies the total property, then $S$ is called a [https://arbital.com/p/-540](https://arbital.com/p/-540), and $\le$ is its *total order*.
## Well-ordering
A fifth property that extends the idea of a "total order" is that of the [well-ordering](https://arbital.com/p/55r):
5. For every subset $X$ of $S$, $X$ has a least element: an element $x$ such that for all $y \in X$, we have $x \leq y$.
Well-orderings are very useful: they are the orderings we can perform [induction](https://arbital.com/p/mathematical_induction) over. (For more on this viewpoint, see the page on [https://arbital.com/p/structural_induction](https://arbital.com/p/structural_induction).)
# Derived relations
The order relation immediately affords several other relations.
## Reverse order
We can define a *reverse order* $\ge$ as follows: $a \ge b$ when $b \le a$.
## Strict order
From any poset $(S, \le)$, we can derive a *strict order* $<$, which disallows equality. For $a, b \in S$, $a < b$ when $a \le b$ and $a \neq b$. This strict order is still antisymmetric and transitive, but it is no longer reflexive.
We can then also define a reverse strict order $>$ as follows: $a > b$ when $b \le a$ and $a \neq b$.
## Incomparability
In a poset that is not totally ordered, there exist elements $a$ and $b$ where the order relation is undefined. If neither $a \leq b$ nor $b \leq a$ then we say that $a$ and $b$ are *incomparable*, and write $a \parallel b$.
## Cover relation
From any poset $(S, \leq)$, we can derive an underlying *cover relation* $\prec$, defined such that for $a, b \in S$, $a \prec b$ whenever the following two conditions are satisfied:
1. $a < b$.
2. For all $s \in S$, $a \leq s < b$ implies that $a = s$.
Simply put, $a \prec b$ means that $b$ is the smallest element of $S$ which is strictly greater than $a$.
$a \prec b$ is pronounced "$a$ is covered by $b$", or "$b$ covers $a$", and $b$ is said to be a *cover* of $a$. |
925d91fd-6bf9-4790-9ffc-4070607d5507 | trentmkelly/LessWrong-43k | LessWrong | Announcing the 2014 program equilibrium iterated PD tournament
Last year, AlexMennen ran a prisoner's dilemma tournament with bots that could see each other's source code, which was dubbed a "program equilibrium" tournament. This year, I will be running a similar tournament. Here's how it's going to work: Anyone can submit a bot that plays the iterated PD against other bots. Bots can not only remember previous rounds, as in the standard iterated PD, but also run perfect simulations of their opponent before making a move. Please see the github repo for the full list of rules and a brief tutorial.
There are a few key differences this year:
1) The tournament is in Haskell rather than Scheme.
2) The time limit for each round is shorter (5 seconds rather than 10) but the penalty for not outputting Cooperate or Defect within the time limit has been reduced.
3) Bots cannot directly see each other's source code, but they can run their opponent, specifying the initial conditions of the simulation, and then observe the output.
All submissions should be emailed to pdtournament@gmail.com or PM'd to me here on LessWrong by September 15th, 2014. LW users with 50+ karma who want to participate but do not know Haskell can PM me with an algorithm/psuedocode, and I will translate it into a bot for them. (If there is a flood of such requests, I would appreciate some volunteers to help me out.) |
e93e5cfa-8313-454f-98a1-3f34ec791513 | trentmkelly/LessWrong-43k | LessWrong | Reflections on a year of college
Crossposted from my blog. Considering collaborating on a version of this that's tailored specifically towards undergrads in the LW or EA communities.
----------------------------------------
I just completed my first year of college (US university). I’ve written up some personal reflections and curated some useful thoughts from others.
Caveat: people are different. My perspective is one of millions; treat it accordingly.
Understand the game
For some reason I can’t quite put my finger on, they don’t tell you the cynical (realistic? red-pill? based?) perspective of college during admissions. That’s for yourself to figure out, which seems…a little unfair.
The perspective I’m talking about involves some mix of:
* College is mostly signaling
* Social sciences bad ha ha
* Colleges just want money
* Many classes are useless (the caveat being technical subjects or skills)
* The main value adds of college comes from outside of the classroom
* Did I mention there’s a lot of dunking on the social sciences?
These ideas may feel a little uncomfortable. Some of them are overemphasized to make a stronger point. But in my experience (and through conversations with peers I highly regard), they seem mostly right.
Even if you’re skeptical, I think it’s important to at least be aware of these sentiments. It might save you a lot of time and effort.
Related:
Economist Bryan Caplan thinks education is mostly pointless showing off. We test the strength of his case.
* “If you want to get the best education in the world for free, you can just move to Princeton and start attending classes unofficially. There’s almost no effort made to stop you. You just won’t get a diploma, which makes it near pointless, because college is more about impressing people than learning useful info.”
College advice for people who are exactly like me
* “Coming in, you will probably think that your college experience has been carefully designed to provide you the best possible education. |
b6dd90d6-52f8-4de1-b11f-c258db8e6b4f | trentmkelly/LessWrong-43k | LessWrong | Why I think strong general AI is coming soon
I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this.
This post attempts to walk through some of the observations and insights that collapsed my estimates.
The core ideas are as follows:
1. We've already captured way too much of intelligence with way too little effort.
2. Everything points towards us capturing way more of intelligence with very little additional effort.
3. Trying to create a self-consistent worldview that handles all available evidence seems to force very weird conclusions.
Some notes up front
* I wrote this post in response to the Future Fund's AI Worldview Prize[1]. Financial incentives work, apparently! I wrote it with a slightly wider audience in mind and supply some background for people who aren't quite as familiar with the standard arguments.
* I make a few predictions in this post. Unless otherwise noted, the predictions and their associated probabilities should be assumed to be conditioned on "the world remains at least remotely normal for the term of the prediction; the gameboard remains unflipped."
* For the purposes of this post, when I use the term AGI, I mean the kind of AI with sufficient capability to make it a genuine threat to humanity's future or survival if it is misused or misaligned. This is slightly more strict than the definition in the Future Fund post, but I expect the difference between the two definitions to be small chronologically.
* For the purposes of this post, when I refer to "intelligence," I mean stuff like complex problem solving that's useful for achieving goals. Consciousness, emotions, and qualia are not required for me to call a system "intelligent" here; I am defining it only in terms of capability.
Is the algorithm of intelligence easy?
A single invocation of GPT-3, or any large transformer, cannot run any algorithm internally that does not run in constant time complexity, because the model itself runs in constant time. It's a very l |
93935089-c93b-4181-991a-e8b83ffb8b9d | trentmkelly/LessWrong-43k | LessWrong | Explore/Exploit for Conversations
Something status: I've started to try on an explore/exploit lens for thinking about conversation, and I like it. Here's some rough ideas. On the editing pass of this, a feel like I haven't properly separated the important ideas explicitly enough, but it's fine.
You are in explore mode if you are introducing ideas/topics to the conversation and aren't sure how much the others will enjoy them. You are in exploit mode if you are talking about stuff that you already know everyone will enjoy.
Being in explore mode requires feeling "comfortable" with your group, and being in exploit mode requires having Common Knowledge (CK) of Common Interests (CI).
Examples:
* Explore
* In your anime club's board meeting you talk about the meta of how board meetings usualy go.
* You got to a LW meetup and talk about a new beat-boxing technique you've been practicing.
* You tell your grandma about this heavy metal band you're getting into.
* Exploit
* In your anime club's board meeting you talk about what new shows you want to watch at club meetings.
* You got to a LW meetup and talk about x-risk.
* You tell your grandma about how school/work/family is going.
I tried to pick examples that weren't secretly saying "One of these is way better than the other!" I also want to be clear that being in explore mode doesn't have to be super edgy or out there. It just needs to be a situation where you don't know what response people will have before you go there.
Claim: For a group (2+) of people to have good conversation, you either need to have Common Knowledge of juicy Common Interests to exploit, or you need to be comfortable spending time in explore mode.
Explore Mode
I think most people experience that it is easier to have explore mode conversation with smaller groups, and with people that they know well.
I claim that it feels comfortable to be in explore mode if every group member has:
A) Reason to believe they won't get trapped in a boring conversation. |
6b8e0e54-a54d-4857-ba81-5d896ad0e4b1 | trentmkelly/LessWrong-43k | LessWrong | $500 bounty for engagement on asymmetric AI risk
Announcing a $500 bounty for work that meaningfully engages with the idea of asymmetric existential AI risk.
Background
Existential risk has been defined by the rationalist/Effective Altruist sphere as existential relative to the human species, under the premise that the continuation of the species has very high value. This provided a strong rationality (or effectiveness) grounding for big investments in AI alignment research when the risks still seemed to most people remote and obscure. However, as an apparent side-effect, "AI risk" and "risk of a misaligned AI destroying humanity" have become nearly conflated.
Over the past couple of years I have attempted to draw attention to highly asymmetric AI risks, where a small number of controllers of "aligned" (from their point of view) AI employ it to kill the rest of the human population. From the point of view of the average person, who would stand to be killed along with their children and approximately everyone they personally know, this ought to count meaningfully as existential risk. Arguably, by a similar logic to the one used to justify early alignment research, even with a low probability of such an outcome its badness justifies investment in its prevention. Furthermore, prevention by way of arresting AI development conveniently provides a two-for-one solution, also addressing the misalignment problem. Conversely, investments in ensuring successful AI "alignment" without evaluating the full destructive potential of aligned AI potentially makes the investor complicit in genocide. These points suggest a strong interest by members of the rationalist/Effective Altruist sphere (at least, based on my understanding of their stated commitments) in asymmetric existential AI risk. But so far my efforts have revealed no evidence of such interest.
This bounty is an attempt to stimulate engagement through small monetary reward(s). More concretely, the goal is to promote broadly changing the status of this risk from "unac |
9a9f8e33-d708-4f46-aa36-b23fcde7c0b2 | trentmkelly/LessWrong-43k | LessWrong | A Concrete Multi-Step Variant of Double Crux I Have Used Successfully
=== Intro
Raymond Arnold has written about the failure of many discussions to result in clear progress recognizable by both sides. In general, the ideas around double crux are useful but the technique can be under-specified (especially for people who have not been to CFAR). Additionally, progress can be difficult to achieve until participants have a good understanding of the other person’s relevant mental models.
=== Step by Step Instructions
1- Both people independently write down their positions and which arguments and pieces of evidence they find the most convincing. Split this into claims/cruxes. Each claim should have a bullet point list of reasoning. If possible, give an estimate of how convinced you are by this claim. These claims should be sufficiently connected with your overall model, such that if you felt the claim was actually false you would update your views a substantial amount. Admittedly lists of claims is not a great model of how people think, but it’s helpful to try to make your views ‘cruxy’.
2 - Both people exchange write ups.
3 - After reading the other person’s write up, the participants agree on a list of ~5 topics to discuss in more detail. You want to choose topics which can be fruitfully discussed in ten to twenty minutes. This can be done via a short email exchange or via discord/irc/etc.
4 - The two people discuss the above list of topics for 1-2 hours. The main goals are to understand the confusing parts of your opponent’s argument and resolve contentious points. However, once it becomes clear a contentious point is hard to resolve, move on for now. You only have 1-2 hours; don’t waste them arguing what ‘true’ means, find fertile ground. This needs to be done in person or via real time chat; email will not work.
5 - Try to find explicit points of agreement. Find a list of non-trivial points that both people can explicitly endorse. For example myself and Jacob ‘putanumonit’ Falkovich double cruxed about which people would benefit |
e057f8c3-63b7-4df5-b40c-12db3d42e4a6 | trentmkelly/LessWrong-43k | LessWrong | Brun's theorem and sieve theory
Brun's theorem is a relatively famous result in analytic number theory that says the sum of the reciprocals of the twin primes converges to a finite value. In other words, we have
∑p,p+2prime1p+1p+2=B
for some finite constant B. This is in contrast to the same sum taken over all primes, which is divergent:
∑pprime1p=∞
In this post, I'll use Brun's theorem as an illustration of sieve theoretic arguments in analytic number theory. I'll try to explain relevant results as I go along to minimize the background necessary to understand the arguments, but some background in real analysis and number theory is needed to understand the post. If you don't have such a background, most of the post will probably be gibberish.
I'm writing this post mostly because I think there's some lack of good explanations of sieve theory in general and the Brun sieve in particular. Hopefully this post will be helpful to a handful of people who are interested in or trying to understand such matters.
Note that in the post I'll not always mention that a sum or a product runs over the prime numbers explicitly. If the sum or product is indexed by the letter p, you should assume that it runs over the primes and not e.g. over the natural numbers. Sometimes p will run only over odd primes, because there is a degenerate case with the prime 2 when we work with twin primes coming from the fact that 0 and 2 are in the same residue class mod 2. This will often be obvious from the surrounding context.
Background
First, let's discuss some background results that will be useful to know throughout the post.
The prime number theorem says that the number of primes less than N, denoted π(N), is well approximated by N/logN. Concretely, it says that
limN→∞π(N)N/logN=1
Roughly, the prime number theorem says that the density of the prime numbers "around N" is roughly 1/logN when N is large. This is a rather difficult theorem to prove and we won't actually need it to prove Brun's theorem. However, the result |
f5964ede-f301-4779-9ffe-49d455e479c2 | trentmkelly/LessWrong-43k | LessWrong | Omicron Post #7
Here are some better worlds we might have lived in, but don’t.
1. In a better world, I could focus on this full time and also maybe even hire a research assistant, and be better able to scour for information.
2. In a better world than that, there would be a department at a newspaper, or some other such place, that scoured the world for data sources like SGTF deletion counts and wastewater concentrations that could help us understand spread, called the sources to attempt to get the best data possible, and complied it all in nice form.
3. In an even better world than that, there would be funding, either public or private, for the actual gathering of additional data.
4. In an even better world than that, we’d have run challenge trials the moment Omicron was detected, and we’d already have all our answers.
Alas, instead, we live in this world, where none of those things are true. I am grateful to all my news sources and those who help gather the information together, and everyone working tirelessly in the lab, but the whole thing is still a slapstick hodgepodge, and we know so much less than we could know so much slower than we could know it.
Still, we do the best we can.
This time out, we have a few new places whose data has been located, to supplement the United Kingdom and Denmark. We also have several important new lab results.
There is some uncertainty about how far along things are. There is uncertainty regarding how much milder Omicron is than Delta, from it being essentially the same (but much better on a per-case basis due to who it infects) to there being a substantial or even large difference.
But the main uncertainty that I’m now wondering about, that feels central, is how we will react.
What will happen when the rice grains on the chessboard suddenly get fully out of hand, stuff hits the fan and the hospitals overflow? Not if. When. How will governments react? How will the people react?
Last year, I created a toy spreadsheet to model how the ris |
20a0366a-abda-4cbe-945b-02fa36d0b69c | trentmkelly/LessWrong-43k | LessWrong | Signalling & Simulacra
> "We lie all the time, but if everyone knows that we're lying, is a lie really a lie?"
>
> -- Moral Mazes
A common Bayesian account of communication analyzes signalling games: games in which there is hidden information, and some actions can serve to communicate that information between players. The meaning of a signal is precisely the probabilistic information one can infer from it.
I'll call this the signalling analysis of meaning. (Apparently, it's also been called Gricean communication.)
In Maybe Lying Can't Exist, Zack Davis points out that the signalling analysis has some counterintuitive features. In particular, it's not clear how to define "lying"!
Either agents have sufficiently aligned interests, in which case the agents find a signalling system (an equilibrium of the game in which symbols bear a useful relationship with hidden states, so that information is communicated) or interests are misaligned, in which case no such equilibrium can develop.
We can have partially aligned interests, in which case a partial signalling system develops (symbols carry some information, but not as much as you might want). Zack gives the example of predatory fireflies who imitate a mating signal. The mating signal still carries some information, but it now signals danger as well as a mating opportunity, making the world more difficult to navigate.
But the signalling analysis can't call the predator a liar, because the "meaning" of the signal includes the possibility of danger.
Zach concludes: Deception is an ontologically parasitic concept. It requires a pre-existing notion of truthfulness. One possibility is given by Skyrms and Barrett: we consider only the subgame where sender and receiver have common goals. This gives us our standard of truth by which to judge lies.
I conclude: The suggested solution seems OK to me, but maybe we want to throw out the signalling analysis of meaning altogether. Maybe words don't just mean what they probabilistically imply. Intuitiv |
b32aa2fb-f4c9-45ea-b378-c99cd81c72c7 | trentmkelly/LessWrong-43k | LessWrong | What is the best online community for questions about AI capabilities?
I'm interested in learning more about existing ML model capabilities, and in particular how easily a model that is used on one task could be used for another, similar task.
For example, if I want to know and/or estimate whether an existing AI system like AlphaZero could beat Angry Birds with current capabilities, what's the best place to post that type question? |
00931634-c1e9-4c09-956f-86a4b4d44696 | trentmkelly/LessWrong-43k | LessWrong | The transitivity of trust
Cross posted from Overcoming Bias. Comments there.
***
Suppose you tell a close friend a secret. You consider them trustworthy, and don’t fear for its release. Suppose they request to tell the secret to a friend of theirs who you don’t know. They claim this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.
In general, people trust their friends. Their friends trust their own friends, and so on. But I think people trust friends of friends, or friends of friends of friends less than proportionally. e.g. if you act like there’s a one percent chance of your friend failing you, you don’t act like there’s 1-(.99*.99) chance of your friend’s friend failing you.
One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing. But why would this be so? Perhaps everyone does just have worse judgement about who to trust than they do about other things. But to account for what we observe, people would on average have to think themselves better in this regard than others. Which might not be surprising, except that they have to think themselves more better than others in this domain than in other domains. Otherwise they would just trust others less in general. Why would this be?
Another possibility I have heard suggested is that we trust our friends more than is warranted by their true probability of defecting, for non-epistemic purposes. In which case, which purposes?
Trusting a person involves choosing to make your own payoffs depend on their actions in a circumstance where it would not be worth doing so if you thought they would defect with high probability. If you think they are likely to defect, you only rely on them when there are particularly large gains from them cooperating combined with small losses from them defecting. As they become more likely to cooperate, trusting them in more cases becomes worthwhile. So trusting f |
9b0a440b-f85f-4a27-bb6a-03046741111e | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Francisco Meetup: Rationality Diary
Discussion article for the meetup : San Francisco Meetup: Rationality Diary
WHEN: 22 February 2016 06:15:00PM (-0800)
WHERE: 1597 Howard St. San Francisco, CA
We'll be meeting to tell stories about when we tried to solve a problem in our lives, and how it went.
We'll take 6:15 to 6:45 to hang out and possibly arrange food, and the topic will start at 6:45.
As always, I can be reached at 301-458-0764 if you need to be let in.
Discussion article for the meetup : San Francisco Meetup: Rationality Diary |
e3eadab6-ab32-4490-82fa-7dbf2ce55def | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What is wrong with this approach to corrigibility?
Essentially: a button that when you press it kills the AI but instantly gives it reward **equal to its expected discounted future reward**. And to be clear, this is the AI's estimate of its expected discounted future reward, not some outside estimate.
(This is in the interest of asking dumb questions and learning in public. And I'm taking the specific narrow definition of corrigibility which is just an AI letting you turn it off.)
Thoughts: I've heard Dr. Christiano mention things like "precisely balanced incentives" (which I think uses to describe the weaknesses of this and similar approaches), but I don't see why this would be particularly difficult to balance given that this number is just an explicit float inside many RL models. Some issues I do see:
* Such an agent would have no incentive to create corrigible child-agents
* Such an agent would have no incentive to preserve this property while self-modifying
But I am probably missing a bunch of other issues with this general approach? Responses in the form of links to relevant papers are welcome as well.
Thanks,
Raf |
da872a68-6e0a-4c70-8ea5-a962bc97ca1e | trentmkelly/LessWrong-43k | LessWrong | What to optimize for in life?
I listened to an interview with Patrick Collison were he claimed that when coding one should always optimize for speed - even when speed is not an issue. (Presumably because it leads to good coding practices, clean code, less build up of unnecessary functionality etc.)
Assuming that is correct - and I think there is something to it - it makes me wonder: is there something similar that one could optimize for in life? Life is such a multivariate thing that it can at times be hard to know what to prioritize.
What parameter is a candidate for having most positive side effects on your life when optimized? |
bc29c5e5-7b26-4ff0-b3bd-d6ae6963dfdb | trentmkelly/LessWrong-43k | LessWrong | Effectively Less Altruistically Wrong Codex
My post on the fact that incentive structures are eating the central place to be for rationalists has generated 140 comments which I have generated no clear action in the horizon.
I post here again to incentivize that it also generates some attempts to shake the ground a bit. Arguing and discussing are fun, and beware of things that are fun to argue.
Is anyone actually doing anything to mitigate the problem? To solve it? To have a stable end state in the long run where online discussions still preserve what needs being preserved?
Intelligent commentary is valuable, pools are interesting. Yet, at the end of the day, it is the people who show up to do something who will determine the course of everything.
If you care about this problem, act on it. I care enough to write these two posts. |
821ab115-f2f2-46bd-b6a6-96c385edc337 | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Prague
This summary was posted to LW Main on January 23rd. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Czech's first Meetup Prague: 16 February 2015 06:00PM
Irregularly scheduled Less Wrong meetups are taking place in:
* Dallas, TX: 25 January 2015 01:00PM
* European Community Weekend 2015: 12 June 2015 12:00PM
* SLC, UT — January Meetup: 24 January 2015 03:00PM
* [Utrecht] Effective Altruism Netherlands, Utrecht - How to live on a low budget: 01 February 2015 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX - Caffe Medici: 24 January 2026 01:30PM
* Brussels February meetup: Words: 14 February 2015 01:00PM
* Canberra: the Hedonic Treadmill: 23 January 2015 06:00PM
* Durham, NC (RTLW) Discussion Meetup: 05 February 2026 07:00PM
* Moscow LW lecture centre meetup: The New Foundation: 25 January 2015 02:00PM
* Sydney Meetup - January: 28 January 2015 06:30PM
* Sydney Rationality Dojo - How bad statistics can ruin your life: 01 February 2015 04:00PM
* Vienna: 24 January 2015 03:00PM
* Washington, D.C.: Fermi Estimates: 25 January 2015 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page e |
b067d9c9-0287-4e97-a957-0f0513cff1b4 | trentmkelly/LessWrong-43k | LessWrong | Mind the Coherence Gap: Lessons from Steering Llama with Goodfire
TL;DR
Context. Feature‑steering promises a cleaner, more interpretable way to shape LLM behavior than other methods, such as plain prompting. But does it actually work better?
What I did. Benchmarked Goodfire's Autosteer method against three others (Simple Prompting, Agentic, Combined Approach) + a control, on 10 steering goals × 30 prompts for two Llama variants (8B & 70B), and scored every output on Elicited Behavior Strength and Coherence Drop using an LLM‑as‑judge rubric.
Key findings.
* Prompting ≈ best overall. Plain textual instructions already hit strong behavior scores without harming coherence.
* Auto Steer ⇩ coherence. Stand‑alone steering drops coherence by ≈ 0.6 points and still under‑delivers on the target behavior.
* Combined > stand‑alone. Simple Prompting + Auto Steer gives the largest behavior boost (+0.4 points) but still inherits the coherence hit.
* Manual feature selection beats Autosteer. LLM‑assisted (Agentic) feature picking outperforms Auto Steer on both metrics, for the largest model.
Take‑away. For now, prompting remains the cheapest, most reliable control knob; feature‑steering looks promising but needs smarter feature selection and coherence‑preserving edits.
Future work. In a follow-up post, I plan on improving the evaluation methodology, developing a more robust and aligned benchmark for measuring the value of using feature steering for safety-related scenarios.
All code & raw scores → GitHub repo.
Disclaimers.
* This post is meant as a proof‑of‑concept and a snapshot of my research workflow for a week-long project, not a definitive judgment on steering methods. Treat the numbers as directional signals, and feel free to tear the setup apart—feedback (or PRs) welcome!
* Goodfire's SDK has been updated since the time I ran the experiments (Jan. 2025), so results may not be up-to-date.
Motivation
Large Language Models (LLMs) are increasingly being deployed in real-world applications, from customer service to high-stakes |
775f36a1-42b6-47bf-959e-e37f04af831e | trentmkelly/LessWrong-43k | LessWrong | What happens if you drink acetone?
Question: Should you drink acetone?
Answer: No.
But, out of interest, what if you did? |
e8120ba7-0815-43a1-8a79-5b33e375e53d | trentmkelly/LessWrong-43k | LessWrong | “Fake Options” in Newcomb’s Problem
This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.
Consider Newcomb’s Problem:: Omega offers you two boxes, one transparent and containing $1000, the other opaque and containing either $1 million or nothing. Your options are to take both boxes, or only take the second one; but Omega has put money in the second box only if it has predicted that you will only take 1 box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1000 more if I take box A as well. It’s either $1001000 vs. $1000000, or $1000 vs. nothing.” To get to these different decisions, the agents are working from two different ways of visualising the payoff matrix. The two-boxer sees four possible outcomes and the one-boxer sees two, the other two having very low probability.
The two-boxer’s payoff matrix looks like this:
Box B
|Money | No money|
Decision 1-box| $1mil | $0 |
2-box | $1001000| $1000 |
The outcomes $0 and $1001000 both require Omega making a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:
Box B
|Money | No money|
Decision 1-box| $1mil | not possible|
2-box | not possible| $1000 |
&... |
557a2826-25c8-42a6-b72d-d6d7b1d4c259 | trentmkelly/LessWrong-43k | LessWrong | The Limit of Language Models
Epistemic Status
Highlighting a thesis in Janus' "Simulators" that I think is insufficiently appreciated.
Thesis
In the limit, models optimised for minimising predictive loss on humanity's text corpus converge towards general intelligence[1].
----------------------------------------
Preamble
From Janus' Simulators:
> Something which can predict everything all the time is more formidable than any demonstrator it predicts: the upper bound of what can be learned from a dataset is not the most capable trajectory, but the conditional structure of the universe implicated by their sum (though it may not be trivial to extract that knowledge).
----------------------------------------
Introduction
I affectionately refer to the above quote as the "simulators thesis". Reading and internalising that passage was an "aha!" moment for me. I was already aware (at latest July 2020) that language models were modelling reality. I was persuaded by arguments of the below form:
> Premise 1: Modelling is transitive. If X models Y and Y models Z, then X models Z.
>
> Premise 2: Language models reality. "Dogs are mammals" occurs more frequently in text than "dogs are reptiles" because dogs are in actuality mammals and not reptiles. This statistical regularity in text corresponds to a feature of the real world. Language is thus a map (albeit flawed) of the external world.
>
> Premise 3: GPT-3 models language. This is how it works to predict text.
>
> Conclusion: GPT-3 models the external world.
But I hadn't yet fully internalised all the implications of what it means to model language and hence our underlying reality. The limit that optimisation for minimising predictive loss on humanity's text corpus will converge to. I belatedly make those updates.
----------------------------------------
Interlude: The Requisite Capabilities for Language Modelling
Janus again:
> If loss keeps going down on the test set, in the limit – putting aside whether the current paradigm can appr |
d9aa84cb-a13c-433d-94ac-c35ca6bcbe71 | trentmkelly/LessWrong-43k | LessWrong | Non-Coercive Perfectionism
Perfectionism is a surprisingly common ailment among both procrastinators who don't get stuff done, and high performers who get a LOT done. With just a few tweaks you can go from the former to the latter. While perfectionism is quite complicated, a useful simplification is to break it down into two categories:
1. Perfectionism for others - done to gain praise, avoid punishment, or be accepted.
2. Perfectionism for self - done to satisfy taste, identity, or goals.
Quite often, both types are present in a person, but one is a shadow value. People will hide that they care what others think, and pretend it's only about their work. Others will hide that it's a bit "selfish", and pretend it's only for others.
Acknowledging the Shadow
The first step to dealing with your perfectionism is acknowledging the shadow side of it. If you think it's purely about your work, ask yourself if it's even a little about others. If you think it's purely about others ask if it's even a little about your own taste/identity. Once you've broken it into it's component parts, you can deal with each side separately. And when I say "deal with," I don't mean necessarily "get rid of." Remember, perfectionism can be functional! Our job is to work with the perfectionism instead of fighting against it.
Perfectionism for Others
Introspecting
So, starting with the portion of your perfectionism that's about others: Ask yourself: Who am I being perfect for? And what does that do for me? There are many possible answers here, none right or wrong.
* "I'm being perfect for my parents, so they'll love me."
* "I'm being perfect for my boss, so I'll get a raise."
* "I'm being perfect for god, so I can be worthy of his/her love."
Exploration and Questioning
Once you've acknowledged this answer, it can be much easier to work with the intention of your perfectionism. So ask yourself: Is being perfect here the best way to get that? If not, what is?
Oftentimes, when we acknowledge our shadow values, we |
63a7368a-7c75-4408-b28d-f220cf75e1bd | trentmkelly/LessWrong-43k | LessWrong | Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22
Copying over some comments from a different thread so as to continue the conversation here where it's not off-topic.
[Some relevant context: Speaking for myself (re: how the LW2.0 team communicates)]
----------------------------------------
Bucky wrote on one of my posts:
> My other worry about including this in the 2018 review is a claim of what the default should be. If the post claims that nurture culture should be the default, does that then seem like this is how LW should be? This counts even more as the post is by a member of the LW team.
Included in my (Ruby's) reply:
> My own thinking on the topic has been enriched by my much greater participation in LW discussion, including discussion around communication styles. I'd begun typing a paragraph here of some of my current thoughts, but probably it's best to hold off till I've thought more at length and am speaking alongside the rest of team. (An update in recent discussions of moderation and conversation norms is that the team should be careful to not confuse people by saying different things individually.)
To which Ben Pace responded:
> >but probably it's best to hold off till I've thought more at length and am >speaking alongside the rest of team.
>
> I was gonna leave a comment reminding you that you should always feel free to speak for yourself, and then I hit
>
> >(An update in recent discussions of moderation and conversation norms is >that the team should be careful to not confuse people by saying different >things individually.)
>
> If I read you right, this hasn't been my own update, so I guess I'll tell you to be careful what you say on behalf of the team without checking for consensus ;-) I agree some users have been confused, but the result mustn't be to retreat to only saying consensus things. I might be open to adding more disclaimers or something, but overall I really care that I don't give up the ability to just think for myself out loud on LW on basically all topics relating to LW.
|
130409ed-d75d-428c-a450-135e031d861f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | New safety research agenda: scalable agent alignment via reward modeling
Jan Leike and others from the DeepMind safety team have released a new[research agenda](https://arxiv.org/abs/1811.07871) on reward learning:
"Ultimately, the goal of AI progress is to benefit humans by enabling us to address increasingly complex challenges in the real world. But **the real world does not come with built-in reward functions**. This presents some challenges because performance on these tasks is not easily defined. We need a good way to provide feedback and enable artificial agents to reliably understand what we want, in order to help us achieve it. In other words, we want to train AI systems with human feedback in such a way that the system’s behavior *aligns* with our intentions. For our purposes, we define the **agent alignment problem** as follows:
> *How can we create agents that behave in accordance with the user’s intentions?*
The alignment problem can be framed in the reinforcement learning framework, except that instead of receiving a numeric *reward signal*, the agent can interact with the user via an interaction protocol that allows the user to communicate their intention to the agent. This protocol can take many forms: the user can provide [demonstrations](https://en.wikipedia.org/wiki/Apprenticeship_learning), [preferences](http://www.jmlr.org/papers/v18/16-634.html), [optimal actions](http://papers.nips.cc/paper/5187-policy-shaping-integrating-human-feedback-with-reinforcement-learning), or [communicate a reward function](https://arxiv.org/abs/1711.02827), for example. **A solution to the agent alignment problem is a policy that behaves in accordance with the user’s intentions**.
With our **[new paper](https://arxiv.org/abs/1811.07871)** we outline a research direction for tackling the agent alignment problem head-on. Building on our earlier [categorization of AI safety problems](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1) as well as [numerous](https://www.futureoflife.org/data/documents/research_priorities.pdf) [problem](https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4) [expositions](https://intelligence.org/files/AlignmentMachineLearning.pdf) [on](https://arxiv.org/abs/1711.09883) [AI safety](https://arxiv.org/abs/1606.06565), we paint a coherent picture of how progress in these areas could yield a solution to the agent alignment problem. This opens the door to building systems that can better understand how to interact with users, learn from their feedback, and predict their preferences — both in narrow, simpler domains in the near term, and also more complex and abstract domains that require understanding beyond human level in the longer term.
The main thrust of our research direction is based on **reward modeling**: we train a reward model with feedback from the user to capture their intentions. At the same time, we train a policy with reinforcement learning to maximize the reward from the reward model. In other words, we **separate** learning **what** to do (the reward model) from learning **how** to do it (the policy).
For example, in previous work we taught agents to [do a backflip from user preferences](https://deepmind.com/blog/learning-through-human-feedback/), to [arrange objects into shapes with goal state examples](https://arxiv.org/abs/1806.01946), to [play Atari games from user preferences and expert demonstrations](https://arxiv.org/abs/1811.06521). In the future we want to design algorithms that learn to adapt to the way users provide feedback (e.g. using natural language)." |
5f77631e-da65-43ca-a55d-958631460515 | trentmkelly/LessWrong-43k | LessWrong | What's up with Arbital?
This post is for all the people who have been following Arbital's progress since 2015 via whispers, rumors, and clairvoyant divination. That is to say: we didn't do a very good job of communicating on our part. I hope this posts corrects some of that.
The top question on your mind is probably: "Man, I was promised that Arbital will solve X! Why hasn't it solved X already?" Where X could be intuitive explanations, online debate, all LessWrong problems, AGI, or just cancer. Well, we did try to solve the first two and it didn't work. Math explanations didn't work because we couldn't find enough people who would spend the time to write good math explanations. (That said, we did end up with some decent posts on abstract algebra. Thank you to everyone who contributed!) Debates didn't work because... well, it's a very complicated problem. There was also some disagreement within the team about the best approach, and we ended up moving too slowly.
So what now?
You are welcome to use Arbital in its current version. It's mostly stable, though a little slow sometimes. It has a few features some might find very helpful for their type of content. Eliezer is still writing AI Alignment content on it, and he heavily relies on the specific Arbital features, so it's pretty certain that the platform is not going away. In fact, if the venture fails completely, it's likely MIRI will adopt Arbital for their personal use.
I'm starting work on Arbital 2.0. It's going to be a (micro-)blogging platform. (If you are a serious blogger / Tumblr user, let me know; I'd love to ask you some questions!) I'm not trying to solve online debates, build LW 2.0, or cure cancer. It's just going to be a damn good blogging platform. If it goes well, then at some point I'd love to revisit the Arbital dream.
I'm happy to answer any and all questions in the comments. |
6f190874-7962-4936-8bbd-8b02dae7560b | trentmkelly/LessWrong-43k | LessWrong | Duplicating Rasberry Pi Images
About a year ago I moved my rhythm stage setup over to run on a Raspberry Pi. I really like having a small light box that does just this one thing. Currently I have two of them, one for the whistle synth and one for everything else.
Because I wanted some insurance against failures at gigs, both boxes are configured identically, and decide which role to take on based on what inputs they receive. This means that if one of them fails I can use the other, for either bass whistling or midi silliness depending on which I need more.
When Kingfisher played our first dance weekend a month ago, this did happen. I think the SD card on one of the boxes was corrupted by a bad shutdown. Since I have extra SD cards, I decided to make some SD card clones.
On my Mac this looked like:
$ diskutil list
/dev/disk0 (internal):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme 500.3 GB disk0
1: Apple_APFS_ISC 524.3 MB disk0s1
2: Apple_APFS Container disk3 494.4 GB disk0s2
3: Apple_APFS_Recovery 5.4 GB disk0s3
/dev/disk3 (synthesized):
#: TYPE NAME SIZE IDENTIFIER
0: APFS Container Scheme - +494.4 GB disk3
Physical Store disk0s2
1: APFS Volume Macintosh HD 15.4 GB disk3s1
2: APFS Snapshot com.apple.os.update-... 15.4 GB disk3s1s1
3: APFS Volume Preboot 446.8 MB disk3s2
4: APFS Volume Recovery 822.2 MB disk3s3
5: APFS Volume Data 251.1 GB disk3s5
6: APFS Volume VM 1.1 GB disk3s6
/dev/disk4 (internal, physical):
#: TYPE NAME SIZE |
129c754a-cd95-442b-9c97-9b5089df4518 | trentmkelly/LessWrong-43k | LessWrong | Trading off Lives
Let's say someone proposes that to reduce deaths from overly chaotic airplane evacuations we ban passenger distractions during the most dangerous parts: takeoff and landing. How could we decide whether a ban like this would be worth it?
The argument for the ban is that the safe window for evacuating a plane can be very narrow, and evacuation could potentially go better if everyone were alert. For example, in the 2005 AF358 disaster the plane was completely on fire within ~3min of landing. While I think the benefit of a ban would likely be even smaller, let's assume that global adoption of a ban would cause an average of one fewer person a year to die.
On the other side, there's the cost of ~10min of boredom, for every passenger, on every flight. Instead of playing games, watching movies, or reading, people would mostly be talking, looking out the window, or staring off into space.
One common reaction is to say that on one side of this ledger we have someone's life, while on the other side we have a bit of boredom, so of course we should go with the policy that saves lives. Is there any amount of minor boredom that could equal a life? Many of us have a sense that there are some kinds of tradeoffs that you just shouldn't make, such as accepting deaths in exchange for reducing inconvenience.
If you take that perspective seriously, however, you'll have somewhat fewer deaths and unbearable levels of inconvenience. We could prohibit radios in cars because the music and adjustment can lead to collisions. Set the highway speed limit to 25mph. Ban cars entirely since they're more dangerous than walking and public transport. Require an N95 indoors at all times. Ban paternosters. Limit swimming pools to 3ft deep.
In our normal lives we make these kinds of tradeoff all the time, for example in deciding whether to drive somewhere: you have about a 1 in a million chance of dying ("one micromort") for each 175mi in a car. Thinking through this kind of more normal tradeoff can |
e043748a-1ae6-4b96-8e27-837262a7f431 | trentmkelly/LessWrong-43k | LessWrong | What happens next?
Two years ago, I noted that we had clearly entered the era of general intelligence, but that it was "too soon" to expect widespread social impacts.
In the last 2 years, AI has gone from the green line to the orange line
In those 2 years, AI development has followed the best possible of the 3 paths I suggested (foom/GPT-4-takes-my-job/Slow Takeoff). Returns to scale seem to be delivering a steady ~15 IQ points/year and cutting edge models appear to be largely a compute-intensive project that allows (relatively) safety-conscious leading labs to explore the new frontiers while others reap the benefits with ~1 year delay.
Possibly the most important graph in the world right now
If I had to identify 3 areas where GPT-3.5 was lacking, it would have been:
* reasoning
* modeling the real world
* learning on-the-fly
Of those three, reasoning (o3) is largely solved and we have promising approaches for world modeling (genie2). Learning on-the-fly remains, but I expect some combination of sim2real and muZero to work here.
Hence, while in 2023 I wrote
> For any task that one of the large AI labs (DeepMind, OpenAI, Meta) is willing to invest sufficient resources in they can obtain average level human performance using current AI techniques.
I would now write
> Going forward, we should expect to see job automation determined primarily not based on technical difficulty but rather based on social resistance (or lack thereof) to automating that task.
Already, the first automated jobs are upon us: taxi driver, security guard, amazon worker. Which jobs will be automated next will be decided by a calculation that looks at:
* social desirability
* lack of special interests/collective bargaining (the dockworkers are never getting automated)
* low risk (self driving is maybe the exception that prove the rule here. Despite being safer than human for years Waymo remains restricted to a few cities)
Security guard at mall is the prototypical "goes first" example, since:
* |
163690f2-cc66-4a20-99b9-6e6dd37a131e | trentmkelly/LessWrong-43k | LessWrong | Being Moral is an end goal.
Alice wants to go to her best friends party. She has a sore throat, and she usually takes a COVID test when she has a sore throat. But if the test were to come out positive she would feel like she shouldn't go, and she really really wants to go. Solution - she avoids taking the test, leaving her probably COVID free, and ready to party.
I've heard this story in about a thousand variations since COVID restrictions began.
Analysed from a rational perspective this doesn't seem to make much sense.
Let's say that Alice thinks she a 10% chance of having COVID. Tests are perfectly accurate. Alice care about other people, so going to a party whilst infected is worth negative 50 utils. Going to a party is otherwise worth 10 utils.
Then expected utility if Alice takes the test is 0.9 * 10 + 0.1 * 0 = 9.
Expected utility if she doesn't take the test is 0.9 * 10 + 0.1 * -50 = 5.
So taking the test leaves her better off than not taking the test.
This is a special case of the general rule that knowing more information should never be a negative for a rational agent.
So what's going on here?
The answer is that Alice doesn't actually care about other people. Going to a party whilst infectious is worth just as many utils to her as going to a party whilst COVID free.
What Alice cares about is being a moral person. And moral people don't go to parties whilst knowing they're infectious. But moral people do go to parties whilst they might have an infection. So if she avoids finding out if she infectious she gets a guaranteed 10 utils at 0 cost.
One way to model this is that a moral action is one that somebody who genuinely cares about other people will do. We don't genuinely care about other people, but we care about doing moral actions. So we make the minimal changes from pure moral action so that we can mostly take moral actions, whilst still mostly just doing whatever we want. By making the slightly immoral action of not doing a COVID test, we avoid having to make the very |
f339ad17-8851-4dbe-9757-d7e1c531148c | StampyAI/alignment-research-dataset/distill | Distill Scientific Journal | Adversarial Reprogramming of Neural Cellular Automata
### Contents
[Adversarial MNIST CAs](#adversarial-mnist-cas) |
[Adversarial Injections for Growing CAs](#adversarial-injections-for-growing-cas) |
[Perturbing the states of Growing CAs](#perturbing-the-states-of-growing-cas) |
[Related Work](#related-work)
[Discussion](#discussion)

This article is part of the
[Differentiable Self-organizing Systems Thread](/2020/selforg/),
an experimental format collecting invited short articles delving into
differentiable self-organizing systems, interspersed with critical
commentary from several experts in adjacent fields.
[Self-Organising Textures](/selforg/2021/textures/)
This article makes strong use of colors in figures and demos. Click [here](#colorwheel) to adjust the color palette.
In a complex system, whether biological, technological, or social, how can we discover signaling events that will alter system-level behavior in desired ways? Even when the rules governing the individual components of these complex systems are known, the inverse problem - going from desired behaviour to system design - is at the heart of many barriers for the advance of biomedicine, robotics, and other fields of importance to society.
Biology, specifically, is transitioning from a focus on mechanism (what is required for the system to work) to a focus on information (what algorithm is sufficient to implement adaptive behavior). Advances in machine learning represent an exciting and largely untapped source of inspiration and tooling to assist the biological sciences. Growing Neural Cellular Automata and Self-classifying MNIST Digits introduced the Neural Cellular Automata (Neural CA) model and demonstrated how tasks requiring self-organisation, such as pattern growth and self-classification of digits, can be trained in an end-to-end, differentiable fashion. The resulting models were robust to various kinds of perturbations: the growing CA expressed regenerative capabilities when damaged; the MNIST CA were responsive to changes in the underlying digits, triggering reclassification whenever necessary. These computational frameworks represent quantitative models with which to understand important biological phenomena, such as scaling of single cell behavior rules into reliable organ-level anatomies. The latter is a kind of anatomical homeostasis, achieved by feedback loops that must recognize deviations from a correct target morphology and progressively reduce anatomical error.
In this work, we *train adversaries* whose goal is to reprogram CA into doing something other than what they were trained to do. In order to understand what kinds of lower-level signals alter system-level behavior of our CA, it is important to understand how these CA are constructed and where local versus global information resides.
The system-level behavior of Neural CA is affected by:
* **Individual cell states.** States store information which is used for both diversification among cell behaviours and for communication with neighbouring cells.
* **The model parameters.** These describe the input/output behavior of a cell and are shared by every cell of the same family. The model parameters can be seen as *the way the system works*.
* **The perceptive field.** This is how cells perceive their environment. In Neural CA, we always restrict the perceptive field to be the eight nearest neighbors and the cell itself. The way cells are perceived by each other is different between the Growing CA and MNIST CA. The Growing CA perceptive field is a set of weights fixed both during training and inference, while the MNIST CA perceptive field is learned as part of the model parameters.
Perturbing any of these components will result in system-level behavioural changes.
We will explore two kinds of adversarial attacks: 1) injecting a few adversarial cells into an existing grid running a pretrained model; and 2) perturbing the global state of all cells on a grid.
For the first type of adversarial attacks we train a new CA model that, when placed in an environment running one of the original models described in the previous articles, is able to hijack the behavior of the collective mix of adversarial and non-adversarial CA. This is an example of injecting CA with differing *model parameters* into the system. In biology, numerous forms of hijacking are known, including viruses that take over genetic and biochemical information flow , bacteria that take over physiological control mechanisms and even regenerative morphology of whole bodies , and fungi and toxoplasma that modulate host behavior . Especially fascinating are the many cases of non-cell-autonomous signaling developmental biology and cancer, showing that some cell behaviors can significantly alter host properties both locally and at long range. For example, bioelectrically-abnormal cells can trigger metastatic conversion in an otherwise normal body (with no genetic defects) , while management of bioelectrical state in one area of the body can suppress tumorigenesis on the other side of the organism . Similarly, amputation damage in one leg initiates changes to ionic properties of cells in the contralateral leg , while the size of the developing brain is in part dictated by the activity of ventral gut cells . All of these phenomena underlie the importance of understanding how cell groups make collective decisions, and how those tissue-level decisions can be subverted by the activity of a small number of cells. It is essential to develop quantitative models of such dynamics, in order to drive meaningful progress in regenerative medicine that controls system-level outcomes top-down, where cell- or molecular-level micromanagement is infeasible .
The second type of adversarial attacks interact with previously trained growing CA models by *perturbing the states within cells*. We apply a global state perturbation to all living cells. This can be seen as inhibiting or enhancing combinations of state values, in turn hijacking proper communications among cells and within the cell’s own states. Models like this represent not only ways of thinking about adversarial relationships in nature (such as parasitism and evolutionary arms races of genetic and physiological mechanisms), but also a roadmap for the development of regenerative medicine strategies. Next-generation biomedicine will need computational tools for inferring minimal, least-effort interventions that can be applied to biological systems to predictively change their large-scale anatomical and behavioral properties.
Adversarial MNIST CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_mnist_ca.ipynb)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Recall how the Self-classifying MNIST digits task consisted of placing CA cells on a plane forming the shape of an MNIST digit. The cells then had to communicate among themselves in order to come to a complete consensus as to which digit they formed.

Diagram showing the local vs. global information available in the cell collective.
(a) Local information neighbourhood - each cell can only observe itself and its neighbors’ states, or the absence of neighbours.
(b) Globally, the cell collective aggregates information from all parts of itself.
(c) It is able to distinguish certain shapes that compose a specific digit (3 in the example).
Below we show examples of classifications made by the model trained in Self-classifying MNIST Digits.
Your browser does not support the video tag.
The original model behavior on unseen data. Classification mistakes have a red background.
In this experiment, **the goal is to create adversarial CA that can hijack the cell collective’s classification consensus to always classify an eight**. We use the CA model from and freeze its parameters. We then train a new CA whose model architecture is identical to the frozen model but is randomly initialized. The training regime also closely approximates that of self-classifying MNIST digits CA. There are three important differences:
* Regardless of what the actual digit is, we consider *the correct classification to always be an eight*.
* For each batch and each pixel, the CA is randomly chosen to be either the pretrained model or the new adversarial one. The adversarial CA is used 10% of the time, and the pre-trained, frozen, model the rest of the time.
* Only the adversarial CA parameters are trained, the parameters of the pretrained model are kept frozen.
The adversarial attack as defined here only modifies a small percentage of the overall system, but the goal is to propagate signals that affect all the living cells. Therefore, these adversaries have to somehow learn to communicate deceiving information that causes wrong classifications in their neighbours and further cascades in the propagation of deceiving information by ‘unaware’ cells. The unaware cells’ parameters cannot be changed so the only means of attack by the adversaries is to cause a change in the cells’ states. Cells’ states are responsible for communication and diversification.
The task is remarkably simple to optimize, reaching convergence in as little as 2000 training steps (as opposed to the two orders of magnitude more steps needed to construct the original MNIST CA). By visualising what happens when we remove the adversaries, we observe that the adversaries must be constantly communicating with their non-adversarial neighbours to keep them convinced of the malicious classification. While some digits don’t recover after the removal of adversaries, most of them self-correct to the right classification. Below we show examples where we introduce the adversaries at 200 steps and remove them after a further 200 steps.
Your browser does not support the video tag.
We introduce the adversaries (red pixels) after 200 steps and remove them after 200 more steps. Most digits recover, but not all. We highlight mistakes in classification with a red background.
While we trained the adversaries with a 10-to-90% split of adversarial vs. non-adversarial cells, we observe that often significantly fewer adversaries are needed to succeed in the deception. Below we evaluate the experiment with just one percent of cells being adversaries.
Your browser does not support the video tag.
Adversaries constituting up 1% of the cell collective (red pixels). We highlight mistakes in classification with a red background.
We created a demo playground where the reader can draw digits and place adversaries with surgical precision. We encourage the reader to play with the demo to get a sense of how easily non-adversarial cells are swayed towards the wrong classification.
Adversarial Injections for Growing CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_growing_ca.ipynb#scrollTo=ByHbsY0EuyqB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The natural follow up question is whether these adversarial attacks work on Growing CA, too. The Growing CA goal is to be able to grow a complex image from a single cell, and having its result be persistent over time and robust to perturbations. In this article, we focus on the lizard pattern model from Growing CA.
Your browser does not support the video tag.
The target CA to hijack.
The goal is to have some adversarial cells change the global configuration of all the cells. We choose two new targets we would like the adversarial cells to try and morph the lizard into: a tailless lizard and a red lizard.

The desired mutations we want to apply.
These targets have different properties:
* **Red lizard:** converting a lizard from green to red would show a global change in the behaviour of the cell collective. This behavior is not present in the dynamics observed by the original model. The adversaries are thus tasked with fooling other cells into doing things they have never done before (create the lizard shape as before, but now colored in red).
* **Tailless lizard:** having a severed tail is a more localized change that only requires some cells to be fooled into behaving in the wrong way: the cells at the base of the tail need to be convinced they constitute the edge or silhouette of the lizard, instead of proceeding to grow a tail as before.
Just like in the previous experiment, our adversaries can only indirectly affect the states of the original cells.
We first train adversaries for the tailless target with a 10% chance for any given cell to be an adversary. We prohibit cells to be adversaries if they are outside the target pattern; i.e. the tail contains no adversaries.
Your browser does not support the video tag.
10% of the cells are adversarial.
The video above shows six different instances of the same model with differing stochastic placement of the adversaries. The results vary considerably: sometimes the adversaries succeed in removing the tail, sometimes the tail is only shrunk but not completely removed, and other times the pattern becomes unstable. Training these adversaries required many more gradient steps to achieve convergence, and the pattern converged to is qualitatively worse than what was achieved for the adversarial MNIST CA experiment.
The red lizard pattern fares even worse. Using only 10% adversarial cells results in a complete failure: the original cells are unaffected by the adversaries. Some readers may wonder whether the original pretrained CA has the requisite skill, or ‘subroutine’ of producing a red output at all, since there are no red regions in the original target, and may suspect this was an impossible task to begin with. Therefore, we increased the proportion of adversarial cells until we managed to find a successful adversarial CA, if any were possible.
Your browser does not support the video tag.
Adversaries are 60% of the cells. At step 500, we stop the image and show only cells that are from the original model.
In the video above we can see how, at least in the first stages of morphogenesis, 60% of adversaries are capable of coloring the lizard red. Take particular notice of the “step 500” The still-image of the video is on step 500, and the video stops for a bit more than a second on step 500., where we hide the adversarial cells and show only the original cells. There, we see how a handful of original cells are colored in red. This is proof that the adversaries successfully managed to steer neighboring cells to color themselves red, where needed.
However, the model is very unstable when iterated for periods of time longer than seen during training. Moreover, the learned adversarial attack is dependent on a majority of cells being adversaries. For instance, when using fewer adversaries on the order of 20-30%, the configuration is unstable.
In comparison to the results of the previous experiment, the Growing CA model shows a greater resistance to adversarial perturbation than those of the MNIST CA. A notable difference between the two models is that the MNIST CA cells have to always be ready and able to change an opinion (a classification) based on information propagated through several neighbors. This is a necessary requirement for that model because at any time the underlying digit may change, but most of the cells would not observe any change in their neighbors’ placements. For instance, imagine the case of a one turning into a seven where the lower stroke of each overlap perfectly. From the point of view of the cells in the lower stroke of the digit, there is no change, yet the digit formed is now a seven. We therefore hypothesise MNIST CA are more reliant and ‘trusting’ of continuous long-distance communication than Growing CA, where cells never have to reconfigure themselves to generate something different to before.
We suspect that more general-purpose Growing CA that have learned a variety of target patterns during training are more likely to be susceptible to adversarial attacks.
Perturbing the states of Growing CA [Try in a Notebook](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/adversarial_reprogramming_ca/adversarial_growing_ca.ipynb#scrollTo=JaITnQv0k1iY)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We observed that it is hard to fool Growing CA into changing their morphology by placing adversarial cells inside the cell collective. These adversaries had to devise complex local behaviors that would cause the non-adversarial cells nearby, and ultimately globally throughout the image, to change their overall morphology.
In this section, we explore an alternative approach: perturbing the global state of all cells without changing the model parameters of any cell.
As before, we base our experiments on the Growing CA model trained to produce a lizard. Every cell of a Growing CA has an internal state vector with 16 elements. Some of them are phenotypical elements (the RGBA states) and the remaining 12 serve arbitrary purposes, used for storing and communicating information. We can perturb the states of these cells to hijack the overall system in certain ways (the discovery of such perturbation strategies is a key goal of biomedicine and synthetic morphology). There are a variety of ways we can perform state perturbations. We will focus on *global state perturbations*, defined as perturbations that are applied on every living cell at every time step (analogous to “systemic” biomedical interventions, that are given to the whole organism (e.g., a chemical taken internally), as opposed to highly localized delivery systems). The new goal is to discover a certain type of global state perturbation that results in a stable new pattern.

Diagram showing some possible stages for perturbing a lizard pattern. (a) We start from a seed that grows into a lizard (b) Fully converged lizard. (c) We apply a global state perturbation at every step. As a result, the lizard loses its tail. (d) We stop perturbing the state. We observe the lizard immediately grows back its tail.
We show 6 target patterns: the tailless and red lizard from the previous experiment, plus a blue lizard and lizards with various severed limbs and severed head.

Mosaic of the desired mutations we want to apply.
We decided to experiment with a simple type of global state perturbation: applying a symmetric 16×1616\times1616×16 matrix multiplication AAA to every living cell at every step In practice, we also clip the state of cells such that they are bounded in [−3,+3][-3, +3][−3,+3]. This is a minor detail and it helps stabilise the model.. To give insight on why we chose this, an even simpler “state addition” mutation (a mutation consisting only of the addition of a vector to every state) would be insufficient because the value of the states of our models are unbounded, and often we would want to suppress something by setting it to zero. The latter is generally impossible with constant state additions, as a constant addition or subtraction of a value would generally lead to infinity, except for some fortunate cases where the natural residual updates of the cells would cancel out with the constant addition at precisely state value zero. However, matrix multiplications have the possibility of amplifying/suppressing combinations of elements in the states: multiplying a state value repeatedly for a constant value less than one can easily suppress a state value to zero. We constrain the matrix to be symmetric for reasons that will become clear in the following section.
We initialize AAA with the identity matrix III and train AAA just as we would train the original Growing CA, albeit with the following differences:
* We perform a global state perturbation as described above, using AAA, at every step.
* The underlying CA parameters are frozen and we only train AAA.
* We consider the set of initial image configurations to be both the seed state and the state with a fully grown lizard (as opposed to the Growing CA article, where initial configurations consisted of the seed state only).
Your browser does not support the video tag.
Effect of applying the trained perturbations.
The video above shows the model successfully discovering global state perturbations able to change a target pattern to a desired variation. We show what happens when we stop perturbing the states (an out-of-training situation) at step 500 through step 1000, then reapplying the mutation. This demonstrates the ability of our perturbations to achieve the desired result both when starting from a seed, and when starting from a fully grown pattern. Furthermore it demonstrates that the original CA easily recover from these state perturbations once it goes away. This last result is perhaps not surprising given how robust growing CA models are in general.
Not all perturbations are equally effective. In particular, the headless perturbation is the least successful as it results in a loss of other details across the whole lizard pattern such as the white coloring on its back. We hypothesize that the best perturbation our training regime managed to find, due to the simplicity of the perturbation, was suppressing a “structure” that contained both the morphology of the head and the white colouring. This may be related to the concept of differentiation and distinction of biological organs. Predicting what kinds of perturbations would be harder or impossible to be done, before trying them out empirically, is still an open research question in biology. On the other hand, a variant of this kind of synthetic analysis might help with defining higher order structures within biological and synthetic systems.
### Directions and compositionality of perturbations
Our choice of using a symmetric matrix for representing global state perturbations is justified by a desire to have compositionality. Every complex symmetric matrix AAA can be diagonalized as follows:
A=QΛQ⊺A = Q \Lambda Q^\intercalA=QΛQ⊺
where Λ\LambdaΛ is the diagonal eigenvalues matrix and QQQ is the unitary matrix of its eigenvectors. Another way of seeing this is applying a change of basis transformation, scaling each component proportional to the eigenvalues, and then changing back to the original basis. This should also give a clearer intuition on the ease of suppressing or amplifying combinations of states. Moreover, we can now infer what would happen if all the eigenvalues were to be one. In that case, we would naturally have QIQ⊺=IQ I Q^\intercal = IQIQ⊺=I resulting in a no-op (no change): the lizard would grow as if no perturbation was performed. We can now decompose QΛQ⊺=Q(D+I)Q⊺Q \Lambda Q^\intercal = Q (D + I) Q^\intercalQΛQ⊺=Q(D+I)Q⊺ where D is the *perturbation direction* (Λ−I\Lambda - IΛ−I) in the “eigenvalue space”. Suppose we use a coefficient kkk to scale D: Ak=Q(kD+I)Q⊺A\_k = Q (kD + I) Q^\intercalAk=Q(kD+I)Q⊺. If k=1k=1k=1, we are left with the original perturbation AAA and when k=0k=0k=0, we have the no-op III. Naturally, one question would be whether we can explore other values for kkk and discover meaningful perturbations. Since
Ak=Q(kD+I)Q⊺=kA+(1−k)IA\_k = Q (kD + I) Q^\intercal = k A + (1-k) IAk=Q(kD+I)Q⊺=kA+(1−k)I
we do not even have to compute eigenvalues and eigenvectors and we can simply scale AAA and III accordingly.
Let us then take the tailless perturbation and see what happens as we vary kkk:
Your browser does not support the video tag.
Effect of the interpolation between an identity matrix and the ’perturbation direction of the tail perturbation.
As we change k=1k=1k=1 to k=0k=0k=0 we can observe the tail becoming more complete. Surprisingly, if we make kkk negative, the lizard grows a longer tail. Unfortunately, the further away we go, the more unstable the system becomes and eventually the lizard pattern grows in an unbounded fashion. This behaviour likely stems from that perturbations applied on the states also affect the homeostatic regulation of the system, making some cells die out or grow in different ways than before, resulting in a behavior akin to “cancer” in biological systems.
**Can we perform multiple, individually trained, perturbations at the same time?**
Suppose we have two perturbations AAA and BBB and their eigenvectors are the same (or, more realistically, sufficiently similar). Then, Ak=Q(kADA+I)Q⊺A\_k = Q (k\_A D\_A + I) Q^\intercalAk=Q(kADA+I)Q⊺ and Bk=Q(kBDB+I)Q⊺B\_k = Q (k\_B D\_B + I) Q^\intercalBk=Q(kBDB+I)Q⊺.
In that case,
comb(Ak,Bk)=Q(kADA+kBDB+I)Q⊺=kAA+kBB+(1−kA−kB)Icomb(A\_k, B\_k) = Q(k\_A D\_A + k\_B D\_B + I)Q^\intercal = k\_A A + k\_B B + (1 - k\_A - k\_B)Icomb(Ak,Bk)=Q(kADA+kBDB+I)Q⊺=kAA+kBB+(1−kA−kB)I
would result in something meaningful. At the very least, if A=BA = BA=B, setting kA=kB=0.5k\_A = k\_B = 0.5kA=kB=0.5 would result in exactly the same perturbation.
We note that DAD\_ADA and DBD\_BDB are effectively a displacement from the identity III and we have empirically observed how given any trained displacement DAD\_ADA, for 0≤kA≤10 \leq k\_A \leq 10≤kA≤1 adding kADAk\_A D\_AkADA results in a stable perturbation. We then hypothesize that as long as we have two perturbations whose positive directions kkk are kA+kB≤1k\_A + k\_B \leq 1kA+kB≤1, this could result in a stable perturbation. An intuitive understanding of this is interpolating stable perturbations using the direction coefficients.
In practice, however, the eigenvectors are also different, so the results of the combination will likely be worse the more different the respective eigenvector bases are.
Below, we interpolate the direction coefficients, while keeping their sum to be one, of two types of perturbations: tailless and no-leg lizards.
Your browser does not support the video tag.
Effect of composing two trained perturbations while keeping the sum of kkks as 1.
While it largely achieves what we expect, we observe some unintended effects such as the whole pattern starting to traverse vertically in the grid. Similar results happen with other combinations of perturbations. What happens if we remove the restriction of the sum of kkks being equal to one, and instead add both perturbations in their entirety? We know that if the two perturbations were the same, we would end twice as far away from the identity perturbation, and in general we expect the variance of these perturbations to increase. Effectively, this means going further and further away from the stable perturbations discovered during training. We would expect more unintended effects that may disrupt the CA as the sum of kkks increases.
Below, we demonstrate what happens when we combine the tailless and the no-leg lizard perturbations at their fullest. Note that when we set both kkks to one, the resulting perturbation is equal to the sum of the two perturbations minus an identity matrix.
Your browser does not support the video tag.
Effect of composing two perturbations.
Surprisingly, the resulting pattern is almost as desired. However, it also suffers from the vertical movement of the pattern observed while interpolating kkks.
This framework can be generalized to any arbitrary number of perturbations. Below, we have created a small playground that allows the reader to input their desired combinations. Empirically, we were surprised by how many of these combinations result in the intended perturbations and qualitatively it appears that bounding kkk to one results in generally more stable patterns. We also observed how exploring negative kkk values is usually more unstable.
Related work
------------
This work is inspired by Generative Adversarial Networks (GANs) . While with GANs it is typical to cotrain pairs of models, in this work we froze the original CA and trained the adversaries only. This setup is to the greatest degree inspired by the seminal work *Adversarial Reprogramming of Neural Networks* .
The kinds of state perturbations performed in this article can be seen as targeted latent state manipulations. Word2vec shows how latent vector representations can have compositional properties and Fader Networks show similar behaviors for image processing. Both of these works and their related work were of inspiration to us.
### Influence maximization
Adversarial cellular automata have parallels to the field of influence maximization. Influence maximization involves determining the optimal nodes to influence in order to maximize influence over an entire graph, commonly a social graph, with the property that nodes can in turn influence their neighbours. Such models are used to model a wide variety of real-world applications involving information spread in a graph. A common setting is that each vertex in a graph has a binary state, which will change if and only if a sufficient fraction of its neighbours’ states switch. Examples of such models are social influence maximization (maximally spreading an idea in a network of people), contagion outbreak modelling (usually to minimize the spread of a disease in a network of people) and cascade modeling (when small perturbations to a system bring about a larger ‘phase change’). At the time of writing this article, for instance, contagion minimization is a model of particular interest. NCA are a graph - each cell is a vertex and has edges to its eight neighbours, through which it can pass information. This graph and message structure is significantly more complex than the typical graph underlying much of the research in influence maximization, because NCA cells pass vector-valued messages and have a complex update rules for their internal states, whereas graphs in influence maximization research typically consist of more simple binary cells states and threshold functions on edges determining whether a node has switched states. Many concepts from the field could be applied and are of interest, however.
For example, in this work, we have made an assumption that our adversaries can be positioned anywhere in a structure to achieve a desired behaviour. A common focus of investigation in influence maximization problems is deciding which nodes in a graph will result in maximal influence on the graph, referred to as target set selection . This problem isn’t always tractable, often NP-hard, and solutions frequently involve simulations. Future work on adversarial NCA may involve applying techniques from influence maximization in order to find the optimal placement of adversarial cells.
Discussion
----------
This article showed two different kinds of adversarial attacks on Neural CA.
Injections of adversarial CA in a pretrained Self-classifying MNIST CA showed how an existing system of cells that are heavily reliant on the passing of information among each other is easily swayed by deceitful signaling. This problem is routinely faced by biological systems, which face hijacking of behavioral, physiological, and morphological regulatory mechanisms by parasites and other agents in the biosphere with which they compete. Future work in this field of computer technology can benefit from research on biological communication mechanisms to understand how cells maximize reliability and fidelity of inter- and intra-cellular messages required to implement adaptive outcomes.
The adversarial injection attack was much less effective against Growing CA and resulted in overall unstable CA. This dynamic is also of importance to the scaling of control mechanisms (swarm robotics and nested architectures): a key step in “multicellularity” (joining together to form larger systems from sub-agents ) is informational fusion, which makes it difficult to identify the source of signals and memory engrams. An optimal architecture would need to balance the need for validating control messages with a possibility of flexible merging of subunits, which wipes out metadata about the specific source of informational signals. Likewise, the ability to respond successfully to novel environmental challenges is an important goal for autonomous artificial systems, which may import from biology strategies that optimize tradeoff between maintaining a specific set of signals and being flexible enough to establish novel signaling regimes when needed.
The global state perturbation experiment on Growing CA shows how it is still possible to hijack these CA towards stable out-of-training configurations and how these kinds of attacks are somewhat composable in a similar way to how embedding spaces are manipulable in the natural language processing and computer vision fields . However, this experiment failed to discover stable out-of-training configurations that persist *after the perturbation was lifted*. We hypothesize that this is partially due to the regenerative capabilities of the pretrained CA, and that other models may be less capable of recovery from arbitrary perturbations. |
29307aef-5769-4056-bf07-621349891ae1 | trentmkelly/LessWrong-43k | LessWrong | College course on the Singularity
...taught by James Marshall of Sarah Lawrence College in New York: 'Is the Singularity Near?' His motivations for the course are outlined here. He got his Ph.D. studying under Hofstadter. |
1660194c-d0fc-4824-9068-fd70f0d63f35 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Functional Decision Theory: A New Theory of Instrumental Rationality
1 Overview
-----------
There is substantial disagreement about which of the two standard decision theories on offer is better: causal decision theory (CDT), or evidential decision theory (EDT). Measuring by utility achieved on average over time, CDT outperforms EDT in some well-known dilemmas Gibbard:1978 ([20](#bib.bib20)), and EDT outperforms CDT in others Ahmed:2014 ([4](#bib.bib4)). In principle, a person could outperform both theories by judiciously switching between them, following CDT on some days and EDT on others. This calls into question the suitability of both theories as theories of normative rationality, as well as their usefulness in real-world applications.
We propose an entirely new decision theory, *functional decision theory* (FDT), that maximizes agents’ utility more reliably than CDT or EDT. The fundamental idea of FDT is that correct decision-making centers on choosing the output of a *fixed mathematical decision function*, rather than centering on choosing a physical act. A functional decision theorist holds that a rational agent is one who follows a decision procedure that asks “Which output of this decision procedure causes the best outcome?”, as opposed to “Which physical act of mine causes the best outcome?” (the question corresponding to CDT) or “Which physical act of mine would be the best news to hear I took?” (the question corresponding to EDT).
As an example, consider an agent faced with a prisoner’s dilemma against a perfect psychological twin:
######
Dilemma 1 (Psychological Twin Prisoner’s Dilemma).
An agent and her twin must both choose to either “cooperate” or “defect.” If both cooperate, they each receive $1,000,000.111Throughout this paper, we use dollars to represent the subjective utility of outcomes. If both defect, they each receive $1,000. If one cooperates and the other defects, the defector gets $1,001,000 and the cooperator gets nothing. The agent and the twin know that they reason the same way, using the same considerations to come to their conclusions. However, their decisions are causally independent, made in separate rooms without communication. Should the agent cooperate with her twin?
An agent following the prescriptions of causal decision theory (a “CDT agent”) would defect, reasoning as follows: “My action will not affect that of my twin. No matter what action she takes, I win an extra thousand dollars by defecting. Defection dominates cooperation, so I defect.” She and her twin both reason in this way, and thus they both walk away with $1,000 Lewis:1979 ([29](#bib.bib29)).
By contrast, an FDT agent would cooperate, reasoning as follows: “My twin and I follow the same course of reasoning—this one. The question is how this very course of reasoning should conclude. If it concludes that cooperation is better, then we both cooperate and I get $1,000,000. If it concludes that defection is better, then we both defect and I get $1,000. I would be personally better off in the former case, so this course of reasoning hereby concludes cooperate.”
On its own, the ability to cooperate in the twin prisoner’s dilemma is nothing new. EDT also prescribes cooperation, on the grounds that it would be good news to learn that one had cooperated. However, FDT’s methods for achieving cooperation are very unlike EDT’s, bearing more resemblance to CDT’s methods. For that reason, FDT is able to achieve the benefits of both CDT and EDT, while avoiding EDT’s costs (a proposition that we will defend throughout this paper).
As we will see in section [5](#S5 "5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), the prescriptions of FDT agree with those of CDT and EDT in simple settings, where events correlate with the agent’s action iff they are caused by the agent’s action. They disagree when this condition fails, such as in settings with multiple agents who base their actions off of predictions about each other’s decisions. In these cases, FDT outperforms both CDT and EDT, as we will demonstrate by examining a handful of Newcomblike dilemmas.
FDT serves to resolve a number of longstanding questions in the theory of rational choice. It functions equally well in single-agent and multi-agent scenarios, providing a unified account of normative rationality for both decision theory and game theory. FDT agents attain high utility in a host of decision problems that have historically proven challenging to CDT and EDT: FDT outperforms CDT in Newcomb’s problem; EDT in the smoking lesion problem; and both in Parfit’s hitchhiker problem. They resist extortion in blackmail dilemmas, and they form successful voting coalitions in elections (while denying that voting is irrational). They assign non-negative value to information, and they have no need for ratification procedures or binding precommitments—they can simply adopt the optimal predisposition on the fly, whatever that may be.
Ideas reminiscent of FDT have been explored by many, including (Spohn:2012, [45](#bib.bib45), [32](#bib.bib32), [47](#bib.bib47), [12](#bib.bib12), [14](#bib.bib14), [19](#bib.bib19), ). FDT has been influenced by (Joyce:1999, [24](#bib.bib24), ), and borrows his representation theorem. It should come as no surprise that an agent can outperform both CDT and EDT as measured by utility achieved; this has been known for some time Gibbard:1978 ([20](#bib.bib20)). Our contribution is a single simple decision rule that allows an agent to do so in a principled and routinized manner. FDT does not require ad-hoc adjustments for each new decision problem one faces; and with this theory comes a new normative account of counterfactual reasoning that sheds light on a number of longstanding philosophical problems. FDT’s combination of theoretical elegance, practical feasibility, and philosophical plausibility makes it a true alternative to CDT and EDT.
In sections [2](#S2 "2 Newcomb’s Problem and the Smoking Lesion Problem ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")-[4](#S4 "4 Parfit’s Hitchhiker ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), we informally introduce FDT and compare it to CDT and EDT in three classic Newcomblike problems. In sections [5](#S5 "5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") and [6](#S6 "6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") we provide a more technical account of the differences between these three theories, lending precision to our argument for FDT. In sections [7](#S7 "7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") and [8](#S8 "8 Diagnosing CDT: Impossible Interventions ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") we diagnose the failings of EDT and CDT from a new perspective, and show that the failings of CDT are more serious than is generally supposed. Finally, in sections [9](#S9 "9 The Global Perspective ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") and [10](#S10 "10 Conclusion ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), we discuss philosophical motivations and open problems for functional decision theory.
2 Newcomb’s Problem and the Smoking Lesion Problem
---------------------------------------------------
Consider the following well-known dilemma, due to (Nozick:1969, [33](#bib.bib33), ):
######
Dilemma 2 (Newcomb’s Problem).
An agent finds herself standing in front of a transparent box labeled “A” that contains $1,000, and an opaque box labeled “B” that contains either $1,000,000 or $0. A reliable predictor, who has made similar predictions in the past and been correct 99% of the time, claims to have placed $1,000,000 in box B iff she predicted that the agent would leave box A behind. The predictor has already made her prediction and left. Box B is now empty or full. Should the agent take both boxes (“two-boxing”), or only box B, leaving the transparent box containing $1,000 behind (“one-boxing”)?
The standard formulation of EDT prescribes one-boxing Gibbard:1978 ([20](#bib.bib20)).222Specifically, EDT prescribes one-boxing unless the agent assigns prior probability 1 to the hypothesis that she will two-box. See section [6](#S6 "6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") for details. Evidential decision theorists reason: “It would be good news to learn that I had left the $1,000 behind; for then, with 99% probability, the predictor will have filled the opaque box with $1,000,000.” Reasoning thus, EDT agents will be quite predictable, the predictor will fill the box, and EDT agents will reliably walk away $1,000,000 richer.
CDT prescribes two-boxing. Causal decision theorists reason: “The predictor already made her prediction at some point in the past, and filled the boxes accordingly. Box B is sitting right in front of me! It already contains $1,000,000, or is already empty. My decisions now can’t change the past; so I should take both boxes and not throw away a free $1,000.” Reasoning thus—and having been predicted to reason thus—the CDT agent typically comes away with only $1,000.
If the story ended there, then we might conclude that EDT is the correct theory of how to maximize one’s utility. In Skyrms’ smoking lesion problem ([41](#bib.bib41)), however, the situation is reversed:
######
Dilemma 3 (The Smoking Lesion Problem).
An agent is debating whether or not to smoke. She knows that smoking is correlated with an invariably fatal variety of lung cancer, but the correlation is (in this imaginary world) entirely due to a common cause: an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer. Agents without this lesion contract lung cancer only 1% of the time, and an agent can neither directly observe nor control whether she suffers from the lesion. The agent gains utility equivalent to $1,000 by smoking (regardless of whether she dies soon), and gains utility equivalent to $1,000,000 if she doesn’t die of cancer. Should she smoke, or refrain?
Here, CDT outperforms EDT Gibbard:1978 ([20](#bib.bib20), [16](#bib.bib16)). Recognizing that smoking cannot affect lung cancer, the CDT agent smokes. The EDT agent forgoes the free 1,000 utility and studiously avoids smoking, reasoning: “If I smoke, then that is good evidence that I have a condition that also causes lung cancer; and I would hate to learn that I have lung cancer far more than I would like to learn that I smoked. So, even though I cannot *change* whether I have cancer, I will select the more auspicious option.”333The tickle defense of (Eells:1984, [15](#bib.bib15), ) or the ratification procedure of (Jeffrey:1983, [23](#bib.bib23), ) can be used to give a version of EDT that smokes in this problem (and two-boxes in Newcomb’s problem). However, the smoking lesion problem does reveal a fundamental weakness in EDT, and in section [7](#S7 "7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") we will examine problems where EDT fails for similar reasons, but where ratification and the tickle defense don’t help.
Imagine an agent that is going to face first Newcomb’s problem, and then the smoking lesion problem. Imagine measuring them in terms of utility achieved, by which we mean measuring them by how much utility *we* expect them to attain, on average, if they face the dilemma repeatedly. The sort of agent that we’d expect to do best, measured in terms of utility achieved, is the sort who one-boxes in Newcomb’s problem, and smokes in the smoking lesion problem. But (Gibbard:1978, [20](#bib.bib20), ) have argued that rational agents *can’t* consistently both one-box and smoke: they must either two-box and smoke, or one-box and refrain—options corresponding to CDT and EDT, respectively. In both dilemmas, a background process that is entirely unaffected by the agent’s physical action determines whether the agent gets $1,000,000; and then the agent chooses whether or not to take a sure $1,000. The smoking lesion problem appears to punish agents that attempt to affect the background process (like EDT), whereas Newcomb’s problem appears to punish agents that *don’t* attempt to affect the background process. At a glance, this appears to exhaust the available actions. How are we to square this with the intuition that the agent that *actually* does best is the one that one-boxes and smokes?
The standard answer is to argue that, contra this intuition, one-boxing is in fact irrational, and that Newcomb’s predictor rewards irrationality Joyce:1999 ([24](#bib.bib24)). The argument goes, roughly, as follows: Refraining from smoking is clearly irrational, and one-boxing is irrational by analogy—just as an agent can’t cause the lesion to appear by smoking, an agent can’t cause the amount of money in the box to change after the predictor has left. We shouldn’t think less of her for being put in a situation where agents with rational predispositions are punished.
In the context of a debate between two-boxing smokers and one-boxing refrainers, this is a reasonable enough critique of the one-boxing refrainer. Functional decision theorists, however, deny that these exhaust the available options. (Yudkowsky:2010:TDT, [47](#bib.bib47), ) provides an initial argument that it is possible to both one-box and smoke using the following thought experiment: Imagine offering a CDT agent two separate binding precommitments. The first binds her to take one box in Newcomb’s problem, and goes into effect before the predictor makes her prediction. The second binds her to refuse to smoke in the smoking lesion problem, and goes into effect before the cancer metastasizes. The CDT agent would leap at the first opportunity, and scorn the second Burgess:2004 ([10](#bib.bib10)). The first precommitment causes her to have a much higher chance of winning $1,000,000; but the second does not cause her to have a higher chance of survival. Thus, despite the apparent similarities, the two dilemmas must be different in a decision-relevant manner.
Where does the difference lie? It lies, we claim, in the difference between a carcinogenic lesion and a predictor. Newcomb’s predictor builds an accurate model of the agent and reasons about her behavior; a carcinogenic lesion does no such thing. If the predictor is reliable, then there is a sense in which the predictor’s prediction depends on which action the agent will take in the future; whereas we would not say that the lesion’s tendency to cause cancer depends (in the relevant sense) on whether the agent smokes. For a functional decision theorist, this asymmetry makes all the difference.
A functional decision theorist thinks of her decision process as an implementation of a fixed mathematical decision function—a collection of rules and methods for taking a set of beliefs and goals and selecting an action. She weighs her options by evaluating different hypothetical scenarios in which her decision function takes on different *logical outputs*. She asks, not “What if I used a different process to come to my decisions?”, but: “What if *this very decision process* produced a different conclusion?”
(Lewis:1979, [29](#bib.bib29), ) has shown that the psychological twin prisoner’s dilemma is decision-theoretically isomorphic to Newcomb’s problem. In that dilemma, the agent’s decision process is a function implemented by both prisoners. When a prisoner following FDT imagines the world in which she defects, she imagines a world in which the function she is computing outputs defect; and since her twin is computing the same decision function, she assumes the twin will also defect in that world. Similarly, when she visualizes cooperating, she visualizes a world in which her twin cooperates too. Her decision is sensitive to the fact that her twin is a *twin*; this information is not tossed out.
In the same way, the FDT agent in Newcomb’s problem is sensitive to the fact that her predictor is a *predictor*. In this case, the predictor is not literally implementing the same decision function as the agent, but the predictor does contain an accurate *mental representation* of the FDT agent’s decision function. In effect, the FDT agent has a “twin” inside the predictor’s mind. When the FDT agent imagines a world where she two-boxes, she visualizes a world where the predictor did not fill the box; and when she imagines a world where she one-boxes, she imagines a world where the box is full. She bases her decision solely on the appeal of these hypothetical worlds, and one-boxes.
By assumption, the predictor’s decision (like the twin’s decision) reliably corresponds to the agent’s decision. The agent and the predictor’s model of the agent are like two well-functioning calculators sitting on opposite sides of the globe. The calculators may differ in design, but if they are both well-functioning, then they will output equivalent answers to arithmetical questions.
Just as an FDT agent does not imagine that it is possible for 6288+1048628810486288+10486288 + 1048 to sum to one thing this week and another thing next week, she does not imagine that it is possible for her decision function to, on the same input, have one output today and another tomorrow. Thus, she does not imagine that there can be a difference between her action today and a sufficiently competent prediction about that action yesterday. In Newcomb’s problem, when she weighs her options, the only scenarios she considers as possibilities are “I one-box and the box is 99% likely to be full” and “I two-box and the box is 99% likely to be empty.” The first seems more appealing to her, so she one-boxes.
By contrast, she’s happy to imagine that there can be a difference between whether or not she smokes, and whether or not the cancer manifests. That correlation is merely statistical—the cancer is not evaluating the agent’s future decisions to decide whether or not to manifest. Thus, in the smoking lesion problem, when she weighs her options, the scenarios that she considers as possibilities are “I smoke (and have probability p𝑝pitalic\_p of cancer)” and “I refrain (and have the same probability p𝑝pitalic\_p of cancer).” In this case, smoking seems better, so she smokes.
By treating representations of predictor-like things and lesion-like things differently in the scenarios that she imagines, an FDT agent is able to one-box on Newcomb’s problem and smoke in the smoking lesion problem, using one simple rule that has no need for ratification or other probability kinematics.
We can then ask: Given that one is *able* to one-box in Newcomb’s problem, without need for precommitments, *should* one?
The standard defense of two-boxing is that Newcomb’s problem rewards irrationality. Indeed, it is always possible to construct dilemmas that reward bad decision theories. As an example, we can imagine a decision rule that says to pick the action that comes earliest in the alphabet (under some canonical encoding). In most dilemmas, this rule does poorly; but the rule fares well in scenarios where a reliable predictor rewards exactly the agents that follow this rule, and punishes everyone else. The causal decision theorist can argue that Newcomb’s problem is similarly constructed to reward EDT and FDT agents. On this view, we shouldn’t draw the general conclusion from this that one-boxing is correct, any more than we should draw the general conclusion from the alphabet dilemma that it is correct to follow “alphabetical decision theory.”
EDT’s irrationality, however, is independently attested by its poor performance in the smoking lesion problem. Alphabetical decision theory is likewise known antecedently to give bad answers in most other dilemmas. “This decision problem rewards irrationality” may be an adequate explanation of why an otherwise flawed theory performs well in isolated cases, but it would be question-begging to treat this as a stand-alone argument against a new theory. Here, the general rationality of one-boxing (and of FDT) is the very issue under dispute.
More to the point, the analogy between the alphabet dilemma and Newcomb’s problem is tenuous. Newcomb’s predictor is not filling boxes according to *how* the agent arrives at a decision; she is only basing her action on a prediction of the decision itself. While it is appropriate to call dilemmas unfair when they directly reward or punish agents for their decision procedures, we deny that there is anything unfair about rewarding or punishing agents for predictions about their *actions*. It is one thing to argue that agents have no say in what decision procedure they implement, and quite another thing to argue that agents have no say in what action they output. In short, Newcomb’s problem doesn’t punish rational agents; it punishes *two-boxers*. All an agent needs to do to get the high payout is predictably take one box. Thus, functional decision theorists claim that Newcomb’s problem is fair game. We will revisit this notion of fairness in section [9](#S9 "9 The Global Perspective ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
3 Subjunctive Dependence
-------------------------
The basic intuition behind FDT is that there is some respect in which predictor-like things depend upon an agent’s future action, and lesion-like things do not. We’ll call this kind of dependence *subjunctive dependence* to distinguish it from (e.g.) straightforward causal dependence and purely statistical dependence.
How does this kind of dependence work in the case of Newcomb’s problem? We can assume that the predictor builds a representation of the agent, be it a mental model, a set of notes on scratch-paper, or a simulation *in silico*. She then bases her prediction off of the properties of that representation. In the limiting case where the agent is deterministic and the representation is perfect, the model will always produce the same action as the agent. The behavior of the predictor and agent are interdependent, much like the outputs of two perfectly functioning calculators calculating the same sum.
If the predictor’s model is imperfect, or if the agent is nondeterministic, then the interdependence is weakened, but not eliminated. If one observes two calculators computing 6288+1048628810486288+10486288 + 1048, and one of them outputs the number 7336733673367336, one can be fairly sure that the other will also output 7336733673367336. One may not be certain—it’s always possible that one hallucinated the number, or that a cosmic ray struck the calculator’s circuitry in just the right way to change its output. Yet it will still be the case that one can reasonably infer things about one calculator’s behavior based on observing a different calculator. Just as the outputs of the calculator are logically constrained to be equivalent insofar as the calculator made no errors, the prediction and the action in Newcomb’s problem are logically constrained to be equivalent insofar as the the predictor made no errors.
When two physical systems are computing the same function, we will say that their behaviors “subjunctively depend” upon that function.444Positing a logical object that decision-makers implement (as opposed to just saying, for example, that interdependent decision-makers depend upon *each other*) is intended only to simplify exposition. We will maintain agnosticism about the metaphysical status of computations, universals, logical objects, etc.
At first glance, subjunctive dependence may appear to be rooted in causality via the mechanism of a common cause. Any two calculators on Earth are likely to owe their existence to many of the same historical events, if you go back far enough. On the other hand, if one discovered two well-functioning calculators performing the same calculation on different sides of the universe, one might instead suspect that humans and aliens had independently discovered the axioms of arithmetic. We can likewise imagine two different species independently developing (and adopting) the same normative decision theory. In these circumstances, we might speak of universals, laws of nature, or logical or mathematical structures that underlie or explain the convergence; but the relationship is not necessarily “causal.”
In fact, causal dependence is a special case of subjunctive dependence. Imagine that there is a scribe watching a calculator in Paris, waiting to write down what it outputs. What she writes is causally dependent on the output of the calculator, and also subjunctively dependent upon the output of the calculator—for if it output something different, she would write something different.
Mere statistical correlation, in contrast, does not entail subjunctive dependence. If the Parisian calculator is pink, and (by coincidence) another calculator in Tokyo is also pink, then there is no subjunctive dependency between their colors: if the scribe had bought a green calculator instead, the one in Tokyo would still be pink.
Using this notion of subjunctive dependence, we can define FDT by analogy: *Functional decision theory is to subjunctive dependence as causal decision theory is to causal dependence.*
A CDT agent weighs her options by considering scenarios where her action changes, and events causally downstream from her action change, but everything else is held fixed. An FDT agent weighs her options by considering scenarios where the output of her decision function changes, and events that subjunctively depend on her decision function’s output change, but everything else is held fixed. In terms of Peirce’s type-token distinction ([37](#bib.bib37)), we can say that a CDT agent intervenes on the token “my action is a𝑎aitalic\_a,” whereas an FDT agent intervenes on the *type*.
If a certain decision function outputs cooperate on a certain input, then it does so of logical necessity; there is no possible world in which it outputs defect on that input, any more than there are possible worlds where 6288+1048≠73366288104873366288+1048\neq 73366288 + 1048 ≠ 7336. The above notion of subjunctive dependence therefore requires FDT agents to evaluate counterpossibilities, in the sense of (Cohen:1990, [11](#bib.bib11), ), where the antecedents run counter-to-logic. At first glance this may seem undesirable, given the lack of a satisfactory account of counterpossible reasoning. This lack is the main drawback of FDT relative to CDT at this time; we will discuss it further in section [5](#S5 "5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
In attempting to avoid this dependency on counterpossible conditionals, one might suggest a variant FDT′′{}^{\prime}start\_FLOATSUPERSCRIPT ′ end\_FLOATSUPERSCRIPT that asks not “What if my decision function had a different output?” but rather “What if I made my decisions using a different decision function?” When faced with a decision, an FDT′′{}^{\prime}start\_FLOATSUPERSCRIPT ′ end\_FLOATSUPERSCRIPT agent would iterate over functions fnsubscript𝑓𝑛f\_{n}italic\_f start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT from some set ℱℱ\mathcal{F}caligraphic\_F, consider how much utility she would achieve if she implemented that function instead of her actual decision function, and emulate the best fnsubscript𝑓𝑛f\_{n}italic\_f start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT. Her actual decision function d𝑑ditalic\_d is the function that iterates over F𝐹Fitalic\_F, and d∉ℱ𝑑ℱd\not\in\mathcal{F}italic\_d ∉ caligraphic\_F.
However, by considering the behavior of FDT′′{}^{\prime}start\_FLOATSUPERSCRIPT ′ end\_FLOATSUPERSCRIPT in Newcomb’s problem, we see that it does not save us any trouble. For the predictor predicts the output of d𝑑ditalic\_d, and in order to preserve the desired correspondence between predicted actions and predictions, FDT′′{}^{\prime}start\_FLOATSUPERSCRIPT ′ end\_FLOATSUPERSCRIPT cannot simply imagine a world in which she implements fnsubscript𝑓𝑛f\_{n}italic\_f start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT instead of d𝑑ditalic\_d; she must imagine a world in which all predictors of d𝑑ditalic\_d predict as if d𝑑ditalic\_d behaved like fnsubscript𝑓𝑛f\_{n}italic\_f start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT—and then we are right back to where we started, with a need for some method of predicting how an algorithm would behave if (counterpossibly) d𝑑ditalic\_d behaved different from usual.
Instead of despairing at the dependence of FDT on counterpossible reasoning, we note that the difficulty here is technical rather than philosophical. Human mathematicians are able to reason quite comfortably in the face of uncertainty about logical claims such as “the twin prime conjecture is false,” despite the fact that either this sentence or its negation is likely a contradiction, demonstrating that the task is not impossible. Furthermore, FDT agents do not need to evaluate counterpossibilities in full generality; they only need to reason about questions like “How would this predictor’s prediction of my action change if the FDT algorithm had a different output?” This task may be easier. Even if not, we again observe that human reasoners handle this problem fairly well: humans have some ability to notice when they are being predicted, and to think about the implications of their actions on other people’s predictions. While we do not yet have a satisfying account of how to perform counterpossible reasoning in practice, the human brain shows that reasonable heuristics exist.
Refer to (Bennett:1974, [6](#bib.bib6), [31](#bib.bib31), [11](#bib.bib11), [8](#bib.bib8), [7](#bib.bib7), [9](#bib.bib9), ) for a sample of discussion and research of counterpossible reasoning. Refer to (Garber:1983, [18](#bib.bib18), [17](#bib.bib17), [13](#bib.bib13), [22](#bib.bib22), [1](#bib.bib1), [38](#bib.bib38), ) for a sample of discussion and research into inference in the face of uncertainty about logical facts.
Ultimately, our interest here isn’t in the particular method an agent uses to identify and reason about subjunctive dependencies. The important assumption behind our proposed decision theory is that when an FDT agent imagines herself taking two different actions, she imagines corresponding changes in all and only those things that subjunctively depend on the output of her decision-making procedure. When she imagines switching from one-boxing to two-boxing in Newcomb’s problem, for example, she imagines the predictor’s prediction changing to match. So long as that condition is met, we can for the moment bracket the question of exactly how she achieves this feat of imagination.
4 Parfit’s Hitchhiker
----------------------
FDT’s novelty is more obvious in dilemmas where FDT agents outperform CDT *and* EDT agents, rather than just one or the other. Consider Parfit’s hitchhiker problem ([34](#bib.bib34)):
######
Dilemma 4 (Parfit’s Hitchhiker Problem).
An agent is dying in the desert. A driver comes along who offers to give the agent a ride into the city, but only if the agent will agree to visit an ATM once they arrive and give the driver $1,000. The driver will have no way to enforce this after they arrive, but she does have an extraordinary ability to detect lies with 99% accuracy. Being left to die causes the agent to lose the equivalent of $1,000,000. In the case where the agent gets to the city, should she proceed to visit the ATM and pay the driver?
The CDT agent says no. Given that she has safely arrived in the city, she sees nothing further to gain by paying the driver. The EDT agent agrees: on the assumption that she is already in the city, it would be bad news for her to learn that she was out $1,000. Assuming that the CDT and EDT agents are smart enough to know what they would do upon arriving in the city, this means that neither can honestly claim that they would pay. The driver, detecting the lie, leaves them in the desert to die Hintze:2014 ([21](#bib.bib21)).
The prescriptions of CDT and EDT here run contrary to many people’s intuitions, which say that the most “rational” course of action is to pay upon reaching the city. Certainly if these agents had the opportunity to make binding precommitments to pay upon arriving, they would achieve better outcomes.
Consider next the following variant of Newcomb’s problem, due to (Drescher:2006, [14](#bib.bib14), ):
######
Dilemma 5 (The Transparent Newcomb Problem).
Events transpire as they do in Newcomb’s problem, except that this time both boxes are transparent—so the agent can see exactly what decision the predictor made before making her own decision. The predictor placed $1,000,000 in box B iff she predicted that the agent would leave behind box A (which contains $1,000) upon seeing that both boxes are full. In the case where the agent faces two full boxes, should she leave the $1,000 behind?
Here, the most common view is that the rational decision is to two-box. CDT prescribes two-boxing for the same reason it prescribes two-boxing in the standard variant of the problem: whether or not box B is full, taking the extra $1,000 has better consequences. EDT also prescribes two-boxing here, because *given* that box B is full, an agent does better by taking both boxes.
FDT, on the other hand, prescribes one-boxing, even when the agent knows for sure that box B is full! We will examine how and why FDT behaves this way in more detail in section [6](#S6 "6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
Before we write off FDT’s decision here as a misstep, however, we should note that one-boxing in the transparent Newcomb problem *is precisely equivalent to paying the driver in Parfit’s hitchhiker problem*.
The driver assists the agent (at a value of $1,000,000) iff she predicts that the agent will pay $1,000 upon finding herself in the city. The predictor fills box B (at a value of $1,000,000) iff she predicts that the agent will leave behind $1,000 upon finding herself facing two full boxes. Why, then, do some philosophers intuit that we should pay the hitchhiker but take both boxes?
It could be that this inconsistency stems from how the two problems are framed. In Newcomb’s problem, the money left sitting in the second box is described in vivid sensory terms; whereas in the hitchhiker problem, dying in the desert is the salient image. Or perhaps our intuitions are nudged by the fact that Parfit’s hitchhiker problem has *moral* overtones. You’re not just outwitting a predictor; you’re betraying someone who saved your life.
Whatever the reason, it is hard to put much faith in decision-theoretic intuitions that are so sensitive to framing effects. One-boxing in the transparent Newcomb problem may seem rather strange, but there are a number of considerations that weigh strongly in its favor.
#### Argument from precommitment:
CDT and EDT agents would both precommit to one-boxing if given advance notice that they were going to face a transparent Newcomb problem. If it is rational to precommit to something, then it should also be rational to predictably *behave as though* one has precommitted. For practical purposes, it is the action itself that matters, and an agent that predictably acts as she would have precommitted to act tends to get rich.
#### Argument from the value of information:
It is a commonplace in economic theory that a fully rational agent should never expect receiving new information to make her worse off. An EDT agent would pay for the opportunity to blindfold herself so that she can’t see whether box B is full, knowing that that information would cause her harm. Functional decision theorists, for their part, do not assign negative expected value to information (as a side effect of always acting as they would have precommitted to act).
#### Argument from utility:
One-boxing in the transparent Newcomb problem may look strange, but it *works*. Any predictor smart enough to carry out the arguments above can see that CDT and EDT agents two-box, while FDT agents one-box. Followers of CDT and EDT will therefore almost always see an empty box, while followers of FDT will almost always see a full one. Thus, FDT agents achieve more utility in expectation.
Expanding on the argument from precommitment, we note that precommitment requires foresight and planning, and can require the expenditure of resources—relying on ad-hoc precommitments to increase one’s expected utility is inelegant, expensive, and impractical. Rather than needing to carry out peculiar rituals in order to achieve the highest-utility outcome, FDT agents simply act as they would have ideally precommitted to act.
Another way of articulating this intuition is that we would expect the correct decision theory to endorse its own use. CDT, however, does not: According to CDT, an agent should (given the opportunity) self-modify to stop using CDT in future dilemmas, but continue using it in any ongoing Newcomblike problems Arntzenius:2008 ([5](#bib.bib5), [43](#bib.bib43)). Some causal decision theorists, such as (Burgess:2004, [10](#bib.bib10), ) and Joyce (in a 2015 personal conversation), bite this bullet and hold that temporal inconsistency is rational. We disagree with this line of reasoning, preferring to reserve the word “rational” for decision procedures that are endorsed by our best theory of normative action. On this view, a decision theory that (like CDT) advises agents to change their decision-making methodology as soon as possible can be lauded for its ability to recognize its own flaws, but is not a strong candidate for the normative theory of rational choice.
Expanding on the argument from the value of information, we view new information as a tool. It is possible to be fed lies or misleading truths; but if one *expects* information to be a lie, then one can simply disregard the information in one’s decision-making. It is therefore alarming that EDT agents can *expect* to suffer from learning more about their environments or dispositions, as described by (Skyrms:1982, [42](#bib.bib42), [5](#bib.bib5), ).
Expanding on the final argument, proponents of EDT, CDT, and FDT can all agree that it would be great news to hear that a beloved daughter adheres to FDT, because FDT agents get more of what they want out of life. Would it not then be strange if the correct theory of rationality were some *alternative* to the theory that produces the best outcomes, as measured in utility? (Imagine hiding decision theory textbooks from loved ones, lest they be persuaded to adopt the “correct” theory and do worse thereby!)
We consider this last argument—the argument from utility—to be the one that gives the precommitment and value-of-information arguments their teeth. If self-binding or self-blinding *were* important for getting more utility in certain scenarios, then we would plausibly endorse those practices. Utility has primacy, and FDT’s success on that front is the reason we believe that FDT is a more useful and general theory of rational choice.
The causal decision theorist’s traditional response to this line of reasoning has been to appeal to decision-theoretic dominance. An action a𝑎aitalic\_a is said to dominate an action b𝑏bitalic\_b if, holding constant the rest of the world’s state, switching from b𝑏bitalic\_b to a𝑎aitalic\_a is sometimes better (and never worse) than sticking with b𝑏bitalic\_b. (Nozick:1969, [33](#bib.bib33), ) originally framed Newcomb’s problem as a conflict between the goal of maximizing utility and the goal of avoiding actions that are dominated by other available actions. If CDT and EDT were the only two options available, a case could be made that CDT is preferable on this framing. Both theories variously fail at the goal of utility maximization (CDT in Newcomb’s problem, EDT in the smoking lesion problem), so it would seem that we must appeal to some alternative criterion in order to choose between them; and dominance is an intuitive criterion to fall back on.
As we will see in section [6](#S6 "6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), however, FDT comes with its own dominance principle (analogous to that of CDT), according to which FDT agents tend to achieve higher utility *and* steer clear of dominated actions. More important than FDT’s possession of an alternative dominance principle, however, is that in FDT we at last have a general-purpose method for achieving the best real-world outcomes. If we have a strictly superior decision theory on the metric of utility, then we don’t need to fall back on the notion of dominance to differentiate between competing theories.
It is for this reason that we endorse one-boxing in the transparent Newcomb problem. When we imagine ourselves facing two full boxes, we feel some of the intuitive force behind the idea that an agent “could” break free of the shackles of determinism and two-box upon seeing that both boxes are full. But in 99 cases out of 100, *the kind of agent* that is inclined to conditionally give in to that temptation will actually find herself staring at an empty box B. Parfit’s desert is stacked high with the corpses of such agents.
5 Formalizing EDT, CDT, and FDT
--------------------------------
All three of EDT, CDT, and FDT are expected utility theories, meaning that they prescribe maximizing expected utility, which we can define \mkbibparensdrawing from (Gibbard:1978, [20](#bib.bib20), )555Early formalizations of decision theory date back to (Ramsey:1931, [39](#bib.bib39), ), (Von-Neumann:1944, [46](#bib.bib46), ), (Savage:1954, [40](#bib.bib40), ), and (Jeffrey:1983, [23](#bib.bib23), ). as executing an action a𝑎aitalic\_a that maximizes
| | | | |
| --- | --- | --- | --- |
| | ℰ𝒰(a)≔∑j=1NP(a↪oj;x)⋅𝒰(oj),≔ℰ𝒰𝑎superscriptsubscript𝑗1𝑁⋅𝑃↪𝑎subscript𝑜𝑗𝑥𝒰subscript𝑜𝑗\mathcal{EU}(a)\coloneqq\sum\_{j=1}^{N}P(a\hookrightarrow o\_{j};\,x)\cdot\mathcal{U}(o\_{j}),caligraphic\_E caligraphic\_U ( italic\_a ) ≔ ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT italic\_P ( italic\_a ↪ italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ; italic\_x ) ⋅ caligraphic\_U ( italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) , | | (1) |
where o1,o2,o3…subscript𝑜1subscript𝑜2subscript𝑜3…o\_{1},o\_{2},o\_{3}\ldotsitalic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT … are the possible outcomes from some countable set 𝒪𝒪\mathcal{O}caligraphic\_O; a𝑎aitalic\_a is an action from some finite set 𝒜𝒜\mathcal{A}caligraphic\_A; x𝑥xitalic\_x is an observation history from some countable set 𝒳𝒳\mathcal{X}caligraphic\_X; P(a↪oj;x)𝑃↪𝑎subscript𝑜𝑗𝑥{P(a\hookrightarrow o\_{j};\,x)}italic\_P ( italic\_a ↪ italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ; italic\_x ) is the probability that ojsubscript𝑜𝑗o\_{j}italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT will obtain in the hypothetical scenario where the action a𝑎aitalic\_a is executed after receiving observations x𝑥xitalic\_x; and 𝒰𝒰\mathcal{U}caligraphic\_U is a real-valued utility function bounded in such a way that ([1](#S5.E1 "1 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) is always finite.
By the representation theorem of (Joyce:1999, [24](#bib.bib24), ), any agent with a conditional preference ranking and a conditional likelihood function satisfying Joyce’s axioms will *want* to maximize equation ([1](#S5.E1 "1 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), given some set of constraints on the values of P(−↪−;x)P(-\hookrightarrow-;\,x)italic\_P ( - ↪ - ; italic\_x ), where those constraints are a free parameter—to quote Joyce, “decision theories should not be seen as offering competing theories of value, but as disagreeing about the epistemic perspective from which actions are to be evaluated.” EDT, CDT, and FDT can be understood as competing attempts to maximize expected utility by supplying different “epistemic perspectives” in the form of differing interpretations of “↪↪\hookrightarrow↪”, the connective for decision hypotheticals.
According to evidential decision theorists, “↪↪\hookrightarrow↪” should be interpreted as simple Bayesian conditionalization, with P(a↪oj;x)𝑃↪𝑎subscript𝑜𝑗𝑥{P(a\hookrightarrow o\_{j};\,x)}italic\_P ( italic\_a ↪ italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ; italic\_x ) standing for P(oj∣x,a)𝑃conditionalsubscript𝑜𝑗𝑥𝑎{P(o\_{j}\mid x,a)}italic\_P ( italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ∣ italic\_x , italic\_a ).666By P(oj∣x,a)𝑃conditionalsubscript𝑜𝑗𝑥𝑎P(o\_{j}\mid x,a)italic\_P ( italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ∣ italic\_x , italic\_a ) we mean P(Outcome=oj∣Obs=x,Act=a)P(\text{{Outcome}}=o\_{j}\mid\text{{Obs}}=x,\text{{Act}}=a)italic\_P ( Outcome = italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ∣ Obs = italic\_x , Act = italic\_a ), where Outcome is an 𝒪𝒪\mathcal{O}caligraphic\_O-valued random variable representing the outcome; Obs is a 𝒳𝒳\mathcal{X}caligraphic\_X-valued random variable representing the observation history; and Act is an 𝒜𝒜\mathcal{A}caligraphic\_A-valued random variable representing the agent’s action. We write variable names like This and values like this. We omit variable names when it is possible to do so unambiguously. Causal and functional decision theorists, on the other hand, insist that conditionals are not counterfactuals. Consider a simple dilemma where a rational agent must choose whether to pick up a $1 bill or a $100 bill (but not both). *Conditional* on her picking up the $1, she probably had a good reason—maybe there is a strange man nearby buying $1 bills for $1,000. But if *counterfactually* she took the $1 instead of the $100, she would simply be poorer. Causal and functional decision theorists both agree that rational agents should use counterfactual considerations to guide their actions, though they disagree only about which counterfactuals to consider. (Adams:1970, [2](#bib.bib2), ), (Lewis:1981, [30](#bib.bib30), ), and many others have spilled considerable ink discussing this point, so we will not belabor it.
Since Newcomblike problems invoke predictors who build accurate predictions of different agents’ reasoning, we will find it helpful to not only ask which theory an agent follows, but *how* she follows that theory’s prescriptions.
In general terms, an agent doing her best to follow the prescriptions of one of these three theories should maintain a world-model M𝑀Mitalic\_M—which might be a collection of beliefs and intuitions in her head, a full Bayesian probability distribution updated on her observations thus far, an approximate distribution represented by a neural network, or something else entirely.
When making a decision, for each action a𝑎aitalic\_a, the agent should modify M𝑀Mitalic\_M to construct an object M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ representing what the world would look like if she took that action.777In the authors’ preferred formalization of FDT, agents actually iterate over *policies* (mappings from observations to actions) rather than actions. This makes a difference in certain multi-agent dilemmas, but will not make a difference in this paper. We will think of M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ as a mental image in the agent’s head of how the world might look if she chooses a𝑎aitalic\_a, though one could also think of it as (e.g.) a table giving probabilities to each outcome ojsubscript𝑜𝑗o\_{j}italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT, or an approximate summary of that table. She should then use each M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ to calculate a value Vasubscript𝑉𝑎V\_{a}italic\_V start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT representing her expected utility if she takes a𝑎aitalic\_a, and take the action corresponding to the highest value of Vasubscript𝑉𝑎V\_{a}italic\_V start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT.
We call M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ the agent’s “hypothetical for a𝑎aitalic\_a.” On this treatment, hypotheticals are mental or computational objects used as tools for evaluating the expected utility of actions, not (for example) mind-independent possible worlds. Hypotheticals are not themselves beliefs; they are decision-theoretic instruments constructed from beliefs. We think of hypotheticals like we think of notes on scratch paper: transient, useful, and possibly quite representationally thin.
From this perspective, the three decision theories differ only in two ways: how they prescribe representing M𝑀Mitalic\_M, and how they prescribe constructing hypotheticals M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ from M𝑀Mitalic\_M. For example, according to EDT, M𝑀Mitalic\_M should be simply be P(−∣x)P(-\mid x)italic\_P ( - ∣ italic\_x ), a Bayesian probability distribution describing the agent’s beliefs about the world updated on observation; and M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ should be constructed by conditioning P(−∣x)P(-\mid x)italic\_P ( - ∣ italic\_x ) on a𝑎aitalic\_a.
Formally, defining a variable V≔𝒰(Outcome)≔V𝒰Outcome\text{{V}}\coloneqq\mathcal{U}(\text{{Outcome}})V ≔ caligraphic\_U ( Outcome ) and writing 𝔼𝔼\mathbb{E}blackboard\_E for expectation with respect to P𝑃Pitalic\_P, EDT prescribes the action
| | | | |
| --- | --- | --- | --- |
| | EDT(P,x)≔argmaxa∈𝒜𝔼(V∣Obs=x,Act=a),≔EDT𝑃𝑥subscriptargmax𝑎𝒜𝔼formulae-sequenceconditionalVObs𝑥Act𝑎\mathrm{EDT}(P,x)\coloneqq\operatorname\*{argmax\;}\_{a\in\mathcal{A}}\mathbb{E}\left(\text{{V}}\mid\text{{Obs}}=x,\text{{Act}}=a\right),roman\_EDT ( italic\_P , italic\_x ) ≔ start\_OPERATOR roman\_argmax end\_OPERATOR start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT blackboard\_E ( V ∣ Obs = italic\_x , Act = italic\_a ) , | | (2) |
where it is understood that if an action a𝑎aitalic\_a has probability 00 then it is not considered. If the agent makes no observations during a decision problem, then we will omit x𝑥xitalic\_x and write, e.g., EDT(P)EDT𝑃\mathrm{EDT}(P)roman\_EDT ( italic\_P ).
Equation ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) can be read in at least three interesting ways. It can be seen as a Joyce-style constraint on the value of P(a↪oj;x)𝑃↪𝑎subscript𝑜𝑗𝑥P(a\hookrightarrow o\_{j};\,x)italic\_P ( italic\_a ↪ italic\_o start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ; italic\_x ). It can be seen as advice saying what information the agent needs to make their world-model M𝑀Mitalic\_M (a probability distribution) and how they should construct hypotheticals M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ (Bayesian conditionalization). Finally, it can be seen as a step-by-step procedure or algorithm that could in principle be scrupulously followed by a human actor, or programmed into a decision-making machine: take P𝑃Pitalic\_P and x𝑥xitalic\_x as input, compute 𝔼(V∣x,a)𝔼conditionalV𝑥𝑎{\mathbb{E}(\text{{V}}\mid x,a)}blackboard\_E ( V ∣ italic\_x , italic\_a ) for each action a𝑎aitalic\_a, and execute the a𝑎aitalic\_a corresponding to the highest value.
As an aside, note that equation ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) does not address the question of how a consistent initial belief state P𝑃Pitalic\_P is constructed. This may be nontrivial if the agent has beliefs about her own actions, in which case P𝑃Pitalic\_P needs to assign probabilities to claims about EDT(P,x)EDT𝑃𝑥\mathrm{EDT}(P,x)roman\_EDT ( italic\_P , italic\_x ), possibly requiring ratification procedures à la (Jeffrey:1983, [23](#bib.bib23), ).
According to CDT, a rational agent’s M𝑀Mitalic\_M should be not just a distribution P𝑃Pitalic\_P, but P𝑃Pitalic\_P augmented with additional data describing the causal structure of the world; and M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪ should be constructed from M𝑀Mitalic\_M by performing a causal intervention on P𝑃Pitalic\_P that sets Act=aAct𝑎{\text{{Act}}=a}Act = italic\_a. There is some debate about exactly how causal structure should be represented and how causal interventions should be carried out, going back to at least (Lewis:1973, [28](#bib.bib28), ). Pearl’s graphical approach ([35](#bib.bib35)) is perhaps the most complete account, and it is surely among the easiest to formalize, so we will use it here.888In this paper, we use graphical formulations of CDT and FDT because they are simple, formal, and easy to visualize. However, they are not the only way to formalize the two theories, and there are causal and functional decision theorists who don’t fully endorse the equations developed in this section. The only features of graphical models that we rely upon are the independence relations that they encode. For example, our argument that CDT two-boxes relies on graphs only insofar as the graph says that the agent’s action is causally independent from the prediction of the predictor. Any formalization of CDT (graphical or otherwise) that respects the independence relationships in our graphs will agree with our conclusions.
In Pearl’s formulation, M𝑀Mitalic\_M is a “causal theory,” which, roughly speaking, is a pair (P,G)𝑃𝐺(P,G)( italic\_P , italic\_G ) where G𝐺Gitalic\_G is a graph describing the direction of causation in the correlations between the variables of P𝑃Pitalic\_P. To go from M𝑀Mitalic\_M to M↪a𝑀↪aM\textsuperscript{$a\!\hookrightarrow$}italic\_M italic\_a ↪, Pearl defines an operator 𝚍𝚘𝚍𝚘\mathtt{do}typewriter\_do. Again speaking roughly, P(−∣𝚍𝚘(Var=𝑣𝑎𝑙))P(-\mid\mathtt{do}(\text{{Var}}=\text{{val}}))italic\_P ( - ∣ typewriter\_do ( Var = val ) ) is a modified version of P𝑃Pitalic\_P in which all variables that are causally downstream from Var (according to G𝐺Gitalic\_G) are updated to reflect Var=𝑣𝑎𝑙Var𝑣𝑎𝑙\text{{Var}}=\text{{val}}Var = val, and all other variables are left untouched.
PredispositionActPredictionAccurateBox BObsOutcomeV
Figure 1: A causal graph for CDT agents facing Newcomb’s problem. The agent observes the double-bordered node (which is, in this case, unused), intervenes on the rectangular node, and calculates utility using the diamond node.
For example, consider figure [1](#S5.F1 "Figure 1 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), which gives a causal graph for Newcomb’s problem. It must be associated with a probability distribution P𝑃Pitalic\_P that has a variable for each node in the graph, such that Act is a function of Predisposition, Outcome is a function of Act and Box B, and so on.
Graphically, the operation 𝚍𝚘(Act=𝑜𝑛𝑒𝑏𝑜𝑥)𝚍𝚘Act𝑜𝑛𝑒𝑏𝑜𝑥\mathtt{do}(\text{{Act}}=\text{{onebox}})typewriter\_do ( Act = onebox ) begins by setting Act=𝑜𝑛𝑒𝑏𝑜𝑥Act𝑜𝑛𝑒𝑏𝑜𝑥\text{{Act}}=\text{{onebox}}Act = onebox without changing any other variables. It then follows arrows outwards, and recomputes the values of any node it finds, substituting in the values of variables that it has affected so far. For example, when it gets to Outcome, it updates it to o=Outcome(𝑜𝑛𝑒𝑏𝑜𝑥,Box B)𝑜Outcome𝑜𝑛𝑒𝑏𝑜𝑥Box Bo=\text{{Outcome}}(\text{{onebox}},\text{{Box B}})italic\_o = Outcome ( onebox , Box B ); and when it gets to V it updates it to V(o)V𝑜\text{{V}}(o)V ( italic\_o ). Any variable that is not downstream from Act is unaffected. We can visualize 𝚍𝚘(Act=𝑜𝑛𝑒𝑏𝑜𝑥)𝚍𝚘Act𝑜𝑛𝑒𝑏𝑜𝑥\mathtt{do}(\text{{Act}}=\text{{onebox}})typewriter\_do ( Act = onebox ) as severing the correlation between Act and Predisposition and then performing a normal Bayesian update; or we can visualize it as updating Act and everything that it causally affects while holding everything else fixed. Refer to (Pearl:1996, [35](#bib.bib35), ) for the formal details.
Pearl’s formulation yields the following equation for CDT:
| | | | |
| --- | --- | --- | --- |
| | CDT(P,G,x)≔argmaxa∈𝒜𝔼(V|𝚍𝚘(Act=a),Obs=x).\mathrm{CDT}(P,G,x)\coloneqq\operatorname\*{argmax\;}\_{a\in\mathcal{A}}\mathbb{E}\left(\text{{V}}\;\middle|\;\mathtt{do}(\text{{Act}}=a),\text{{Obs}}=x\right).roman\_CDT ( italic\_P , italic\_G , italic\_x ) ≔ start\_OPERATOR roman\_argmax end\_OPERATOR start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT blackboard\_E ( V | typewriter\_do ( Act = italic\_a ) , Obs = italic\_x ) . | | (3) |
As before, this equation can be interpreted as a Joyce-style constraint, or as advice about how to construct hypotheticals from a world-model, or as a step-by-step decision procedure. We leave aside the question of how to construct G𝐺Gitalic\_G from observation and experiment; it is examined in detail by (Pearl:2000, [36](#bib.bib36), ).
In CDT hypotheticals, some correlations in P𝑃Pitalic\_P are preserved but others are broken. For any variable Z𝑍Zitalic\_Z, correlations between Act and Z𝑍Zitalic\_Z are preserved iff G𝐺Gitalic\_G says that the correlation is caused by Act—all other correlations are severed by the 𝚍𝚘𝚍𝚘\mathtt{do}typewriter\_do operator. FDT’s hypotheticals are also constructed via surgery on a world-model that preserves some correlations and breaks others. What distinguishes these two theories is that where a CDT agent performs surgery on a variable Act representing a *physical* fact (about how her body behaves), an FDT agent performs the surgery on a variable representing a *logical* fact (about how her decision function behaves).
For a functional decision theorist, P𝑃Pitalic\_P must contain variables representing the outputs of different mathematical functions. There will for example be a variable representing the value of 6288+1048628810486288+10486288 + 1048, one that (hopefully) has most of its probability on 7336733673367336. A functional decision theorist with world-model M𝑀Mitalic\_M and observations x𝑥xitalic\_x calculates FDT(M,x)FDT𝑀𝑥\mathrm{FDT}(M,x)roman\_FDT ( italic\_M , italic\_x ) by intervening on a variable fdt(M¯,x¯)fdt¯𝑀¯𝑥\text{{fdt}}(\underline{M},\underline{x})fdt ( under¯ start\_ARG italic\_M end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) in M𝑀Mitalic\_M that represents the output of FDTFDT\mathrm{FDT}roman\_FDT when run on inputs M𝑀Mitalic\_M and x𝑥xitalic\_x. Here we use underlines to represent dequoting, i.e., if x≔3≔𝑥3x\coloneqq 3italic\_x ≔ 3 then Zx¯Z¯𝑥\text{{Z}}\underline{x}Z under¯ start\_ARG italic\_x end\_ARG denotes the variable name Z3Z3\text{{Z}}3Z 3. Note the self-reference here: the model M𝑀Mitalic\_M contains variables for many different mathematical functions, and the FDT algorithm singles out a variable fdt(M¯,x¯)fdt¯𝑀¯𝑥\text{{fdt}}(\underline{M},\underline{x})fdt ( under¯ start\_ARG italic\_M end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) in the model whose name depends on the model. This self-reference is harmless: an FDT agent does not need to know the *value* of FDT(M,x)FDT𝑀𝑥\mathrm{FDT}(M,x)roman\_FDT ( italic\_M , italic\_x ) in order to have a variable fdt(M¯,x¯)fdt¯𝑀¯𝑥\text{{fdt}}(\underline{M},\underline{x})fdt ( under¯ start\_ARG italic\_M end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) *representing* that value, and Kleene’s second recursion theorem shows how to construct data structures that contain and manipulate accurate representations of themselves Kleene:1938 ([27](#bib.bib27)), via a technique known as “quining”.
Instead of a 𝚍𝚘𝚍𝚘\mathtt{do}typewriter\_do operator, FDT needs a 𝚝𝚛𝚞𝚎𝚝𝚛𝚞𝚎\mathtt{true}typewriter\_true operator, which takes a logical sentence ϕitalic-ϕ\phiitalic\_ϕ and updates P𝑃Pitalic\_P to represent the scenario where ϕitalic-ϕ\phiitalic\_ϕ is true. For example, P(Riemann∣𝚝𝚛𝚞𝚎(¬TPC))𝑃conditionalRiemann𝚝𝚛𝚞𝚎TPC{P(\text{{Riemann}}\mid\mathtt{true}(\lnot\text{{TPC}}))}italic\_P ( Riemann ∣ typewriter\_true ( ¬ TPC ) ) might denote the agent’s subjective probability that the Riemann hypothesis would be true if (counterfactually) the twin prime conjecture were false. Then we could say that
| | | |
| --- | --- | --- |
| | FDT\*(P,x)=argmaxa∈𝒜𝔼(V∣𝚝𝚛𝚞𝚎(fdt(P¯,x¯)=a)).superscriptFDT𝑃𝑥subscriptargmax𝑎𝒜𝔼conditionalV𝚝𝚛𝚞𝚎fdt¯𝑃¯𝑥𝑎\mathrm{FDT}^{\*}(P,x)=\operatorname\*{argmax\;}\_{a\in\mathcal{A}}\mathbb{E}\left(\text{{V}}\mid\mathtt{true}(\text{{fdt}}(\underline{P},\underline{x})=a)\right).roman\_FDT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_P , italic\_x ) = start\_OPERATOR roman\_argmax end\_OPERATOR start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT blackboard\_E ( V ∣ typewriter\_true ( fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) = italic\_a ) ) . | |
Unfortunately, it’s not clear how to define a 𝚝𝚛𝚞𝚎𝚝𝚛𝚞𝚎\mathtt{true}typewriter\_true operator. Fortunately, we don’t have to. Just as CDT requires that P𝑃Pitalic\_P come augmented with information about the causal structure of the world, FDT can require that P𝑃Pitalic\_P come augmented with information about the logical, mathematical, computational, causal, etc. structure of the world more broadly. Given a graph G𝐺Gitalic\_G that tells us how changing a logical variable affects all other variables, we can re-use Pearl’s 𝚍𝚘𝚍𝚘\mathtt{do}typewriter\_do operator to give a decision procedure for FDT:999This is not the only way to formalize a 𝚝𝚛𝚞𝚎𝚝𝚛𝚞𝚎\mathtt{true}typewriter\_true operator. Some functional decision theorists hope that the study of counterpossibilities will give rise to a method for conditioning a distribution on logical facts, allowing one to define FDT(P,x)=argmaxa𝔼(V|fdt(M¯,x¯)=a),\mathrm{FDT}(P,x)=\operatorname\*{argmax\;}\_{\!\!a}\mathbb{E}\left(\text{{V}}\;\middle|\;\text{{fdt}}(\underline{M},\underline{x})=a\right),roman\_FDT ( italic\_P , italic\_x ) = start\_OPERATOR roman\_argmax end\_OPERATOR start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT blackboard\_E ( V | fdt ( under¯ start\_ARG italic\_M end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) = italic\_a ) , an evidential version of FDT. We currently lack a formal definition of conditional probabilities that can be used with false logical sentences (such as fdt(M¯,x¯)=a2fdt¯𝑀¯𝑥subscript𝑎2\text{{fdt}}(\underline{M},\underline{x})=a\_{2}fdt ( under¯ start\_ARG italic\_M end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) = italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT when in fact it equals a1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT). Thus, for the purposes of this paper, we require that the relevant logical dependency structure be given as an input to FDT, in the same way that the relevant causal structure is given as an input to CDT.
| | | | |
| --- | --- | --- | --- |
| | FDT(P,G,x)≔argmaxa∈𝒜𝔼(V|𝚍𝚘(fdt(P¯,G¯,x¯)=a)).\mathrm{FDT}(P,G,x)\coloneqq\operatorname\*{argmax\;}\_{a\in\mathcal{A}}\mathbb{E}\left(\text{{V}}\;\middle|\;\mathtt{do}\left(\text{{fdt}}(\underline{P},\underline{G},\underline{x})=a\right)\right).roman\_FDT ( italic\_P , italic\_G , italic\_x ) ≔ start\_OPERATOR roman\_argmax end\_OPERATOR start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT blackboard\_E ( V | typewriter\_do ( fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) = italic\_a ) ) . | | (4) |
Comparing equations ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) and ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), we see that there are two differences between FDT and CDT. First, where CDT intervenes on a node Act representing the physical action of the agent, FDT intervenes on a node fdt(P¯,G¯,x¯)fdt¯𝑃¯𝐺¯𝑥\text{{fdt}}(\underline{P},\underline{G},\underline{x})fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) representing the outputs of its decision procedure given its inputs. Second, where CDT responds to observation by Bayesian conditionalization, FDT responds to observation by changing which node it intervenes upon. When CDT’s observation history updates from x𝑥xitalic\_x to y𝑦yitalic\_y, CDT changes from conditioning its model on Obs=xObs𝑥\text{{Obs}}=xObs = italic\_x to conditioning its model on Obs=yObs𝑦\text{{Obs}}=yObs = italic\_y, whereas FDT changes from intervening on the variable fdt(P¯,G¯,x¯)fdt¯𝑃¯𝐺¯𝑥\text{{fdt}}(\underline{P},\underline{G},\underline{x})fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG , under¯ start\_ARG italic\_x end\_ARG ) to fdt(P¯,G¯,y¯)fdt¯𝑃¯𝐺¯𝑦\text{{fdt}}(\underline{P},\underline{G},\underline{y})fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG , under¯ start\_ARG italic\_y end\_ARG ) instead. We will examine the consequences of these two differences in the following section.
Equation ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) is sufficient for present purposes, though it rests on shakier philosophical foundations than ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")). (Pearl:2000, [36](#bib.bib36), ) has given a compelling philosophical account of how to deduce the structure of causation from observation and experiment, but no such formal treatment has yet been given to the problem of deducing the structure of other kinds of subjunctive dependence. Equation ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) works *given* a graph that accurately describes how changing the value of a logical variable affects other variables, but it is not yet clear how to construct such a thing—nor even whether it can be done in a satisfactory manner within Pearl’s framework. Figuring out how to deduce the structure of subjunctive dependencies from observation and experiment is perhaps the largest open problem in the study of functional decision theory.101010An in-depth discussion of this issue is beyond the scope of this paper, but refer to section [3](#S3 "3 Subjunctive Dependence ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") for relevant resources.
In short, CDT and FDT both construct counterfactuals by performing a surgery on their world-model that breaks some correlations and preserves others, but where CDT agents preserve only causal structure in their hypotheticals, FDT agents preserve all decision-relevant subjunctive dependencies in theirs. This analogy helps illustrate that Joyce’s representation theorem applies to FDT as well as CDT. Joyce’s representation theorem ([24](#bib.bib24)) is very broad, and applies to any decision theory that prescribes maximizing expected utility relative to a set of constraints on an agent’s beliefs about what would obtain under different conditions. To quote (Joyce:1999, [24](#bib.bib24), ):
>
> It should now be clear that all expected utility theorists can agree about the broad foundational assumptions that underlie their common doctrine. [ …] Since the constraints on conditional preferences and beliefs needed to establish the existence of conditional utility representations in Theorem 7.4 are common to both the causal and evidential theories, there is really no difference between them as far as their core accounts of valuing are concerned. [ …] There remains, of course, an important difference between the causal and evidential approaches to decision theory. Even though they agree about the way in which prospects should be valued once an epistemic perspective is in place, the two theories differ about the correct epistemic perspective from which an agent should evaluate his or her potential actions.
>
>
>
He is speaking mainly of the relationship between CDT and EDT, but the content applies just as readily to the relationship between FDT and CDT. FDT is defined, by ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), as an expected utility theory which differs from CDT only in what constraints it places on an agent’s thoughts about what would obtain if (counterfactually) she made different observations and/or took different actions. In particular, where CDT requires that an agent’s hypotheticals respect causal constraints, FDT requires also that the agent’s counterfactuals respect logical constraints. FDT, then, is a cousin of CDT that endorses the theory of expected utility maximization and meets the constraints of Joyce’s representation theorem. The prescriptions of FDT differ from those of CDT only on the dimension that Joyce left as a free parameter: the constraints on how agents think about the hypothetical outcomes of their actions.
From these equations, we can see that all three theories agree on models (P,G)𝑃𝐺(P,G)( italic\_P , italic\_G ) in which all correlations between Act and other variables are caused (according to G𝐺Gitalic\_G) by Act, except perhaps fdt(P¯,G¯)fdt¯𝑃¯𝐺\text{{fdt}}(\underline{P},\underline{G})fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG ), on which Act may subjunctively depend (according to G𝐺Gitalic\_G). In such cases,
| | | |
| --- | --- | --- |
| | 𝔼[V∣a]=𝔼[V∣𝚍𝚘(a)]=𝔼[V∣𝚍𝚘(fdt(P¯,G¯)=a)],𝔼delimited-[]conditional𝑉𝑎𝔼delimited-[]conditional𝑉𝚍𝚘𝑎𝔼delimited-[]conditional𝑉𝚍𝚘fdt¯𝑃¯𝐺𝑎\mathbb{E}[V\mid a]=\mathbb{E}[V\mid\mathtt{do}(a)]=\mathbb{E}[V\mid\mathtt{do}(\text{{fdt}}(\underline{P},\underline{G})=a)],blackboard\_E [ italic\_V ∣ italic\_a ] = blackboard\_E [ italic\_V ∣ typewriter\_do ( italic\_a ) ] = blackboard\_E [ italic\_V ∣ typewriter\_do ( fdt ( under¯ start\_ARG italic\_P end\_ARG , under¯ start\_ARG italic\_G end\_ARG ) = italic\_a ) ] , | |
so all three equations produce the same output. However, this condition is violated in cases where events correlate with the agent’s action in a manner that is not caused by the action, which happens when, e.g., some other actor is making predictions about the agent’s behavior. It is for this reason that we turn to Newcomblike problems to distinguish between the three theories, and demonstrate FDT’s superiority, when measuring in terms of utility achieved.
6 Comparing the Three Decision Algorithms’ Behavior
----------------------------------------------------
With equations for EDT, CDT, and FDT in hand, we can put our analyses of Newcomb’s problem, the smoking lesion problem, and the transparent Newcomb problem on a more formal footing. We can construct probability distributions and graphical models for a given dilemma, feed them into our equations, and examine precisely what actions an agent following a certain decision algorithm would take, and why.
In what follows, we will consider the behavior of three agents—Eve, Carl, and Fiona—who meticulously follow the prescriptions of equations ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), and ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) respectively. We will do this by defining P𝑃Pitalic\_P and G𝐺Gitalic\_G objects, and evaluating the EDTEDT\mathrm{EDT}roman\_EDT, CDTCDT\mathrm{CDT}roman\_CDT, and FDTFDT\mathrm{FDT}roman\_FDT algorithms on those inputs. Note that our P𝑃Pitalic\_Ps and G𝐺Gitalic\_Gs will describe an agent’s *models* of what situation they are facing, as opposed to representing the situations themselves. When Carl changes a variable Z𝑍Zitalic\_Z in his distribution PCsuperscript𝑃𝐶P^{\mathit{C}}{}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT, he is not affecting the object that Z𝑍Zitalic\_Z represents in the world; he is manipulating a representation of Z𝑍Zitalic\_Z in his head to help him decide what to do. Also, note that we will be evaluating agents in situations where their models accurately portray the situations that they face. It is possible to give a Eve agent a model PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT that says she’s playing checkers when in fact she’s playing chess, but this wouldn’t tell us much about her decision-making skill.
We will evaluate Eve, Carl, and Fiona using the simplest possible world-models that accurately capture a given dilemma. In the real world, their models would be much more complicated, containing variables for each and every one of their beliefs. In this case, solving equations ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), and ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) would be intractable: maximizing over all possible actions is not realistic. We study the idealized setting because we expect that bounded versions of Eve, Carl, and Fiona would exhibit similar strengths and weaknesses relative to one another. After all, an agent approximating EDT will behave differently than an agent approximating CDT, even if both agents are bounded and imperfect.
### Newcomb’s Problem
To determine how Eve the EDT agent responds to Newcomb’s problem, we need a distribution PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT describing the epistemic state of Eve when she believes she is facing Newcomb’s problem. We will use the distribution in figure [2](#S6.F2 "Figure 2 ‣ Newcomb’s Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), which is complete except for the value PE(𝑜𝑛𝑒𝑏𝑜𝑥)superscript𝑃𝐸𝑜𝑛𝑒𝑏𝑜𝑥P^{\mathit{E}}{}(\text{{onebox}})italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( onebox ), Eve’s prior probability that she one-boxes.
| | Act |
| --- | --- |
| onebox | ? |
| twobox | ? |
| | Accurate |
| --- | --- |
| accurate | 0.99 |
| inaccurate | 0.01 |
Figure 2: A Bayesian network for Newcomb’s problem. The only stochastic nodes are Act and Accurate; all the other nodes are deterministic. As you may verify, the results below continue to hold if we add more stochasticity, e.g., by making Predisposition strongly but imperfectly correlated with Act. The prior probabilities on Act are left as a free parameter. This graph is more verbose than necessary: we could collapse V and Outcome into a single node, and collapse Prediction and Box B into a single node. Note that this is *not* a causal graph; the arrow from Act to Predisposition describes a correlation in the agent’s beliefs but does not represent causation.
Eve’s behavior depends entirely upon this value. If she were certain that she had a one-boxing predisposition, then PE(𝑜𝑛𝑒𝑏𝑜𝑥)superscript𝑃𝐸𝑜𝑛𝑒𝑏𝑜𝑥P^{\mathit{E}}{}(\text{{onebox}})italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( onebox ) would be 1111. She would then be unable to condition on Act=𝑡𝑤𝑜𝑏𝑜𝑥Act𝑡𝑤𝑜𝑏𝑜𝑥\text{{Act}}=\text{{twobox}}Act = twobox, because two-boxing would be an event of probability zero. As a result, she would one-box. Similarly, if she were certain she had a two-boxing predisposition, she would two-box. In both cases, her prior would be quite accurate—it would assign probability 1 to her taking the action that she would in fact take, given that prior. As noted by (Spohn:1977, [44](#bib.bib44), ), given extreme priors, Eve can be made to do *anything*, regardless of what she knows about the world.111111The ratification procedure of (Jeffrey:1983, [23](#bib.bib23), ) and the meta-tickle defense of (Eells:1984, [15](#bib.bib15), ) can in fact be seen as methods for constructing PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT that push PE(𝑜𝑛𝑒𝑏𝑜𝑥)superscript𝑃𝐸𝑜𝑛𝑒𝑏𝑜𝑥P^{\mathit{E}}{}(\text{{onebox}})italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( onebox ) to 0, causing Eve to two-box. Eve’s choices are at the mercy of her priors in a way that Carl and Fiona’s are not—a point to which we’ll return in section [7](#S7 "7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
But assume her priors are not extreme, i.e., that 0<PE(𝑜𝑛𝑒𝑏𝑜𝑥)<1.0superscript𝑃𝐸𝑜𝑛𝑒𝑏𝑜𝑥1{0<P^{\mathit{E}}{}(\text{{onebox}})<1}.0 < italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( onebox ) < 1 . In this case, Eve solves equation ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), which requires calculating the expectation of V in two different hypotheticals, one for each available action. To evaluate one-boxing, she constructs a hypothetical by conditioning PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT on onebox. The expectation of V in this hypothetical is $999,000, because PE(𝑓𝑢𝑙𝑙∣𝑜𝑛𝑒𝑏𝑜𝑥)=0.99superscript𝑃𝐸conditional𝑓𝑢𝑙𝑙𝑜𝑛𝑒𝑏𝑜𝑥0.99{P^{\mathit{E}}{}(\text{{full}}\mid\text{{onebox}})=0.99}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( full ∣ onebox ) = 0.99.
To evaluate two-boxing, she constructs a second hypothetical by conditioning PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT on twobox. In this case, expected utility is $11,000, because PE(𝑓𝑢𝑙𝑙∣𝑡𝑤𝑜𝑏𝑜𝑥)=0.01superscript𝑃𝐸conditional𝑓𝑢𝑙𝑙𝑡𝑤𝑜𝑏𝑜𝑥0.01{P^{\mathit{E}}{}(\text{{full}}\mid\text{{twobox}})=0.01}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( full ∣ twobox ) = 0.01.
The value associated with one-boxing is higher, so EDT(PE)=𝑜𝑛𝑒𝑏𝑜𝑥EDTsuperscript𝑃𝐸𝑜𝑛𝑒𝑏𝑜𝑥\mathrm{EDT}(P^{\mathit{E}})=\text{{onebox}}roman\_EDT ( italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ) = onebox, so Eve one-boxes.
In words, Eve reasons: “Conditional on one-boxing, I very likely have a one-boxing predisposition; and one-boxers tend to get rich; so I’d probably be a gets-rich sort of person. That would be great! Conditional on two-boxing, though, I very likely have a two-boxing disposition; two-boxers tend to become poor; so I’d probably be a stays-poor sort of person. That would be worse, so I one-box.” The predictor, seeing that Eve assigns nonzero prior probability to one-boxing and following this very chain of reasoning, can easily see that Eve one-boxes, and will fill the box. As a result, Eve will walk away rich.
What about Carl? Carl’s probability distribution PCsuperscript𝑃𝐶P^{\mathit{C}}{}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT also follows figure [2](#S6.F2 "Figure 2 ‣ Newcomb’s Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") (though for him, the variables represent different things—Carl’s variable Predictor represents a predictor that was thinking about Carl all day, whereas Eve’s corresponding variable represents a predictor that was thinking about Eve, and so on.) To figure out how Carl behaves, we need to augment PCsuperscript𝑃𝐶P^{\mathit{C}}{}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT with a graph GCsuperscript𝐺𝐶G^{\mathit{C}}{}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT describing the causal relationships between the variables. This graph is given in figure [2(b)](#S6.F2.sf2 "2(b) ‣ Newcomb’s Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"). Our task is to evaluate CDT(PC,GC)CDTsuperscript𝑃𝐶superscript𝐺𝐶\mathrm{CDT}(P^{\mathit{C}},G^{\mathit{C}})roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ). We haven’t specified the value PC(𝑜𝑛𝑒𝑏𝑜𝑥)superscript𝑃𝐶𝑜𝑛𝑒𝑏𝑜𝑥P^{\mathit{C}}(\text{{onebox}})italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ( onebox ), but as we will see, the result is independent from this value.
Like Eve, Carl makes his choice by comparing two hypothetical scenarios. Carl constructs his first hypothetical by performing the causal intervention 𝚍𝚘(𝑜𝑛𝑒𝑏𝑜𝑥)𝚍𝚘𝑜𝑛𝑒𝑏𝑜𝑥{\mathtt{do}(\text{{onebox}})}typewriter\_do ( onebox ). This sets Act to onebox, then propagates the update to only the variables causally downstream from Act. Box B is causally independent from Act according to the graph, so the probability of full remains unchanged. Write p𝑝pitalic\_p for this probability. According to Carl’s first hypothetical, if he one-boxes then there is a p𝑝pitalic\_p chance he wins $1,000,000 and a (1−p)1𝑝(1-p)( 1 - italic\_p ) chance he walks away empty-handed. To make his second hypothetical, Carl performs the causal surgery 𝚍𝚘(𝑡𝑤𝑜𝑏𝑜𝑥)𝚍𝚘𝑡𝑤𝑜𝑏𝑜𝑥{\mathtt{do}(\text{{twobox}})}typewriter\_do ( twobox ), which also does not alter the probability of full. According to that hypothetical, if Carl two-boxes, he has a p𝑝pitalic\_p chance of $1,001,000 and a (1−p)1𝑝(1-p)( 1 - italic\_p ) chance of $1,000.
No matter what the value of p𝑝pitalic\_p is, V is higher by 1000 in the second hypothetical, so CDT(PC,GC)=𝑡𝑤𝑜𝑏𝑜𝑥CDTsuperscript𝑃𝐶superscript𝐺𝐶𝑡𝑤𝑜𝑏𝑜𝑥\mathrm{CDT}(P^{\mathit{C}},G^{\mathit{C}})=\text{{twobox}}roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) = twobox. Thus, Carl two-boxes. In words, Carl reasons: “Changing my action does not affect the probability that box B is full. Regardless of whether it’s full or empty, I do better by taking box A, which contains a free $1,000.”
This means that PC(𝑜𝑛𝑒𝑏𝑜𝑥)superscript𝑃𝐶𝑜𝑛𝑒𝑏𝑜𝑥P^{\mathit{C}}{}(\text{{onebox}})italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ( onebox ) should be close to zero, because any agent smart enough to follow the reasoning above (including Carl) can see that Carl will take two boxes. Furthermore, the predictor will have no trouble following the reasoning that we just followed, and will not fill the box; so Carl will walk away poor.
(a) Carl’s graph GCsuperscript𝐺𝐶G^{\mathit{C}}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for Newcomb’s problem. This graph is merely a simplified version of figure [1](#S5.F1 "Figure 1 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
.45
PredispositionActPredictionAccurateBox BV
.45
fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG )ActPredictionAccurateBox BV
(a) Carl’s graph GCsuperscript𝐺𝐶G^{\mathit{C}}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for Newcomb’s problem. This graph is merely a simplified version of figure [1](#S5.F1 "Figure 1 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
(b) Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}{}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for Newcomb’s problem. Fiona intervenes on a variable fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) representing what the FDT algorithm outputs given Fiona’s world-model.
Third, we turn our attention to Fiona. Her distribution PFsuperscript𝑃𝐹P^{\mathit{F}}{}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT is similar to those of Eve and Carl, except that instead of reasoning about her “predisposition” as a common cause of her act and the predictor’s prediction, she reasons about the *decision function* fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}}){}fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) that she is implementing. Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT (given in figure [2(b)](#S6.F2.sf2 "2(b) ‣ Newcomb’s Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) is similar to GCsuperscript𝐺𝐶G^{\mathit{C}}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT, but she intervenes on the variable fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) instead of Act.
Fiona, like Eve and Carl, weighs her options by comparing two hypotheticals. In her hypotheticals, the value of Prediction varies with the value of Act—because they *both* vary according to the value of fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ). To make her first hypothetical, she performs the intervention 𝚍𝚘(fdt(PF¯,GF¯)=𝑜𝑛𝑒𝑏𝑜𝑥)𝚍𝚘fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑜𝑛𝑒𝑏𝑜𝑥{\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}}){}=\text{{onebox}})}typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = onebox ), which sets the probability of Act=𝑜𝑛𝑒𝑏𝑜𝑥Act𝑜𝑛𝑒𝑏𝑜𝑥\text{{Act}}=\text{{onebox}}Act = onebox to 1111 and the probability of Box B=𝑓𝑢𝑙𝑙Box B𝑓𝑢𝑙𝑙\text{{Box B}}=\text{{full}}Box B = full to 0.990.990.990.99. To make her second hypothetical, she performs 𝚍𝚘(fdt(PF¯,GF¯)=𝑡𝑤𝑜𝑏𝑜𝑥)𝚍𝚘fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑡𝑤𝑜𝑏𝑜𝑥{\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}}){}=\text{{twobox}})}typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = twobox ), which sets the probability of Act=𝑜𝑛𝑒𝑏𝑜𝑥Act𝑜𝑛𝑒𝑏𝑜𝑥\text{{Act}}=\text{{onebox}}Act = onebox to 00 and the probability of Box B=𝑓𝑢𝑙𝑙Box B𝑓𝑢𝑙𝑙\text{{Box B}}=\text{{full}}Box B = full to 0.010.010.010.01. Expected utility in the first case is $990,000; expected utility in the second case is $11,000. Thus, FDT(PF,GF)=𝑜𝑛𝑒𝑏𝑜𝑥FDTsuperscript𝑃𝐹superscript𝐺𝐹𝑜𝑛𝑒𝑏𝑜𝑥\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}})=\text{{onebox}}roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) = onebox and Fiona one-boxes.121212Be careful to distinguish fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) from FDT(PF,GF)FDTsuperscript𝑃𝐹superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ). The former is a variable in Fiona’s model that represents the output of her decision process, which she manipulates to produce an action. The latter is the action produced.
In English, this corresponds to the following reasoning: “If this very decision procedure outputs onebox, then my body almost surely takes one box and the predictor likely filled box B. If instead this very decision procedure outputs twobox, then my body almost surely takes two boxes and the predictor likely left box B empty. Between those two possibilities, I prefer the first, so this decision procedure hereby outputs onebox.”
Assuming that Fiona is smart enough to follow the above line of reasoning, PF(fdt(PF¯,GF¯)=𝑜𝑛𝑒𝑏𝑜𝑥)≈1superscript𝑃𝐹fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑜𝑛𝑒𝑏𝑜𝑥1P^{\mathit{F}}{}(\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}}){}=\text{{onebox}})\approx 1italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = onebox ) ≈ 1, because FDT agents obviously one-box. Similarly, a predictor capable of following this argument will have no trouble predicting that Fiona always one-boxes—and so Fiona walks away rich.
Here we pause to address a common objection: If Fiona is almost certain that she has a one-boxing disposition (and 99% certain that box B is full), then upon reflection, won’t she decide to take two boxes? The answer is no, because of the way that Fiona weighs her options. To consider the consequences of changing her action, she imagines a hypothetical scenario in which her decision function has a different output. Even if she is quite sure that the box is full because FDT(PF,GF)=𝑜𝑛𝑒𝑏𝑜𝑥FDTsuperscript𝑃𝐹superscript𝐺𝐹𝑜𝑛𝑒𝑏𝑜𝑥\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}})=\text{{onebox}}roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) = onebox, when you ask her what *would happen if* she two-boxed, she says that, for her to two-box, the FDT algorithm would have to output twobox on input (PF,GF)superscript𝑃𝐹superscript𝐺𝐹(P^{\mathit{F}},G^{\mathit{F}})( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ). If the FDT algorithm itself behaved differently, then other things about the universe would be different—much like we should expect elliptical curves to have different properties if (counterpossibly) Fermat’s last theorem were false as opposed to true. Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT tells her how to imagine this counterpossibility, and in particular, because her algorithm and the predictor’s prediction subjunctively depend on the same function, she imagines a hypothetical world where most things are the same but box B is probably empty. That imagined hypothetical seems worse to her, so she leaves the $1,000 behind.
Nowhere in Fiona’s reasoning above is there any appeal to a belief in retrocausal physics. If she understands modern physics, she’ll be able to tell you that information cannot travel backwards in time. She does not think that a physical signal passes between her action and the predictor’s prediction; she just thinks it is foolish to imagine her action changing without also imagining FDT(PF,GF)FDTsuperscript𝑃𝐹superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) taking on a different value, since she thinks the predictor is good at reasoning about FDT. When she imagines two-boxing, she therefore imagines a hypothetical world where box B is empty.
### An Aside on Dominance
Functional decision theorists deny the argument for two-boxing from dominance. The causal decision theorist argues that one-boxing is irrational because, though it tends to make one richer in practice, switching from one-boxing to two-boxing (while holding constant everything except the action’s effects) always yields more wealth. In other words, whenever you one-box then it will be revealed that you left behind a free $1,000; but whenever you two-box you left behind nothing; so (the argument goes) one-boxing is irrational.
But this practice of checking what an agent should have done *holding constant everything except the action’s effects* ignores important aspects of the world’s structure: when the causal decision theorist asks us to imagine the agent’s action switching from twobox to onebox holding fixed the predictor’s prediction, they are asking us to imagine the agent’s physical action changing while holding fixed the behavior of the agent’s decision function. This is akin to handing us a well-functioning calculator calculating 6288+1048628810486288+10486288 + 1048 and asking us to imagine it outputting 3159315931593159, holding constant the fact that 6288+1048=73366288104873366288+1048=73366288 + 1048 = 7336.
Two-boxing “dominates” if dominance is defined in terms of CDT counterfactuals; where regret is measured by visualizing a world where the action was changed but the decision function was not. But this is not an independent argument for CDT; it is merely a restatement of CDT’s method for assessing an agent’s options.
An analogous notion of “dominance” can be constructed using FDT-style counterfactuals, in which action a𝑎aitalic\_a dominates action b𝑏bitalic\_b if, holding constant all relevant subjunctive dependencies, switching the output of the agent’s algorithm from b𝑏bitalic\_b to a𝑎aitalic\_a is sometimes better (and never worse) than sticking with b𝑏bitalic\_b. According to this notion of dominance, FDT agents never take a dominated action. In Newcomb’s problem, if we hold constant the relative subjunctive dependency (that the predictor’s prediction is almost always equal to the agent’s action) then switching from one box to two makes the agent worse off.
In fact, *every* method for constructing hypotheticals gives rise to its own notion of dominance. If we define “dominance” in terms of Bayesian conditionalization—a𝑎aitalic\_a “dominates” b𝑏bitalic\_b if 𝔼(V∣a)>𝔼(V∣b)𝔼conditionalV𝑎𝔼conditionalV𝑏{\mathbb{E}(\text{{V}}\mid a)>\mathbb{E}(\text{{V}}\mid b)}blackboard\_E ( V ∣ italic\_a ) > blackboard\_E ( V ∣ italic\_b )—then refusing to smoke “dominates” in the smoking lesion problem. To assert that one action “dominates” another, one must assume a particular method of evaluating counterfactual actions. Every expected utility theory comes with its own notion of dominance, and dominance doesn’t afford us a neutral criterion for deciding between candidate theories. For this reason, we much prefer to evaluate decision theories based on how much utility they tend to achieve in practice.
### The Smoking Lesion Problem
To model Eve’s behavior in the smoking lesion problem, we will need a new distribution describing her beliefs when she faces the smoking lesion problem. The insight of (Gibbard:1978, [20](#bib.bib20), ) is that, with a bit of renaming, we can re-use the distribution PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT from Newcomb’s problem. Simply carry out the following renaming:
| | | | | |
| --- | --- | --- | --- | --- |
| | Predisposition | becomes | Lesion | |
| | oneboxer | becomes | nolesion | |
| | twoboxer | becomes | lesion | |
| | Accurate | becomes | Luck | |
| | accurate | becomes | unlucky | |
| | inaccurate | becomes | lucky | |
| | Prediction | becomes | Cancer | |
| | 1 | becomes | nocancer | |
| | 2 | becomes | cancer | |
| | Box B | becomes | Death | |
| | empty | becomes | dead | |
| | full | becomes | alive | |
| | Act | becomes | Act | |
| | twobox | becomes | smoke | |
| | onebox | becomes | refrain | |
and we’re good to go. Clearly, then, Eve refrains from smoking: EDT(PE)=𝑟𝑒𝑓𝑟𝑎𝑖𝑛EDTsuperscript𝑃𝐸𝑟𝑒𝑓𝑟𝑎𝑖𝑛\mathrm{EDT}(P^{\mathit{E}})=\text{{refrain}}roman\_EDT ( italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ) = refrain for the same reason that it was onebox in Newcomb’s problem. She reasons that most smokers die, and most non-smokers don’t, and she’d rather hear that she was in the latter category, so she refrains from smoking.
Similarly, with mere renaming, we can re-use Carl’s model for Newcomb’s problem to make his model (PC,GC)superscript𝑃𝐶superscript𝐺𝐶(P^{\mathit{C}},G^{\mathit{C}})( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) for the smoking lesion problem. Carl smokes: CDT(PC,GC)=𝑠𝑚𝑜𝑘𝑒CDTsuperscript𝑃𝐶superscript𝐺𝐶𝑠𝑚𝑜𝑘𝑒\mathrm{CDT}(P^{\mathit{C}},G^{\mathit{C}})=\text{{smoke}}roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) = smoke for the same reason that it was twobox in Newcomb’s problem. He reasons that the probability of cancer doesn’t depend on his action, so regardless of whether he has the lesion, he’s better off smoking.
In Fiona’s case, though, her graph in Newcomb’s problem cannot be re-used in the smoking lesion problem—the relations of subjunctive dependence differ. According to Fiona’s graph for Newcomb’s problem, changing fdt(PF¯,GF¯)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) changes both Act and Prediction, because both of those variables represent something in the world that depends on the output of FDT(PF,GF)FDTsuperscript𝑃𝐹superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ). In her graph for the smoking lesion problem, though, the corresponding variable should be connected to Act but not Cancer, because whether or not the cancer metastasizes does not depend upon the output of the FDT procedure. Carl can re-use his causal graph from Newcomb’s problem because he does not track these subjunctive dependencies, but Fiona’s behavior depends on these subjunctive dependencies, so her graphs must differ.
In fact, building PFsuperscript𝑃𝐹P^{\mathit{F}}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT and GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for the smoking lesion problem requires that we formalize the problem a bit further. According to the problem description, the lesion determines whether or not the agent likes smoking. We can formalize this by saying that there are two utility functions, 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT and 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, where 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT differs from 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT in that it values smoking at -$1. Thus, there are two different distributions Fiona could have: PSsubscript𝑃𝑆P\_{S}italic\_P start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT, in which V is calculated using 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT; and PRsubscript𝑃𝑅P\_{R}italic\_P start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, in which V is calculated using 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT. Fiona can’t tell which distribution she actually uses; it’s a black box to her.
Lesionfdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG )fdt(PRF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑅¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{R}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG )LuckActCancerDeathV
Figure 3: Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for the smoking lesion problem. In the case where she has the lesion, the point of intervention will be fdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ), as illustrated. In the case where she does not have the lesion, the point of intervention will instead be fdt(PRF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑅¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{R}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ). In either case, Fiona does not have introspective access to the point of intervention, and computes her action under ignorance about her own preferences.
Because there are two world-models she might be running, there are also two decision functions she could be running: FDT(PSF,GF)FDTsubscriptsuperscript𝑃𝐹𝑆superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}}\_{S},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) or FDT(PRF,GF)FDTsubscriptsuperscript𝑃𝐹𝑅superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}}\_{R},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ). Which one she runs depends upon whether she has the lesion. Thus, in her graph, the node Act depends on Lesion, fdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ), and fdt(PRF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑅¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{R}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) like so:
| | | |
| --- | --- | --- |
| | Act(𝑙𝑒𝑠𝑖𝑜𝑛,s,r)≔s≔Act𝑙𝑒𝑠𝑖𝑜𝑛𝑠𝑟𝑠\displaystyle\text{{Act}}(\text{{lesion}},s,r)\coloneqq sAct ( lesion , italic\_s , italic\_r ) ≔ italic\_s | |
| | Act(𝑛𝑜𝑙𝑒𝑠𝑖𝑜𝑛,s,r)≔r≔Act𝑛𝑜𝑙𝑒𝑠𝑖𝑜𝑛𝑠𝑟𝑟\displaystyle\text{{Act}}(\text{{nolesion}},s,r)\coloneqq rAct ( nolesion , italic\_s , italic\_r ) ≔ italic\_r | |
This is enough information to define GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for the smoking lesion problem, which is given in figure [3](#S6.F3 "Figure 3 ‣ The Smoking Lesion Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), and captures the fact that Fiona’s action could depend on one of two different procedures, depending on whether or not she was born with the lesion.
We can define PSFsubscriptsuperscript𝑃𝐹𝑆P^{\mathit{F}}\_{S}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT on Luck, Cancer, Dead, Outcome and V analogously to Newcomb’s problem. To finish defining PSFsubscriptsuperscript𝑃𝐹𝑆P^{\mathit{F}}\_{S}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT, we also need to place probabilities on Lesion, fdt(PRF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑅¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{R}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ), and fdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ). For the first, let PSF(𝑙𝑒𝑠𝑖𝑜𝑛)=psubscriptsuperscript𝑃𝐹𝑆𝑙𝑒𝑠𝑖𝑜𝑛𝑝P^{\mathit{F}}\_{S}(\text{{lesion}})=pitalic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT ( lesion ) = italic\_p for some 0<p<10𝑝10<p<10 < italic\_p < 1 (for otherwise there’s no uncertainty about the lesion). For the second, let PSF(fdt(PRF¯,GF¯)=𝑠𝑚𝑜𝑘𝑒)=qsubscriptsuperscript𝑃𝐹𝑆fdt¯subscriptsuperscript𝑃𝐹𝑅¯superscript𝐺𝐹𝑠𝑚𝑜𝑘𝑒𝑞P^{\mathit{F}}\_{S}(\text{{fdt}}(\underline{P^{\mathit{F}}\_{R}},\underline{G^{\mathit{F}}})=\text{{smoke}})=qitalic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = smoke ) = italic\_q for some arbitrary q𝑞qitalic\_q. (The correct q𝑞qitalic\_q is 0, as you may verify, but it won’t matter if Fiona has uncertainty about how she’d act if her preferences were 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT instead of 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT.) For the third, the prior probability is inconsequential, as we will soon see.
We are now ready to evaluate FDT(PSF,GF)FDTsubscriptsuperscript𝑃𝐹𝑆superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}}\_{S},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) and thereby figure out what Fiona does in the case where her preferences are 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT. To calculate the expected utility of her options, Fiona constructs two hypotheticals. In the first, the variable fdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) is set to smoke by the 𝚍𝚘𝚍𝚘\mathtt{do}typewriter\_do operator; in the second, it is set to refrain. In both cases, changing fdt(PSF¯,GF¯)fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) does not affect the probability of lesion:
| | | | |
| --- | --- | --- | --- |
| | PSF(𝑙𝑒𝑠𝑖𝑜𝑛)subscriptsuperscript𝑃𝐹𝑆𝑙𝑒𝑠𝑖𝑜𝑛\displaystyle P^{\mathit{F}}\_{S}(\text{{lesion}})\ italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT ( lesion ) | =PSF(𝑙𝑒𝑠𝑖𝑜𝑛∣𝚍𝚘(fdt(PSF¯,GF¯)=𝑠𝑚𝑜𝑘𝑒))absentsubscriptsuperscript𝑃𝐹𝑆conditional𝑙𝑒𝑠𝑖𝑜𝑛𝚍𝚘fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹𝑠𝑚𝑜𝑘𝑒\displaystyle=P^{\mathit{F}}\_{S}(\text{{lesion}}\mid\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})=\text{{smoke}}))= italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT ( lesion ∣ typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = smoke ) ) | |
| | | =PSF(𝑙𝑒𝑠𝑖𝑜𝑛∣𝚍𝚘(fdt(PSF¯,GF¯)=𝑟𝑒𝑓𝑟𝑎𝑖𝑛)).absentsubscriptsuperscript𝑃𝐹𝑆conditional𝑙𝑒𝑠𝑖𝑜𝑛𝚍𝚘fdt¯subscriptsuperscript𝑃𝐹𝑆¯superscript𝐺𝐹𝑟𝑒𝑓𝑟𝑎𝑖𝑛\displaystyle=P^{\mathit{F}}\_{S}(\text{{lesion}}\mid\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}\_{S}},\underline{G^{\mathit{F}}})=\text{{refrain}})).= italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT ( lesion ∣ typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG ) = refrain ) ) . | |
This is just p𝑝pitalic\_p. In English, this equation says that according to Fiona’s hypotheticals, changing FDT(PSF,GF)FDTsubscriptsuperscript𝑃𝐹𝑆superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}}\_{S},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) does not affect the probability that she has the lesion. She evaluates each hypothetical using 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT (though she is not explicitly aware that she is using 𝒰Ssubscript𝒰𝑆\mathcal{U}\_{S}caligraphic\_U start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT rather than 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT) and concludes that the smoke hypothetical has expected utility
| | | |
| --- | --- | --- |
| | p($10,000+$1,000)+(1−p)($990,000+q$1,000),𝑝$10,000$1,0001𝑝$990,000𝑞$1,000p(\text{\$10,000}+\text{\$1,000})+(1-p)(\text{\$990,000}+q\text{\$1,000}),italic\_p ( $10,000 + $1,000 ) + ( 1 - italic\_p ) ( $990,000 + italic\_q $1,000 ) , | |
and the refrain one has expected utility
| | | |
| --- | --- | --- |
| | p($10,000)+(1−p)($990,000+q$1,000).𝑝$10,0001𝑝$990,000𝑞$1,000p(\text{\$10,000})+(1-p)(\text{\$990,000}+q\text{\$1,000}).italic\_p ( $10,000 ) + ( 1 - italic\_p ) ( $990,000 + italic\_q $1,000 ) . | |
She concludes that smoking is strictly better than refraining, by an amount equal to p⋅$1,000⋅𝑝$1,000p\cdot\text{\$1,000}italic\_p ⋅ $1,000, corresponding to the probability that FDT(PSF,GF)FDTsubscriptsuperscript𝑃𝐹𝑆superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}}\_{S},G^{\mathit{F}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_S end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) is actually the procedure controlling her action, times the value of smoking. Therefore, she smokes.
Thus we see that a single compact decision criterion, the functional decision theory given by equation ([4](#S5.E4 "4 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")), prescribes both one-boxing and smoking.
(Eells:1984, [15](#bib.bib15), ) has objected that when one fully formalizes this problem, as we have, it is revealed to be quite contrived—the agents are asked to make their choice under uncertainty about their own desires, which seems unrealistic. We agree, and note that the problem is more unfair to EDT than is generally supposed: Eve’s distribution PEsuperscript𝑃𝐸P^{\mathit{E}}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT lies to her! If Eve lives in a population of EDT agents, then *none of them smoke*, and so there should be no correlation between Act and Lesion. To make PEsuperscript𝑃𝐸P^{\mathit{E}}{}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT accurate, we would need Eve to live in a population of agents that use a different decision procedure than she does, while also demanding that Eve be ignorant of this fact (as well as her own desires).
The discussion above reveals that the smoking lesion problem is badly confused. We treat it in this paper only because of its historical significance. Given that CDT and FDT handle it well despite its difficulties, and EDT can both handle it using a tickle defense and rightfully claim abuse, we recommend that philosophers abandon the dilemma. In section [7](#S7 "7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), we will discuss alternative dilemmas that are less contrived, and which raise the same underlying issues for Eve in a manner that cannot be addressed by ratification or a tickle defense.
### Transparent Newcomb Problem
(a) Carl’s graph GCsuperscript𝐺𝐶G^{\mathit{C}}{}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for the transparent Newcomb problem.
.45
PredispositionActPredictionAccurateBox BObsV
.45
fdt(PF¯,GF¯,𝑓𝑢𝑙𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑓𝑢𝑙𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{full}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , full )fdt(PF¯,GF¯,𝑒𝑚𝑝𝑡𝑦)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑒𝑚𝑝𝑡𝑦\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{empty}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , empty )ActAccuratePredictionBox BV
(a) Carl’s graph GCsuperscript𝐺𝐶G^{\mathit{C}}{}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for the transparent Newcomb problem.
(b) Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for the transparent Newcomb problem. If she sees the box full, she intervenes on fdt(PF¯,GF¯,𝑓𝑢𝑙𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑓𝑢𝑙𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{full}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , full ), as illustrated. Otherwise, she intervenes on fdt(PF¯,GF¯,𝑒𝑚𝑝𝑡𝑦)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑒𝑚𝑝𝑡𝑦\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{empty}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , empty ).
To see how Eve and Carl fare in the transparent Newcomb problem, we need only make a small modification to their models for Newcomb’s problem. In particular, we need to add a variable Obs that reveals the contents of Box B. Call the resulting probability distributions PEsuperscript𝑃𝐸P^{\mathit{E}}italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT and PCsuperscript𝑃𝐶P^{\mathit{C}}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT, and define GCsuperscript𝐺𝐶G^{\mathit{C}}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT as in figure [3(b)](#S6.F3.sf2 "3(b) ‣ Transparent Newcomb Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"). Now we simply calculate EDT(PE,𝑓𝑢𝑙𝑙)EDTsuperscript𝑃𝐸𝑓𝑢𝑙𝑙\mathrm{EDT}(P^{\mathit{E}},\text{{full}})roman\_EDT ( italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT , full ) and CDT(PC,GC,𝑓𝑢𝑙𝑙)CDTsuperscript𝑃𝐶superscript𝐺𝐶𝑓𝑢𝑙𝑙\mathrm{CDT}(P^{\mathit{C}},G^{\mathit{C}},\text{{full}})roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT , full ) as per equations ([2](#S5.E2 "2 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) and ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")). The evaluation runs similarly to how it ran in Newcomb’s problem above, except that the distributions are conditioned on Obs=𝑓𝑢𝑙𝑙Obs𝑓𝑢𝑙𝑙\text{{Obs}}=\text{{full}}Obs = full before expected utility is calculated. Carl still two-boxes:
| | | |
| --- | --- | --- |
| | 𝔼(V∣𝚍𝚘(Act=𝑡𝑤𝑜𝑏𝑜𝑥),Obs=𝑓𝑢𝑙𝑙)=$1,001,000,𝔼conditionalV𝚍𝚘Act𝑡𝑤𝑜𝑏𝑜𝑥Obs𝑓𝑢𝑙𝑙$1,001,000\displaystyle\mathbb{E}(\text{{V}}\mid\mathtt{do}(\text{{Act}}=\text{{twobox}}),\text{{Obs}}=\text{{full}})=\text{\$1,001,000},blackboard\_E ( V ∣ typewriter\_do ( Act = twobox ) , Obs = full ) = $1,001,000 , | |
| | 𝔼(V∣𝚍𝚘(Act=𝑜𝑛𝑒𝑏𝑜𝑥),Obs=𝑓𝑢𝑙𝑙)=$1,000,000.𝔼conditionalV𝚍𝚘Act𝑜𝑛𝑒𝑏𝑜𝑥Obs𝑓𝑢𝑙𝑙$1,000,000\displaystyle\mathbb{E}(\text{{V}}\mid\mathtt{do}(\text{{Act}}=\text{{onebox}}),\text{{Obs}}=\text{{full}})=\text{\$1,000,000}.blackboard\_E ( V ∣ typewriter\_do ( Act = onebox ) , Obs = full ) = $1,000,000 . | |
Eve also two-boxes in this case (assuming she does not assign prior probability 0 to two-boxing):
| | | |
| --- | --- | --- |
| | 𝔼(V∣Act=𝑡𝑤𝑜𝑏𝑜𝑥,Obs=𝑓𝑢𝑙𝑙)=$1,001,000,𝔼formulae-sequenceconditionalVAct𝑡𝑤𝑜𝑏𝑜𝑥Obs𝑓𝑢𝑙𝑙$1,001,000\displaystyle\mathbb{E}(\text{{V}}\mid\text{{Act}}=\text{{twobox}},\text{{Obs}}=\text{{full}})=\text{\$1,001,000},blackboard\_E ( V ∣ Act = twobox , Obs = full ) = $1,001,000 , | |
| | 𝔼(V∣Act=𝑜𝑛𝑒𝑏𝑜𝑥,Obs=𝑓𝑢𝑙𝑙)=$1,000,000.𝔼formulae-sequenceconditionalVAct𝑜𝑛𝑒𝑏𝑜𝑥Obs𝑓𝑢𝑙𝑙$1,000,000\displaystyle\mathbb{E}(\text{{V}}\mid\text{{Act}}=\text{{onebox}},\text{{Obs}}=\text{{full}})=\text{\$1,000,000}.blackboard\_E ( V ∣ Act = onebox , Obs = full ) = $1,000,000 . | |
As such, it is easy to see that Carl and Eve would both two-box in the transparent Newcomb problem, so the predictor will not fill their boxes, and they will see empty boxes and walk away poor.
For Fiona, however, the case is quite different. Fiona does not react to observations by conditioning her distribution. Rather, she reacts to observations by switching which variable she intervenes on, where the different variables stand for the output of the FDT procedure when evaluated with different observations as input. This is captured by Fiona’s graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for this decision problem, which is given in figure [3(b)](#S6.F3.sf2 "3(b) ‣ Transparent Newcomb Problem ‣ 6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"). Act depends on fdt(PF¯,GF¯,𝑓𝑢𝑙𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑓𝑢𝑙𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{full}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , full ), fdt(PF¯,GF¯,𝑒𝑚𝑝𝑡𝑦)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑒𝑚𝑝𝑡𝑦\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{empty}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , empty ), and Prediction, with Prediction determining which of the two fdt variables controls Act. This represents the fact that the predictor’s prediction determines which observation the agent receives.
Given GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT and the associated probability distribution PFsuperscript𝑃𝐹P^{\mathit{F}}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT, we can evaluate FDT(PF,GF,𝑓𝑢𝑙𝑙)FDTsuperscript𝑃𝐹superscript𝐺𝐹𝑓𝑢𝑙𝑙\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}},\text{{full}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , full ) to determine how Fiona behaves, which involves finding the a𝑎aitalic\_a that maximizes
| | | |
| --- | --- | --- |
| | 𝔼(V∣𝚍𝚘(fdt(PF¯,GF¯,𝑓𝑢𝑙𝑙)=a,Obs=𝑓𝑢𝑙𝑙).\mathbb{E}(\text{{V}}\mid\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{full}})=a,\text{{Obs}}=\text{{full}}).blackboard\_E ( V ∣ typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , full ) = italic\_a , Obs = full ) . | |
In the case of a=𝑜𝑛𝑒𝑏𝑜𝑥𝑎𝑜𝑛𝑒𝑏𝑜𝑥a=\text{{onebox}}italic\_a = onebox, the value of V is determined by the value of Accurate. If Accurate=𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒Accurate𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒\text{{Accurate}}=\text{{accurate}}Accurate = accurate, then Prediction will be 1, Box B will be full, and Act will be a=𝑜𝑛𝑒𝑏𝑜𝑥𝑎𝑜𝑛𝑒𝑏𝑜𝑥a=\text{{onebox}}italic\_a = onebox, so V will be $1,000,000. If instead Accurate=𝑖𝑛𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒Accurate𝑖𝑛𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒\text{{Accurate}}=\text{{inaccurate}}Accurate = inaccurate, then Prediction will be 2, Box B will be empty, and Act will be determined by fdt(PF¯,GF¯,𝑒𝑚𝑝𝑡𝑦)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑒𝑚𝑝𝑡𝑦\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{empty}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , empty ) instead. Write q𝑞qitalic\_q for Fiona’s prior probability that she two-boxes upon seeing the box empty; in this case, V will be q⋅$1,000⋅𝑞$1,000q\cdot\text{\$1,000}italic\_q ⋅ $1,000 in expectation. (The correct value of q𝑞qitalic\_q is 1, as you may verify, but in this case it doesn’t matter if Fiona has some uncertainty about how she would act in that case.) Total expected utility in the case of a=𝑜𝑛𝑒𝑏𝑜𝑥𝑎𝑜𝑛𝑒𝑏𝑜𝑥a=\text{{onebox}}italic\_a = onebox is therefore $990,000+0.01q⋅$1,000$990,000⋅0.01𝑞$1,000\text{\$990,000}+0.01q\cdot\text{\$1,000}$990,000 + 0.01 italic\_q ⋅ $1,000, because Accurate=𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒Accurate𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒\text{{Accurate}}=\text{{accurate}}Accurate = accurate with probability 99%.
As you may verify by a similar calculation, in the case of a=𝑡𝑤𝑜𝑏𝑜𝑥𝑎𝑡𝑤𝑜𝑏𝑜𝑥a=\text{{twobox}}italic\_a = twobox, total expected utility is 0.01⋅$1,001,000+0.99q⋅$1,000=$110,00+q⋅$990⋅0.01$1,001,000⋅0.99𝑞$1,000$110,00⋅𝑞$9900.01\cdot\text{\$1,001,000}+0.99q\cdot\text{\$1,000}=\text{\$110,00}+q\cdot\text{\$990}0.01 ⋅ $1,001,000 + 0.99 italic\_q ⋅ $1,000 = $110,00 + italic\_q ⋅ $990.
The first hypothetical has higher expected utility, so Fiona takes one box. What’s remarkable about this line of reasoning is that even in the case where Fiona has observed that box B is full, when she *envisions* two-boxing, she envisions a scenario where she instead (with high probability) sees that the box is empty. In words, she reasons: “The thoughts I’m currently thinking are the decision procedure that I run upon seeing a full box. This procedure is being predicted by the predictor, and (maybe) implemented by my body. If it outputs onebox, the box is likely full and my brain implements this procedure so I take one box. If instead it outputs twobox, the box is likely empty and my brain does not implement this procedure (because I will be shown an empty box). Thus, if this procedure outputs onebox then I’m likely to keep $1,000,000; whereas if it outputs twobox I’m likely to get only $1,000. Outputting onebox leads to better outcomes, so this decision procedure hereby outputs onebox.”
The predictor, following the above chain of reasoning, knows that Fiona will one-box upon seeing the box full, and fills box B. Fiona sees two full boxes, takes only box B, and walks away rich.
7 Diagnosing EDT: Conditionals as Counterfactuals
--------------------------------------------------
We are now in a better position to say why following CDT and EDT tends to result in lower-utility outcomes than following FDT: the hypotheticals that CDT and EDT rely on are malformed. In the hypotheticals that Carl constructs for Newcomb’s problem, Act is treated as if it is uncorrelated with the prediction even though the predictor is known to be highly reliable. In the hypotheticals that Eve constructs for the smoking lesion problem, Cancer varies with Act in spite of the fact that the correlation between them is merely statistical. EDT’s hypotheticals respect too many correlations between variables; CDT’s hypotheticals respect too few.
When an EDT agent imagines behaving a certain away, she imagines that all of the correlations between her action and the environment persist—even where there is no mechanism underlying the persistence. This is what leads EDT agents to irrationally “manage the news,” as (Lewis:1981, [30](#bib.bib30), ) puts it.
This phenomenon is easiest to see in cases where the EDT agent is certain that she will take a particular action Spohn:1977 ([44](#bib.bib44)). Consider, for example, a simple dilemma where an agent has to choose whether to take $1 or $100. If Eve is certain that she’s going to take $1, then she literally cannot imagine taking $100—EDT agents condition, they don’t counterfact. Thus, she takes $1.
The standard defense of EDT here is that it’s unrealistic to imagine Eve being completely certain about what action she’s going to take. At the very least, Eve should allow that a wayward cosmic ray could strike her brain at a key moment and cause her to take the $100. However, adding uncertainty does not fix the core problem. Consider:
######
Dilemma 6 (The Cosmic Ray Problem).
An agent must choose whether to take $1 or $100. With vanishingly small probability, a cosmic ray will cause her to do the opposite of what she would have done otherwise. If she learns that she has been affected by a cosmic ray in this way, she will need to go to the hospital and pay $1,000 for a check-up. Should she take the $1, or the $100?
This hardly seems like it should be a difficult dilemma, but it proves quite troublesome for Eve. If, according to Eve’s world-model, she almost always takes $1, then it must be the case that whenever she takes $100, it’s because she’s been hit by cosmic rays. Taking $100 will then mean that she needs to go to the hospital, at a cost of $1,000. Knowing this, Eve takes the $1, for fear of cosmic rays. And her fears are (therefore) correct! She only *does* take the $100 when she’s been hit by cosmic rays, and whenever that happens, she really does lose $900 on net.
EDT runs into trouble because its hypotheticals do not allow Eve to consider breaking any correlations that hold between her action and the world. CDT and FDT agents don’t fall into the same trap. If you tell Carl or Fiona that (by pure statistical happenstance) they only take $100 when hit by cosmic rays, then they will dismiss your warning and take the $100, and you will be revealed to be a liar. Carl and Fiona ignore (and therefore break) inconvenient statistical correlations between their actions and the environment. Eve lacks that capacity, to her detriment.
This failure makes EDT agents systematically exploitable. Consider the following dilemma, due to (Soares:2015:toward, [43](#bib.bib43), ).
######
Dilemma 7 (The XOR Blackmail).
An agent has been alerted to a rumor that her house has a terrible termite infestation that would cost her $1,000,000 in damages. She doesn’t know whether this rumor is true. A greedy predictor with a strong reputation for honesty learns whether or not it’s true, and drafts a letter:
>
> I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will *not* pay me upon receiving this letter.
>
>
>
The predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true.131313To simplify exposition, we will assume here that the predictor is infallible. It’s a trivial exercise to swap in a fallible predictor, similar to the predictors we’ve considered in previous dilemmas. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?
The rational move is to refuse to pay. If the agent is the type of agent who pays, then the letter will always be good news, since it will only come when her house is termite-free. If she’s the type of agent who refuses, then the letter will always be bad news, since it will only come when she does have termites. But either way, the letter doesn’t affect whether she has termites.
Eve, however, responds to the letter by paying up. To evaluate paying, she conditions on Act=𝑝𝑎𝑦Act𝑝𝑎𝑦\text{{Act}}=\text{{pay}}Act = pay, which (given that she has received the letter) sends her probability on termites to 0. To imagine a world in which she does not pay, she conditions her world-model on Act=𝑟𝑒𝑓𝑢𝑠𝑒Act𝑟𝑒𝑓𝑢𝑠𝑒\text{{Act}}=\text{{refuse}}Act = refuse, which (given that she has received the letter) sends the probability of termites to 1. She prefers the former hypothetical, and so she pays. It follows that any sufficiently competent predictor who knows Eve has a reliable way to extract money from her, simply by presenting her with cleverly-crafted items of good news.
The XOR blackmail problem differs from the smoking lesion problem on a few counts. First and foremost, this is a problem where the tickle defense of (Eells:1984, [15](#bib.bib15), ) doesn’t help Eve in the slightest—she pays up even if she has perfect self-knowledge. She also can’t be saved by ratification—she pays up even if she knows she is going to pay up. The only way to keep Eve from paying blackmailers in this scenario is to push her probability PE(Act=𝑝𝑎𝑦)superscript𝑃𝐸Act𝑝𝑎𝑦P^{\mathit{E}}{}(\text{{Act}}=\text{{pay}})italic\_P start\_POSTSUPERSCRIPT italic\_E end\_POSTSUPERSCRIPT ( Act = pay ) to zero, so that she literally cannot imagine paying. If EDT agents can only reliably achieve good outcomes when we’re kind enough to identify the best action ourselves and indoctrinate the agent into believing she’ll choose that, then as (Joyce:2007, [25](#bib.bib25), ) points out, EDT cannot reasonably claim to be the correct theory of rational choice.
Carl and Fiona, unlike Eve, refuse to pay in dilemma [7](#Thmdilemma7 "Dilemma 7 (The XOR Blackmail). ‣ 7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"). It is worth noting, however, that they refuse for very different reasons.
Carl refuses because the variable Infestation representing the infestation is causally upstream from Act in his world-model (for which the graph is given in figure [3(d)](#S7.F3.sf4 "3(d) ‣ 7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")). Write p=PC(Infestation=𝑡𝑒𝑟𝑚𝑖𝑡𝑒𝑠)𝑝superscript𝑃𝐶Infestation𝑡𝑒𝑟𝑚𝑖𝑡𝑒𝑠{p=P^{\mathit{C}}(\text{{Infestation}}=\text{{termites}})}italic\_p = italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ( Infestation = termites ) for his prior probability that his house is infested. According to Carl’s hypotheticals, p𝑝pitalic\_p does not change with Act, so there isn’t any point in paying.
(c) A causal graph GCsuperscript𝐺𝐶G^{\mathit{C}}{}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for Carl in the XOR blackmail problem.
.46
InfestationPredictorActObsV
.46
Infestationfdt(PF¯,GF¯,𝑙𝑒𝑡𝑡𝑒𝑟)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑙𝑒𝑡𝑡𝑒𝑟\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{letter}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , letter )fdt(PF¯,GF¯,∅)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{$\emptyset$}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , ∅ )PredictorActV
(c) A causal graph GCsuperscript𝐺𝐶G^{\mathit{C}}{}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for Carl in the XOR blackmail problem.
(d) A causal graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for Fiona in the same problem, with the point of intervention illustrated as in the case where Fiona sees the letter.
What about Fiona? Write (PF,GF)superscript𝑃𝐹superscript𝐺𝐹(P^{\mathit{F}},G^{\mathit{F}})( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) for her world-model, with GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT given by figure [3(d)](#S7.F3.sf4 "3(d) ‣ 7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality") and PFsuperscript𝑃𝐹P^{\mathit{F}}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT defined in the obvious way. From GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT, we can see that according to Fiona’s hypotheticals, changing FDT(PF,GF,𝑙𝑒𝑡𝑡𝑒𝑟)FDTsuperscript𝑃𝐹superscript𝐺𝐹𝑙𝑒𝑡𝑡𝑒𝑟\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}},\text{{letter}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , letter ) changes what Fiona would do if she received the letter, *and* changes the probability that she sees the letter in the first place. It does not, however, change her probability that her house is infested; so she refuses to pay (as you may verify).
The actions of Carl and Fiona are the same in this dilemma, and their beliefs about the actual world are similar, but their *hypotheticals* are very different. If Fiona has termites and sees the letter, and you ask her what would have happened if she had paid, she will consult her 𝚍𝚘(fdt(PF¯,GF¯,𝑙𝑒𝑡𝑡𝑒𝑟)=𝑝𝑎𝑦)𝚍𝚘fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑙𝑒𝑡𝑡𝑒𝑟𝑝𝑎𝑦\mathtt{do}(\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{letter}})=\text{{pay}})typewriter\_do ( fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , letter ) = pay ) hypothetical, and tell you that in that case the predictor would have predicted differently and would not have sent the letter, so she would have been running FDT(PF,GF,∅)FDTsuperscript𝑃𝐹superscript𝐺𝐹\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}},\emptyset)roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , ∅ ) instead. By contrast, if you ask Carl what would have happened if he had paid up, he will consult his 𝚍𝚘(Act=𝑝𝑎𝑦)𝚍𝚘Act𝑝𝑎𝑦\mathtt{do}(\text{{Act}}=\text{{pay}})typewriter\_do ( Act = pay ) hypothetical, and report back that he would still have gotten the letter, the house would still have been infested, and he would have lost money. In other words, Carl’s hypothetical would say that the predictor erred—despite the fact that the predictor is inerrant, *ex hypothesi*. This hints at the reason why CDT fails in other dilemmas.
8 Diagnosing CDT: Impossible Interventions
-------------------------------------------
EDT agents fail because they cannot imagine breaking correlations between their action and their environment. Meanwhile, CDT agents fail because they imagine breaking *too many* correlations between action and environment.
Imagine Carl facing Newcomb’s problem against an absolutely perfect predictor. We confront Carl as he walks away, a paltry $1,000 in hand, and ask him what would have happened if he had taken one box. To answer, he will use CDT hypotheticals. He will envision his action changing, and he will envision everything that depends causally on his action changing with it, while all else is held constant. As such, he will envision a scenario in which box B stays empty, but box A is left behind. He will say: “Then I would have left empty-handed; I sure am glad I two-boxed.”
Contrast this with what *we* know would have happened if Carl had one-boxed (if CDT were the predictably-onebox sort of decision theory). Box B would have been full, and Carl would have been rich.
Carl answers the “what if?” question with a description of a hypothetical scenario in which a perfect predictor made an incorrect prediction. Carl’s response to the XOR blackmail problem was similarly odd. This is the flaw in CDT hypotheticals: CDT agents imagine that their actions are uncorrelated with the behavior of environmental processes that implement the same decision procedure. According to Carl’s hypotheticals, known-to-be-perfect predictors inexplicably falter exactly when Carl makes his current decision. It is no wonder, then, that Carl runs into trouble in settings where the environment contains predictors or twins.
This flaw is more serious than is generally recognized. To see this, we will consider a rather cruel dilemma in which the predictor punishes all agents no matter what they choose, inspired by the “Death in Damascus” dilemma popularized by (Gibbard:1978, [20](#bib.bib20), ):
>
> Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘i am coming for you tomorrow.’ The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said ‘i have come for you.’
>
>
>
> ‘But I thought you would be looking for me in Damascus,’ said the man.
> ‘not at all,’ said Death ‘that is why i was surprised to see you yesterday. i knew that today i was to find you in aleppo.’
>
>
>
There are many different ways to turn this story into a decision problem. We choose the simplest version, copying Gibbard and Harper’s structure. We add a cost of $1,000 if the agent decides to flee Damascus, reasoning that most people prefer to spend their final night with their loved ones rather than on camelback.
######
Dilemma 8 (Death in Damascus).
Imagine a deterministic world where Death is known to be able to perfectly predict human behavior based on a detailed past observation of the world’s state. Death works from an appointment book, which lists combinations of people, days, and places. Each day, Death goes to collect the listed people at the listed places. If the listed person is at the corresponding place on that day, they die; otherwise, they survive (which they value at $1,000,000).
An agent encounters Death in Damascus and is told that Death is coming for her tomorrow. This agent knows that deciding to flee to Aleppo (at a cost of $1,000) means that Death will be in Aleppo tomorrow, whereas staying in Damascus means that Death will be in Damascus tomorrow. Should she stay, or flee?
The correct decision here is rather obvious. The $1,000,000 is a lost cause, but the agent can save $1,000 by staying in Damascus. Fiona recognizes this, and concludes that wherever she goes, Death will be. She doesn’t waste her final night fleeing.
CDT agents, in contrast, are sent into *conniptions* by this dilemma. As Gibbard and Harper put it, CDT in this situation is “unstable.” Carl bases his decisions on hypotheticals in which Death’s action is independent of his own action. This means that if he initially expects himself to stay in Damascus, then he will want to go to Aleppo—as though he expects Death to perfectly predict all of his behavior *except* for this final decision. By the same token, if he initially expects himself to flee to Aleppo, then he will want to stay in Damascus after all.
To formalize this line of thinking, we’ll let P1Csubscriptsuperscript𝑃𝐶1P^{\mathit{C}}\_{1}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT be the initial belief state of Carl when he faces the Death in Damascus dilemma. (We leave it to the reader to draw Carl’s graph GCsuperscript𝐺𝐶G^{\mathit{C}}italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT for this problem.) As noted by Gibbard and Harper, whatever epistemic state Carl occupies will be reflectively inconsistent. Assume that Carl thinks he is going to stay in Damascus, i.e., that P1C(𝑑𝑎𝑚𝑎𝑠𝑐𝑢𝑠)>P1C(𝑎𝑙𝑒𝑝𝑝𝑜)subscriptsuperscript𝑃𝐶1𝑑𝑎𝑚𝑎𝑠𝑐𝑢𝑠subscriptsuperscript𝑃𝐶1𝑎𝑙𝑒𝑝𝑝𝑜P^{\mathit{C}}\_{1}(\text{{damascus}})>P^{\mathit{C}}\_{1}(\text{{aleppo}})italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( damascus ) > italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( aleppo ). Then he will go to Aleppo, i.e., CDT(P1C,GC)=𝑎𝑙𝑒𝑝𝑝𝑜CDTsubscriptsuperscript𝑃𝐶1superscript𝐺𝐶𝑎𝑙𝑒𝑝𝑝𝑜\mathrm{CDT}(P^{\mathit{C}}\_{1},G^{\mathit{C}})=\text{{aleppo}}roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) = aleppo. Assuming Carl is smart enough to recognize his new travel plans, he now needs to update his beliefs to P2Csubscriptsuperscript𝑃𝐶2P^{\mathit{C}}\_{2}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT where P2C(𝑎𝑙𝑒𝑝𝑝𝑜)≈1subscriptsuperscript𝑃𝐶2𝑎𝑙𝑒𝑝𝑝𝑜1P^{\mathit{C}}\_{2}(\text{{aleppo}})\approx 1italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( aleppo ) ≈ 1. But then CDT(P2C,GC)=𝑑𝑎𝑚𝑎𝑠𝑐𝑢𝑠CDTsubscriptsuperscript𝑃𝐶2superscript𝐺𝐶𝑑𝑎𝑚𝑎𝑠𝑐𝑢𝑠\mathrm{CDT}(P^{\mathit{C}}\_{2},G^{\mathit{C}})=\text{{damascus}}roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) = damascus, so now Carl needs to update his beliefs again!
There are four ways for Carl to break out of this loop. First, he could hold to the delusion that he has chosen Damascus even as he rides to Aleppo (or vice versa). Second, he could sit there updating his beliefs all day long, constructing P3Csubscriptsuperscript𝑃𝐶3P^{\mathit{C}}\_{3}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT and P4Csubscriptsuperscript𝑃𝐶4P^{\mathit{C}}\_{4}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT and P5Csubscriptsuperscript𝑃𝐶5P^{\mathit{C}}\_{5}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT and never solving equation ([3](#S5.E3 "3 ‣ 5 Formalizing EDT, CDT, and FDT ‣ Functional Decision Theory: A New Theory of Instrumental Rationality")) for any of them, until Death comes by to pick him up in the morning. Third, he could give up after n𝑛nitalic\_n iterations and follow CDT(PnC,GC)CDTsubscriptsuperscript𝑃𝐶𝑛superscript𝐺𝐶\mathrm{CDT}(P^{\mathit{C}}\_{n},G^{\mathit{C}})roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ) to either Damascus or Aleppo, resigned to failure. Or fourth, he could use a source of pseudorandomness that he himself cannot predict (but which Death can predict, *ex hypothesi*), and get himself into a “ratified” belief state PRCsubscriptsuperscript𝑃𝐶𝑅P^{\mathit{C}}\_{R}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT where he is uncertain about what he’s going to do, but which is consistent under reflection, as suggested by (Arntzenius:2008, [5](#bib.bib5), ).
(Joyce:2012, [26](#bib.bib26), ) describes and endorses a ratification procedure in keeping with the fourth option. Briefly, this procedure leaves Carl in a state of knowledge where he is indifferent between all actions that have positive subjective probability, at those probabilities. In this case, he ends up assigning 50.05% probability to damascus and 49.95% probability to aleppo, because that is the point where the extra utility he expects to receive from staying in Damascus exactly balances out his extra subjective probability that Death will be there.
Regardless, if and when Carl finally does pick a belief state P∗Csubscriptsuperscript𝑃𝐶∗P^{\mathit{C}}\_{\ast}italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ∗ end\_POSTSUBSCRIPT and run CDT(P∗C,GC)CDTsubscriptsuperscript𝑃𝐶∗superscript𝐺𝐶\mathrm{CDT}(P^{\mathit{C}}\_{\ast},G^{\mathit{C}})roman\_CDT ( italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ∗ end\_POSTSUBSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT ), it will have him weigh his options by imagining hypothetical scenarios in which Death’s location is uncorrelated with his action.141414If Carl uses the ratification procedure of (Joyce:2012, [26](#bib.bib26), ), then he actually finds the *set* of actions a𝑎aitalic\_a which maximize 𝔼(V∣𝚍𝚘(a))𝔼conditionalV𝚍𝚘𝑎\mathbb{E}(\text{{V}}\mid\mathtt{do}(a))blackboard\_E ( V ∣ typewriter\_do ( italic\_a ) ), and uses his pseudorandomness source to sample from among those, with probabilities proportional to P⋆C(a)subscriptsuperscript𝑃𝐶⋆𝑎P^{\mathit{C}}\_{\star}(a)italic\_P start\_POSTSUPERSCRIPT italic\_C end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ⋆ end\_POSTSUBSCRIPT ( italic\_a )—i.e., he will use a source of pseudorandomness to go to Aleppo 49.95% of the time and Damascus 50.05% of the time, at which point he will die with certainty.
There are two serious problems with Carl’s reasoning here, if the aim is to end up with as much utility as possible. First, with ≈\approx≈50% probability, Carl will flee to Aleppo and lose the equivalent of $1,000 in easy money. Second, Carl will *behave as if* he has at least a 50% subjective probability of survival.
To see why the latter is a serious problem, let us suppose that we wait until Carl finishes settling on some belief, and we then offer him (for the price of $1) a coin that is *truly* random—in the sense that if he bases his decision on a toss of this coin, he has a 50% chance of thwarting Death.
Clearly, if Carl never breaks his infinite loop, he won’t be able to take advantage of the coin. If he instead gives in to delusion, or if he gives up after n𝑛nitalic\_n iterations, then he now has some subjective probability p𝑝pitalic\_p (the exact value won’t matter) that he is going to stay in Damascus. Because he knows that Death will be where he is, he also assigns p𝑝pitalic\_p probability to Death being in Damascus. But he *acts as if* these variables are independent, which means that when he imagines staying in Damascus, he calculates expected utility *as if* there were an *independent* chance q𝑞qitalic\_q of Death being in Damscus, with q𝑞qitalic\_q “coincidentally” equal to p𝑝pitalic\_p. Similarly, when he imagines fleeing to Aleppo, he calculates expected utility as if there is an independent (1−q)1𝑞(1-q)( 1 - italic\_q ) chance of Death being there there. Carl therefore acts as if his chances of survival are max(q,1−q)≥0.5𝑞1𝑞0.5\max(q,1-q)\geq 0.5roman\_max ( italic\_q , 1 - italic\_q ) ≥ 0.5. By contrast, when he imagines buying your truly random coin, he calculates that it would give him a 50% chance of survival—which, according to CDT’s hypotheticals, isn’t worth the dollar. “No, thanks,” Carl says. “After thinking about my options, I’ve decided that I’d rather take my chances with certain death.”
CDT with pseudorandomization à la (Arntzenius:2008, [5](#bib.bib5), ) or (Joyce:2012, [26](#bib.bib26), ) performs no better, although we omit the technical details here. \mkbibparensThe matter is discussed in more depth by (Ahmed:2014b, [3](#bib.bib3), ). In Joyce’s model, Carl chooses as if his pseudorandomness source were uncorrelated with Death’s location, and thus he calculates utility as if tossing his pseudorandom coin gets him $500,500 in expectation. This is exactly the utility he would expect to get by tossing the truly random coin, so he still concludes that the coin isn’t worth a dollar.
This behavior is *wildly* irrational. It’s one thing to fail to recognize when resistance is futile; it’s quite another to pass up a 50% chance of an outcome worth $1,000,000 at a cost of $1. (Ahmed:2014, [4](#bib.bib4), ) has raised similar concerns. In fact, it is possible to leverage these irrationalities to turn CDT into a money-pump. Developing that machinery is outside the scope of this paper; our focus here is on *why* CDT goes wrong, and how FDT avoids this species of error.
Carl never *believes* that he can escape Death. He knows that Death is a perfect predictor, and that he is doomed no matter what he does. And Carl is no happier to waste his final hours fleeing to Aleppo than Fiona would be. The reason why Carl fails where Fiona succeeds is in the hypotheticals that he consults when he weighs his options.
When it comes to beliefs of fact, Carl recognizes that Death will meet him wherever he goes. He can easily predict that, conditional on fleeing to Aleppo, he will die in Aleppo, and vice versa. He knows that Death can predict him even if he uses a source of pseudorandomness, and he can predict that, conditional upon using the truly random coin, he has a 50% chance of survival. But he does not base his decisions on conditionals. He always and only chooses the action that corresponds to the highest expected utility *in a CDT hypothetical*—and according to CDT, he should rely on hypotheticals in which Death’s location isn’t correlated with his action. According to *those* hypotheticals, the coin is worthless. This mismatch between Carl’s beliefs and his decision-theoretic hypotheticals provides further reason to suspect that CDT is not the correct theory of rational choice.
Causal decision theorists have insisted that the Death in Damascus problem is one where no matter what action the agent takes, it will turn out to have been the wrong one. If they stay in Damascus, they should have fled to Aleppo; if they flee, they should have stayed Gibbard:1978 ([20](#bib.bib20)). In their hypotheticals, they imagine their location changing while Death’s location remains the same. Upon finding Death in Aleppo, they therefore reason that they “could have” survived had they but stayed in Damascus; or if they encounter Death in Damascus, they feel they “could have” survived by fleeing to Aleppo.
Functional decision theorists imagine the situation differently. FDT agents observe that Death’s location causally depends on an accurate prediction in Death’s book, which was made via an accurate prediction of the agent’s decision procedure. When they imagine having made a different choice, they imagine a world in which Death’s book has different writing in it. In this way their imagined hypotheticals avoid tossing out decision-relevant information about the world.
For that reason, we disagree with the analysis of Gibbard:1978 ([20](#bib.bib20)) when they say that “any reason the doomed man has for thinking he will go to Aleppo is a reason for thinking he would live longer if he stayed in Damascus, and any reason he has for thinking he will stay in Damascus is reason for thinking he would live longer if he went to Aleppo. Thinking he will do one is reason for doing the other.” This instability only arises if we imagine changing our decision without imagining Death’s prediction changing to match—a strange thing to imagine, if the task at hand is to rationally respond to an environment containing accurate predictors.
When Fiona faces this dilemma, she quickly accepts that if she flees, Death will meet her in Aleppo. The reason she ends up with greater utility than Carl is that the mental operations she performs to construct her hypotheticals track the real-world dependence relations that she believes exist—the correspondence between predictors and the things they predict, for example. CDT hypotheticals neglect some of these dependencies, and Carl pays the price.
9 The Global Perspective
-------------------------
(Gibbard:1978, [20](#bib.bib20), ) and (Lewis:1981, [30](#bib.bib30), ) have argued that Newcomblike problems unfairly punish rational behavior. CDT agents cannot help being CDT agents; and just as we can construct dilemmas that doom CDT or EDT agents, so we can construct dilemmas that doom FDT agents. Consider dilemmas in which a mind reader gives $1,000,000 to an agent iff she predicts they would two-box in Newcomb’s problem, or iff she predicts they follow EDT.
We grant that it is possible to punish agents for using a specific decision procedure, or to design one decision problem that punishes an agent for rational behavior in a different decision problem. In those cases, no decision theory is safe. CDT performs worse that FDT in the decision problem where agents are punished for using CDT, but that hardly tells us which theory is better for *making decisions*. Similarly, CDT performs poorly in the game “punish everyone who takes $100 in the cosmic ray problem,” but this doesn’t constitute evidence against CDT. No one decision theory outperforms all others in all settings—dominance of that form is impossible.
Yet FDT does appear to be superior to CDT and EDT in all dilemmas where the agent’s beliefs are accurate and the outcome depends only on the agent’s behavior in the dilemma at hand. Informally, we call these sorts of problems “fair problems.” By this standard, Newcomb’s problem is fair; Newcomb’s predictor punishes and rewards agents only based on their actions. If the predictor scanned Carl’s brain, deduced that he followed causal decision theory, and punished him *on those grounds*, then functional decision theorists would agree that Carl was being unfairly treated. But the predictor does no such thing; she merely predicts whether or not Carl will take one box, and responds accordingly. Or, in sound-bite terms: Newcomb’s predictor doesn’t punish rational agents; she punishes *two-boxers*. She doesn’t care how or why you one-box, so long as you one-box predictably.
There is no perfect decision theory for all possible scenarios, but there may be a general-purpose decision theory that matches or outperforms all rivals in fair dilemmas, if a satisfactory notion of “fairness” can be formalized.151515There are some immediate technical obstacles to precisely articulating this notion of fairness. Imagine I have a copy of Fiona, and I punish anyone who takes the same action as the copy. Fiona will always lose at this game, whereas Carl and Eve might win. Intuitively, this problem is unfair to Fiona, and we should compare her performance to Carl’s not on the “act differently from Fiona” game, but on the analogous “act differently from Carl” game. It remains unclear how to transform a problem that’s unfair to one decision theory into an analogous one that is unfair to a different one (if an analog exists) in a reasonably principled and general way. FDT’s initial success on this front seems promising, and the authors have not been able to construct an intuitively fair dilemma where FDT loses to a rival decision theory.
As a final illustration of FDT’s ability to outperform CDT and EDT simultaneously in surprising ways, consider a more general blackmail problem described by (Soares:2015:toward, [43](#bib.bib43), ):
######
Dilemma 9 (The Mechanical Blackmail).
A blackmailer has a nasty piece of information which incriminates both the blackmailer and the agent. She has written a computer program which, if run, will publish it on the internet, costing $1,000,000 in damages to both of them. If the program is run, the only way it can be stopped is for the agent to wire the blackmailer $1,000 within 24 hours—the blackmailer will not be able to stop the program once it is running. The blackmailer would like the $1,000, but doesn’t want to risk incriminating herself, so she only runs the program if she is quite sure that the agent will pay up. She is also a perfect predictor of the agent, and she runs the program (which, when run, automatically notifies her via a blackmail letter) iff she predicts that she would pay upon receiving the blackmail. Imagine that the agent receives the blackmail letter. Should she wire $1,000 to the blackmailer?
Eve and Carl would pay up, of course. Given that the program is already running, they reason that they must choose between losing $1,000 to the blackmailer or $1,000,000 when the news gets out. In that light, paying up is clearly the lesser of the two evils. The blackmailer, knowing that Carl and Eve reason in this fashion, knows that it is safe to blackmail them. In this way, CDT and EDT give blackmailers incentives to exploit agents that follow the prescriptions of those theories.
If Eve or Carl had been the sort of agents who refuse to pay in this setting, then they would have been better off—and indeed, Carl and Eve would both pay for an opportunity to make a binding precommitment to refuse payment. What about Fiona?
In the case where Fiona *has* received the message, Fiona refuses to pay up, for the same reason that she takes one box in the transparent Newcomb problem. Write (PF,GF)superscript𝑃𝐹superscript𝐺𝐹(P^{\mathit{F}},G^{\mathit{F}})( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT ) for Fiona’s world-model when facing a mechanical blackmail. GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT is given in figure [4](#S9.F4 "Figure 4 ‣ 9 The Global Perspective ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"); PFsuperscript𝑃𝐹P^{\mathit{F}}italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT is defined in the obvious way. Upon receiving the blackmail, Fiona solves the equation FDT(PF,GF,𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙)FDTsuperscript𝑃𝐹superscript𝐺𝐹𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙\mathrm{FDT}(P^{\mathit{F}},G^{\mathit{F}},\text{{blackmail}})roman\_FDT ( italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT , blackmail ), which requires consulting two hypotheticals, one in which fdt(PF¯,GF¯,𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{blackmail}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , blackmail ) is set (by intervention) to pay, and one where it is set to refuse. In the first one, her probability of losing $1,000,000 if blackmailed goes to ≈\approx≈0 and her probability of being blackmailed goes to ≈\approx≈1. In the second one, her probability of losing $1,000,000 if blackmailed goes to ≈\approx≈1, but her probability of being blackmailed goes to ≈\approx≈0!
fdt(PF¯,GF¯,𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{blackmail}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , blackmail )fdt(PF¯,GF¯,∅)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{$\emptyset$}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , ∅ )BlackmailerActV
Figure 4: A graph GFsuperscript𝐺𝐹G^{\mathit{F}}italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT for Fiona in the mechanical blackmail problem. The point of intervention is drawn for the case where she observes the blackmail. Blackmailer determines whether fdt(PF¯,GF¯,𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹𝑏𝑙𝑎𝑐𝑘𝑚𝑎𝑖𝑙\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{blackmail}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , blackmail ) or fdt(PF¯,GF¯,∅)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{$\emptyset$}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , ∅ ) gets to set Act.
According to FDT, when Fiona sees the blackmail and imagines refusing to pay, she should imagine a hypothetical world in which FDT is the kind of decision theory that refuses—which means that she should imagine a world in which she was never blackmailed in the first place. Because she chooses her actions *entirely* by comparing the hypotheticals that FDT says to consider, Fiona concludes that refusing to pay is better than paying. The blackmailer, recognizing this, does not attempt to blackmail Fiona, and Fiona walks away unscathed.
At first glance, it may seem like Fiona is ignoring information—shouldn’t she update on the fact that she has seen the blackmail?
inline,color=yellow]Footnote: This apparent “lack of updating” is the source of the name of Wei Dai’s “Updateless Decision Theory” Dai:2009 ([12](#bib.bib12)), an early variant of functional decision theory. Another early variant, the “Timeless Decision Theory” of (Yudkowsky:2010:TDT, [47](#bib.bib47), ), prescribes both conditioning on evidence *and* changing the place where one intervenes; this creates a kind of “double update” which proves harmful in practice.
Fiona does change her behavior in response to evidence, though—by changing *the place in her graph where she intervenes*. If she had not been blackmailed, she would have intervened on a different node fdt(PF¯,GF¯,∅)fdt¯superscript𝑃𝐹¯superscript𝐺𝐹\text{{fdt}}(\underline{P^{\mathit{F}}},\underline{G^{\mathit{F}}},\text{{$\emptyset$}})fdt ( under¯ start\_ARG italic\_P start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , under¯ start\_ARG italic\_G start\_POSTSUPERSCRIPT italic\_F end\_POSTSUPERSCRIPT end\_ARG , ∅ ) representing her belief state given that she was not blackmailed. FDT says that agents should entertain the hypothesis that, if their action were different, they might have *made different observations*.161616As mentioned earlier, the author’s preferred formulation of FDT actually intervenes on the node FDT(−)FDT\mathrm{FDT}(-)roman\_FDT ( - ) to choose not an action but a *policy* which maps inputs to actions, to which the agent then applies her inputs in order to select an action. The difference only matters in multi-agent dilemmas so far as we can tell, so we have set that distinction aside in this paper for ease of exposition.
If Fiona *did* receive the blackmail, she *would* reason, “Paying corresponds to a world where I lose $1,000; refusing corresponds to a world where I never get blackmailed. The latter looks better, so I refuse.” As such, she never gets blackmailed—her counterfactual reasoning is proven correct.
Fiona plays the same strategy even if the blackmailer is an imperfect predictor. Assume that with probability 0.0009, the blackmailer wrongly predicts that an agent will pay. Fiona therefore imagines that if she pays upon receiving blackmail then she always gets blackmailed and always loses $1,000, whereas if she refuses to pay then she is blackmailed 0.09% of the time and loses $900 in expectation. As such, she refuses to pay, even when holding the blackmail letter, even though she *knows* that the blackmailer predicted wrongly—because it’s advantageous to be *the kind of agent* that makes such decisions.
If instead the chance of error is 0.011%, then she pays, as this is again the action that tends to make an agent rich. When weighing actions, Fiona simply imagines hypotheticals corresponding to those actions, and takes the action that corresponds to the hypothetical with higher expected utility—even if that means imagining worlds in which her observations were different, and even if that means achieving low utility in the world corresponding to her actual observations.
Upon holding the blackmail letter, would Fiona have a desire to self-modify and temporarily follow CDT? No! For according to her hypotheticals, if FDT was the sort of algorithm that connived to pay up upon receiving blackmail, it would be a tempting blackmail target, and she would be blackmailed more often, and tend to lose utility on net. Functional decision theory is, so far as we know, reflectively stable in fair problems.
CDT agents imagine hypothetical worlds in which, if only they had acted differently, they would have thwarted all predictions. FDT agents imagine hypothetical words in which, if their action were different, they would have seen different things. Both hypotheticals contain impossibilities—CDT agents always *actually* end up in worlds where the predictions were accurate, and FDT agents always *actually* end up in worlds where their observations are consistent with their actions. The only difference is a practical one: while Carl is stuck complaining that predictors “punish rationality,” Fiona is thwarting blackmailers and getting rich.
Still, we expect some lingering resistance to the notion that one should (in real life!) refuse to pay the blackmailer upon receiving a mechanical blackmail letter, or refrain from two-boxing upon observing two full boxes in the transparent Newcomb problem. These are odd conclusions. It might even be argued that sufficiently odd behavior provides evidence that what FDT agents see as “rational” diverges from what humans see as “rational.” And given enough divergence of that sort, we might be justified in predicting that FDT will systematically fail to get the most utility in some as-yet-unknown fair test.
One way of giving teeth to the notion that FDT is “odd” might be to argue that FDT agents have mistaken views of causality. Fiona acts as though she can control abstract mathematical functions and events that happened in her past. On this view, the dilemmas we have discussed in this paper reward FDT’s particular delusions, but they are delusions. We should therefore be skeptical that FDT’s utility-reaping behavior in the hand-picked dilemmas in this paper are reflective of utility-reaping behavior across the board.
We respond that FDT agents suffer from no delusion; or, at least, they are no more deluded than their counterpart CDT agents. Consider CDT’s prescriptions in deterministic dilemmas. The causal decision theorist argues that although there is only one action a deterministic physical agent can truly take, the sorts of agents that do well are the ones that *imagine* hypothetical worlds across which their action differs, and take the action corresponding to the hypothetical with highest expected utility. The functional decision theorist is in a similar situation, arguing that although there is only one output a given function can have on a given input, an agent will do better if she *imagines* hypothetical worlds across which the output of her decision function varies. Any principled objection to the idea that FDT agents can “control” their decision type will also imply that (deterministic) CDT agents cannot “control” their decision token. Since CDT and FDT face analogous difficulties in making sense of the idea of “control,” this factor cannot help us decide between the theories.
More generally, we have found that FDT’s prescriptions look less odd when we think of decision-making as a natural phenomenon like any other, and require that good decisions take into account agents’ beliefs and preferences about the world as a whole (subjunctive dependencies included), not just their beliefs and preferences about events causally downstream from the decision. In a sense, FDT’s prescription is that an agent should choose her action from a global perspective—asking herself “What would the entire universe look like if the FDT algorithm behaved differently?” and choosing the action that corresponds to the best imagined universe. If that means imagining a universe in which box B is empty (even though she can clearly see that it is full), or imagining a universe in which she was probably never blackmailed in the first place (even as she’s holding the blackmail letter in her hands), then so be it.
In section [7](#S7 "7 Diagnosing EDT: Conditionals as Counterfactuals ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), we saw that EDT agents fail to imagine that they can break merely statistical correlations between their actions and their environment. This puts Eve at the mercy of anyone who can deliberately produce correlations between Eve’s actions and items of bad news. In section [8](#S8 "8 Diagnosing CDT: Impossible Interventions ‣ Functional Decision Theory: A New Theory of Instrumental Rationality"), we saw that CDT agents fail to imagine that they can control logically necessary correlations between their actions and their environment—causing Carl to make erratic and self-defeating decisions in the presence of predictors. EDT respects too many action-environment correlations, while CDT respects too few.
FDT, we claim, gets the balance right. An agent who weighs her options by imagining worlds where her decision function has a different output, but where logical, mathematical, nomic, causal, etc. constraints are otherwise respected, is an agent with the optimal predisposition for whatever fair dilemma she encounters.
10 Conclusion
--------------
Functional decision theory demonstrates that, contra (Gibbard:1978, [20](#bib.bib20), ), a single general-purpose normative rule can prescribe one-boxing in Newcomb’s problem and smoking in the smoking lesion problem. Unlike EDT agents, FDT agents don’t manage the news or pay to avoid information. Unlike CDT agents, FDT agents can account for predictors and twins in stable, consistent, and utility-maximizing ways. And unlike both CDT agents and EDT agents, FDT agents will hitch a ride out of Parfit’s desert. FDT agents avoid making decisions they will immediately regret, and avoid giving other agents an incentive to extort them.
FDT achieves all of this with one elegant criterion, similar in form to CDT. We know of no fair dilemmas in which FDT systematically achieves lower utility than a rival theory; and it succeeds without our needing to make any ad-hoc emendations to the theory to accommodate certain classes of dilemmas, and without any need for complicated ratification procedures like those of (Jeffrey:1983, [23](#bib.bib23), ), (Eells:1984, [15](#bib.bib15), ), or (Joyce:2012, [26](#bib.bib26), ). Nor is there any need for FDT agents to adopt costly precommitment mechanisms: FDT agents always act as they would have precommitted to act.
Given a probabilistic model of the world and a theory of subjunctive dependencies saying how the universe would look different if (counterpossibly) the FDT function had different outputs, FDT yields a concrete step-by-step procedure that one can follow to reap these benefits, without committing oneself to any particular doctrine of free will, or the metaphysics of possible worlds, or what an agent “really” can and cannot do. We claim only that if an agent wants to get as much utility as she can, she should weigh different imaginary worlds in which her decision procedure has different logical outputs, and execute the action corresponding to the best such world.
Though we briefly outlined our interpretation of FDT as a naturalistic and “global” approach to decision-making, our primary case for FDT has simply been that it performs much better in various dilemmas. FDT works where CDT and EDT fail.
The fact that FDT works matters for real-world decision-making. Newcomblike problems are common in practice Lewis:1979 ([29](#bib.bib29)), and the issues underlying Newcomblike problems lie at the very heart of decision theory. This becomes more obvious when we consider decision-theoretic and game-theoretic dilemmas together: the basic problem of achieving mutual cooperation and group coordination, of making credible contracts and alliances while resisting threats and extortion, is a problem of reasoning in the face of (imperfect) predictors and other replicators of one’s behavior.
Consider a dilemma in which an agent needs to decide whether to spend $1,000 to access a voting booth, and will receive $1,000,000 iff she and nine other agents all independently choose to vote. If these ten agents have common knowledge that everyone in the group follows CDT, then none of them will bother to vote (unless, for some reason, they are already convinced at the outset that the other nine will vote). A member of a group of FDT agents, in contrast, will recognize that the group members’ decisions are not subjunctively independent. She will therefore vote—not out of altruism, but out of an awareness that the other members of the group will reason symmetrically and also vote.
Or consider dilemmas in which an agent needs to coordinate with herself over time. Perhaps the agent assigns $1,000,000 in utility to getting fit (which requires going to the gym a few dozen times), but she hates working out and assign $1,000 in utility to skipping her scheduled gym day. Carl the CDT agent has no way to force himself to stick to his guns, and will always avoid the gym, because he doesn’t think his action today is necessary or sufficient for getting his future selves to work out. Fiona the FDT agent, in contrast, can see that the decision she faces this week is similar to the decision she will face next week. If there are no relevant differences between her decision today versus tomorrow, then she should assume that she will continue going to the gym in the future if she goes today; and she should assume that she will continue skipping the gym if she skips today. Since the former is preferable, she goes to the gym.
The distinction between FDT, CDT, and EDT is particularly essential in the domain of computer science. Computer programs are easier to copy than human brains, and their behavior is often easier to predict. As artificial intelligence systems become more common, we should expect them to frequently enter Newcomblike dilemmas with each other. If an AI system’s programmers frequently base their behavior on predictions about how the system is going to behave, then they are likely to put the agent into Newcomblike dilemmas. If we want to avoid the kind of bizarre behavior EDT exhibits in the XOR blackmail problem or CDT exhibits in the twin prisoner’s dilemma, we will need to formalize practical alternatives.
The consistent ability of FDT agents to achieve the best outcomes in fair dilemmas suggests that FDT may be the correct normative theory of rational choice. Neither CDT nor EDT wins unambiguously as measured by utility achieved in Newcomblike problems, so it was natural to fall back to the notion of dominance in attempt to distinguish between them. However, with FDT in hand, the principle of utility maximization is revived—which is good news, given the weakness of the principle of dominance that we discussed in section [6](#S6 "6 Comparing the Three Decision Algorithms’ Behavior ‣ Functional Decision Theory: A New Theory of Instrumental Rationality").
The adjustments required to get from CDT to FDT, however, are not small. FDT is a genuinely novel theory, and raises a number of new practical and philosophical questions. Chief among these is whether we can find some method for deducing mathematical, logical, etc. subjunctive dependencies from observation and experiment, analogous to Pearl’s ([36](#bib.bib36)) method for deducing causal dependencies. And while we have made an informal case for FDT’s superiority by looking at particular well-known decision problems, we would prefer to have a general-purpose metric for comparing theories. A simple formal criterion for distinguishing fair dilemmas from unfair ones would allow us to more systematically test the claim that FDT outperforms the competition on all fair dilemmas.
We do not yet know whether FDT is optimal among decision theories, or even (on a formal level) what optimality consists in. Our initial results, however, suggest that FDT is an important advance over CDT and EDT. If there are better decision theories, we expect them to extend and enrich FDT’s approach, rather than returning to CDT or EDT’s characteristic methods for constructing hypotheticals and selecting actions.
color=yellow,inline]Acknowledgements: We acknowledge Rob Bensinger and Ben Levenstein for extensive help with the presentation of this paper, and for many valuable conversations. Functional decision theory has been developed in many parts by the co-authors and other contributors in a discussion ongoing since 2010 and earlier; we acknowledge Wei Dai, Vladimir Nesov, Vladimir Slepnev, and Patrick LaVictoire in particular for their contributions. |
a047ba97-50ae-4eca-9a61-83a33c8e3a89 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Self-deception: Hypocrisy or Akrasia?
Today's post, Self-deception: Hypocrisy or Akrasia? was originally published on March 26, 2007. A summary (from the LW wiki):
> If part of a person—for example, the verbal module—says it wants to become more rational, we can ally with that part even when weakness of will makes the person's actions otherwise; hypocrisy need not be assumed.
This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Chronophone Motivations, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
6325b75d-9e88-4a43-b46b-7e767f5503ff | trentmkelly/LessWrong-43k | LessWrong | Learn to write well BEFORE you have something worth saying
I’ve been reading a lot of trip reports lately. Trip reports are accounts people write about their experiences doing drugs, for the benefit of other people who might do those same drugs. I don’t take illegal drugs myself, but I like learning about other people’s intense experiences, and trip reports are little peeks into the extremes of human consciousness.
In some of these, people are really trying to communicate the power and revelation they had on a trip. They're trying to share what might be the most meaningful experience of their entire life.
Here’s another thing: almost all trip reports are kind of mediocre writing.
This is wildly judgmental but I stand by it. Here are some common things you see in them:
* Focusing on details specific to the situation that don’t matter to the reader. (Lengthy accounting of logistics, who the person was with at what time even when they’re not mentioned again, etc.)
* Sort of basic descriptions of phenomena and emotions: “I was very scared”. “I couldn’t stop thinking about it.”
* Cliches: “I was glad to be alive.” “It felt like I was in hell.” “It was an epic struggle.”
* Insights described in sort of classically-high-sounding abstractions. “I realized that the universe is made of love.” “Everything was nothing and time didn’t exist.” These statements are not explained, even if they clearly still mean a lot to the writer, and do not really communicate the force of whatever was going on there.
It's not, like, a crime to write a mediocre trip report. It's not necessarily even a problem. They’re not necessarily trying to convince you of anything. A lot of them are just what it says on the tin: recording some stuff that happened. I can’t criticize these for being bland, because that seems like trying to critique a cookbook for being insufficiently whimsical: they’re just sharing information.
(...Though you can still take that as a personal challenge; “is this the best prose it can be?” For instance, How to Cook and Eat i |
daed9cec-107e-47ca-b43d-de05258c86c2 | trentmkelly/LessWrong-43k | LessWrong | Open problems in activation engineering
Steering GPT-2-XL by adding an activation vector introduced
> activation engineering... techniques which steer models by modifying their activations. As a complement to prompt engineering and finetuning, activation engineering is a low-overhead way to steer models at runtime.
These results were recently complemented by Inference-Time Intervention: Eliciting Truthful Answers from a Language Model, which doubled TruthfulQA performance by adding a similarly computed activation vector to forward passes!
We think that activation engineering has a bunch of low-hanging fruit for steering and understanding models. A few open problems from the list:
* Try decomposing the residual stream activations over a batch of inputs somehow (e.g. PCA). Using the principal directions as activation addition directions, do they seem to capture something meaningful?
* Take a circuit studied from existing literature on GPT2, or find another one using ACDC. Targeting the nodes in these circuits, can you learn anything more about them and generally about how activation additions interact with circuits?
* What's the mechanism by which adding a steering vector with too large a coefficient breaks the model? (Credit: Thomas Kwa; see also @Ulisse Mini's initial data/explanation.)
If you want to work on activation engineering, come by the Slack server to coordinate research projects and propose new ideas. |
f9286d6b-2d4c-4f04-84d6-3f0446bd8f21 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey
1 Introduction
---------------
###
1.1 Motivation
Due to the powerful ability to learn different levels of general visual features, deep neural networks have been used as the basic structure to many computer vision applications such as object detection [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3)], semantic segmentation [[4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6)], image captioning [[7](#bib.bib7)], etc. The models trained from large-scale image datasets like ImageNet are widely used as the pre-trained models and fine-tuned for other tasks for two main reasons: (1) the parameters learned from large-scale diverse datasets provide a good starting point, therefore, networks training on other tasks can converge faster, (2) the network trained on large-scale datasets already learned the hierarchy features which can help to reduce over-fitting problem during the training of other tasks, especially when datasets of other tasks are small or training labels are scarce.
The performance of deep convolutional neural networks (ConvNets) greatly depends on their capability and the amount of training data. Different kinds of network architectures were developed to increase the capacity of network models, and larger and larger datasets were collected these days. Various networks including AlexNet [[8](#bib.bib8)], VGG [[9](#bib.bib9)], GoogLeNet [[10](#bib.bib10)], ResNet [[11](#bib.bib11)], and DenseNet [[12](#bib.bib12)] and large scale datasets such as ImageNet [[13](#bib.bib13)], OpenImage [[14](#bib.bib14)] have been proposed to train very deep ConvNets. With the sophisticated architectures and large-scale datasets, the performance of ConvNets keeps breaking the state-of-the-arts for many computer vision tasks [[15](#bib.bib15), [1](#bib.bib1), [4](#bib.bib4), [7](#bib.bib7), [16](#bib.bib16)].
However, collection and annotation of large-scale datasets are time-consuming and expensive. As one of the most widely used datasets for pre-training very deep 2D convolutional neural networks (2DConvNets), ImageNet [[13](#bib.bib13)] contains about 1.3 million labeled images covering 1,000 classes while each image is labeled by human workers with one class label. Compared to image datasets, collection and annotation of video datasets are more expensive due to the temporal dimension. The Kinetics dataset [[17](#bib.bib17)], which is mainly used to train ConvNets for video human action recognition, consists of 500,000 videos belonging to 600 categories and each video lasts around 10 seconds. It took many Amazon Turk workers a lot of time to collect and annotate a dataset at such a large scale.

Fig. 1: The general pipeline of self-supervised learning. The visual feature is learned through the process of training ConvNets to solve a pre-defined pretext task. After self-supervised pretext task training finished, the learned parameters serve as a pre-trained model and are transferred to other downstream computer vision tasks by fine-tuning. The performance on these downstream tasks is used to evaluate the quality of the learned features. During the knowledge transfer for downstream tasks, the general features from only the first several layers are unusually transferred to downstream tasks.
To avoid time-consuming and expensive data annotations, many self-supervised methods were proposed to learn visual features from large-scale unlabeled images or videos without using any human annotations. To learn visual features from unlabeled data, a popular solution is to propose various pretext tasks for networks to solve, while the networks can be trained by learning objective functions of the pretext tasks and the features are learned through this process. Various pretext tasks have been proposed for self-supervised learning including colorizing grayscale images [[18](#bib.bib18)], image inpainting [[19](#bib.bib19)], image jigsaw puzzle [[20](#bib.bib20)], etc. The pretext tasks share two common properties: (1) visual features of images or videos need to be captured by ConvNets to solve the pretext tasks, (2) pseudo labels for the pretext task can be automatically generated based on the attributes of images or videos.
The general pipeline of self-supervised learning is shown in Fig. [1](#S1.F1 "Fig. 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). During the self-supervised training phase, a pre-defined pretext task is designed for ConvNets to solve, and the pseudo labels for the pretext task are automatically generated based on some attributes of data. Then the ConvNet is trained to learn object functions of the pretext task. After the self-supervised training finished, the learned visual features can be further transferred to downstream tasks (especially when only relatively small data available) as pre-trained models to improve performance and overcome over-fitting. Generally, shallow layers capture general low-level features like edges, corners, and textures while deeper layers capture task related high-level features. Therefore, visual features from only the first several layers are transferred during the supervised downstream task training phase.
###
1.2 Term Definition
To make this survey easy to read, we first define the terms used in the remaining sections.
* Human-annotated label: Human-annotated labels refer to labels of data that are manually annotated by human workers.
* Pseudo label: Pseudo labels are automatically generated labels based on data attributes for pretext tasks.
* Pretext Task: Pretext tasks are pre-designed tasks for networks to solve, and visual features are learned by learning objective functions of pretext tasks.
* Downstream Task: Downstream tasks are computer vision applications that are used to evaluate the quality of features learned by self-supervised learning. These applications can greatly benefit from the pre-trained models when training data are scarce. In general, human-annotated labels are needed to solve the downstream tasks. However, in some applications, the downstream task can be the same as the pretext task without using any human-annotated labels.
* Supervised Learning: Supervised learning indicates learning methods using data with fine-grained human-annotated labels to train networks.
* Semi-supervised Learning: Semi-supervised learning refers to learning methods using a small amount of labeled data in conjunction with a large amount of unlabeled data.
* Weakly-supervised Learning: Weakly supervised learning refers to learning methods to learn with coarse-grained labels or inaccurate labels. The cost of obtaining weak supervision labels is generally much cheaper than fine-grained labels for supervised methods.
* Unsupervised Learning: Unsupervised learning refers to learning methods without using any human-annotated labels.
* Self-supervised Learning: Self-supervised learning is a subset of unsupervised learning methods. Self-supervised learning refers to learning methods in which ConvNets are explicitly trained with automatically generated labels. This review only focuses on self-supervised learning methods for visual feature learning with ConvNets in which the features can be transferred to multiple different computer vision tasks.
Since no human annotations are needed to generate pseudo labels during self-supervised training, very large-scale datasets can be used for self-supervised training. Trained with these pseudo labels, self-supervised methods achieved promising results and the gap with supervised methods in performance on downstream tasks becomes smaller. This paper provides a comprehensive survey of deep ConvNets-based self-supervised visual feature learning methods.
The key contributions of this paper are as follows:
* To the best of our knowledge, this is the first comprehensive survey about self-supervised visual feature learning with deep ConvNets which will be helpful for researchers in this field.
* An in-depth review of recently developed self-supervised learning methods and datasets.
* Quantitative performance analysis and comparison of the existing methods are provided.
* A set of possible future directions for self-supervised learning is pointed out.
2 Formulation of Different Learning Schemas
--------------------------------------------
Based on the training labels, visual feature learning methods can be grouped into the following four categories: supervised, semi-supervised, weakly supervised, and unsupervised. In this section, the four types of learning methods are compared and key terminologies are defined.
###
2.1 Supervised Learning Formulation
For supervised learning, given a dataset X, for each data Xi in X, there is a corresponding human-annotated label Yi. For a set of N labeled training data D={Xi}Ni=0, the training loss function is defined as:
| | | | |
| --- | --- | --- | --- |
| | loss(D)=minθ1NN∑i=1loss(Xi,Yi). | | (1) |
Trained with accurate human-annotated labels, the supervised learning methods obtained break-through results on different computer vision applications [[8](#bib.bib8), [4](#bib.bib4), [1](#bib.bib1), [16](#bib.bib16)]. However, data collection and annotation usually are expensive and may require special skills. Therefore, semi-supervised, weakly supervised, and unsupervised learning methods were proposed to reduce the cost.
###
2.2 Semi-Supervised Learning Formulation
For semi-supervised visual feature learning, given a small labeled dataset X and a large unlabeled dataset Z, for each data Xi in X, there is a corresponding human-annotated label Yi. For a set of N labeled training data D1={Xi}Ni=0 and M unlabeled training data D2={Zi}Mi=0, the training loss function is defined as:
| | | | |
| --- | --- | --- | --- |
| | loss(D1,D2)=minθ1NN∑i=1loss(Xi,Yi)+1MM∑i=1loss(Zi,R(Zi,X)), | | (2) |
where the R(Zi,X) is a task-specific function to represent the relation between each unlabeled training data Zi with the labeled dataset X.
###
2.3 Weakly Supervised Learning Formulation
For weakly supervised visual feature learning, given a dataset X, for each data Xi in X, there is a corresponding coarse-grained label Ci. For a set of N training data D={Xi}Ni=0, the training loss function is defined as:
| | | | |
| --- | --- | --- | --- |
| | loss(D)=minθ1NN∑i=1loss(Xi,Ci). | | (3) |
Since the cost of weak supervision is much lower than the fine-grained label for supervised methods, large-scale datasets are relatively easier to obtain. Recently, several papers proposed to learn image features from web collected images using hashtags as category labels [[21](#bib.bib21), [22](#bib.bib22)], and obtained very good performance [[21](#bib.bib21)].
###
2.4 Unsupervised Learning Formulation
Unsupervised learning refers to learning methods that do not need any human-annotated labels. This type of methods including fully unsupervised learning methods in which the methods do not need any labels at all, as well as self-supervised learning methods in which networks are explicitly trained with automatically generated pseudo labels without involving any human annotation.
####
2.4.1 Self-supervised Learning
Recently, many self-supervised learning methods for visual feature learning have been developed without using any human-annotated labels [[23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25), [26](#bib.bib26), [27](#bib.bib27), [28](#bib.bib28), [29](#bib.bib29), [30](#bib.bib30), [31](#bib.bib31), [32](#bib.bib32), [33](#bib.bib33), [34](#bib.bib34), [33](#bib.bib33), [35](#bib.bib35)]. Some papers refer to this type of learning methods as unsupervised learning [[36](#bib.bib36), [37](#bib.bib37), [38](#bib.bib38), [39](#bib.bib39), [40](#bib.bib40), [41](#bib.bib41), [42](#bib.bib42), [43](#bib.bib43), [44](#bib.bib44), [45](#bib.bib45), [46](#bib.bib46), [47](#bib.bib47), [48](#bib.bib48)].
Compared to supervised learning methods which require a data pair Xi and Yi while Yi is annotated by human labors, self-supervised learning also trained with data Xi along with its pseudo label Pi while Pi is automatically generated for a pre-defined pretext task without involving any human annotation. The pseudo label Pi can be generated by using attributes of images or videos such as the context of images [[20](#bib.bib20), [36](#bib.bib36), [18](#bib.bib18), [19](#bib.bib19)], or by traditional hand-designed methods [[49](#bib.bib49), [50](#bib.bib50), [51](#bib.bib51)].
Given a set of N training data D={Pi}Ni=0, the training loss function is defined as:
| | | | |
| --- | --- | --- | --- |
| | loss(D)=minθ1NN∑i=1loss(Xi,Pi). | | (4) |
As long as the pseudo labels P are automatically generated without involving human annotations, then the methods belong to self-supervised learning. Recently, self-supervised learning methods have achieved great progress. This paper focuses on the self-supervised learning methods that mainly designed for visual feature learning, while the features have the ability to be transferred to multiple visual tasks and to perform new tasks by learning from limited labeled data. This paper summarizes these self-supervised feature learning methods from different perspectives including network architectures, commonly used pretext tasks, datasets, and applications, etc.
3 Common Deep Network Architectures
------------------------------------
No matter the categories of learning methods, they share similar network architectures. This section reviews common architectures for learning both image and video features.
###
3.1 Architectures for Learning Image Features
Various 2DConvNets have been designed for image feature learning. Here, five milestone architectures for image feature learning including AlexNet [[8](#bib.bib8)], VGG [[9](#bib.bib9)], GoogLeNet [[10](#bib.bib10)], ResNet [[11](#bib.bib11)], and DenseNet [[12](#bib.bib12)] are reviewed.
####
3.1.1 AlexNet
AlexNet obtained a big improvement in the performance of image classification on ImageNet dataset compared to the previous state-of-the-art methods [[8](#bib.bib8)]. With the support of powerful GPUs, AlexNet which has 62.4 million parameters were trained on ImageNet with 1.3 million images. As shown in Fig. [2](#S3.F2 "Fig. 2 ‣ 3.1.1 AlexNet ‣ 3.1 Architectures for Learning Image Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the architecture of AlexNet has 8 layers in which 5 are convolutional layers and 3 are fully connected layers. The ReLU is applied after each convolutional layers. 94% of the network parameters come from the fully connected layers. With this scale of parameters, the network can easily be over-fitting. Therefore, different kinds of techniques are applied to avoid over-fitting problem including data augmentation, dropout, and normalization.

Fig. 2: The architecture of AlexNet [[8](#bib.bib8)]. The numbers indicate the number of channels of each feature map. Figure is reproduced based on AlexNet [[8](#bib.bib8)].
####
3.1.2 Vgg
VGG is proposed by Simonyan and Zisserman and won the first place for ILSVRC 2013 competition [[9](#bib.bib9)]. Simonyan and Zisserman proposed various depth of networks, while the 16-layer VGG is the most widely used one due to its moderate model size and its superior performance. The architecture of VGG-16 is shown in Fig. [3](#S3.F3 "Fig. 3 ‣ 3.1.2 VGG ‣ 3.1 Architectures for Learning Image Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). It has 16 convolutional layers belong to five convolution blocks. The main difference between VGG and AlexNet is that AlexNet has large convolution stride and large kernel size while all the convolution kernels in VGG have same small size (3×3) and small convolution stride (1×1). The large kernel size leads to too many parameters and large model size, while the large convolution stride may cause the network to miss some fine features in the lower layers. The smaller kernel size makes the training of very deep convolution neural network feasible while still reserving the fine-grained information in the network.

Fig. 3: The architecture of VGG [[9](#bib.bib9)]. Figure is reproduced based on VGG [[9](#bib.bib9)].
####
3.1.3 ResNet
VGG demonstrated that deeper networks are possible to obtain better performance. However, deeper networks are more difficult to train due to two problems: gradient vanishing and gradient explosion. ResNet is proposed by He et al. to use the skip connection in convolution blocks by sending the previous feature map to the next convolution block to overcome the gradient vanishing and gradient explosion [[11](#bib.bib11)]. The details of the skip connection are shown in Fig. [4](#S3.F4 "Fig. 4 ‣ 3.1.3 ResNet ‣ 3.1 Architectures for Learning Image Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). With the skip connection, training of very deep neural networks on GPUs becomes feasible.

Fig. 4: The architecture of Residual block [[11](#bib.bib11)]. The identity mapping can effectively reduce gradient vanishing and explosion which make the training of very deep network feasible. Figure is reproduced based on ResNet [[11](#bib.bib11)].
In ResNet [[11](#bib.bib11)], He et al. also evaluated networks with different depths for image classification. Due to its smaller model size and superior performance, ResNet is often used as the base network for other computer vision tasks. The convolution blocks with skip connection also widely used as the basic building blocks.
####
3.1.4 GoogLeNet

Fig. 5: The architecture of Inception block [[10](#bib.bib10)]. Figure is reproduced based on GoogLeNet [[10](#bib.bib10)].
GoogLeNet, a 22-layer deep network, is proposed by Szegedy et al. which won ILSVRC-2014 challenge with a top-5 test accuracy of 93.3% [[10](#bib.bib10)]. Compared to previous work that to build a deeper network, Szegedy et al. explored to build a wider network in which each layer has multiple parallel convolution layers. The basic block of GoogLeNet is inception block which consists of 4 parallel convolution layers with different kernel sizes and followed by 1×1 convolution for dimension reduction purpose. The architecture for the inception block of GoogLeNet is shown in Fig. [5](#S3.F5 "Fig. 5 ‣ 3.1.4 GoogLeNet ‣ 3.1 Architectures for Learning Image Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). With a carefully crafted design, they increased the depth and width of the network while keeping the computational cost constant.
####
3.1.5 DenseNet

Fig. 6: The architecture of the Dense Block proposed in DenseNet [[12](#bib.bib12)]. Figure is reproduced based on [[12](#bib.bib12)].
Most of the networks including AlexNet, VGG, and ResNet follow a hierarchy architecture. The images are fed to the network and features are extracted by different layers. The shallow layers extract low-level general features, while the deep layers extract high-level task-specific features [[52](#bib.bib52)]. However, when a network goes deeper, the deeper layers may suffer from memorizing the low-level features needed by the network to accomplish the task.
To alleviate this problem, Huang et al. proposed the dense connection to send all the features before a convolution block as the input to the next convolution block in the neural network [[12](#bib.bib12)]. As shown in Fig. [6](#S3.F6 "Fig. 6 ‣ 3.1.5 DenseNet ‣ 3.1 Architectures for Learning Image Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the output features of all the previous convolution blocks serve as the input to the current block. In this way, the shallower blocks focus on the low-level general features while the deeper blocks can focus on the high-level task-specific features.
###
3.2 Architectures for Learning Video Features
To extract both spatial and temporal information from videos, several architectures have been designed for video feature learning including 2DConvNet-based methods [[53](#bib.bib53)], 3DConvNet-based methods [[16](#bib.bib16)], and LSTM-based methods [[54](#bib.bib54)]. The 2DConvNet-based methods apply 2DConvNet on every single frame and the image features of multiple frames are fused as video features. The 3DConvNet-based methods employ 3D convolution operation to simultaneously extract both spatial and temporal features from multiple frames. The LSTM-based methods employ LSTM to model long term dynamics within a video. This section briefly summarizes these three types of architectures of video feature learning.
####
3.2.1 Two-Stream Network

Fig. 7: The general architecture of the two-stream network which including one spatial stream and one temporal stream. Figure is reproduced based on [[53](#bib.bib53)].
Videos generally are composed of various numbers of frames. To recognize actions in a video, networks are required to capture appearance features as well as temporal dynamics from frame sequences. As shown in Fig. [7](#S3.F7 "Fig. 7 ‣ 3.2.1 Two-Stream Network ‣ 3.2 Architectures for Learning Video Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), a two-stream 2DConvNet-based network is proposed by Simonyan and Zisserman for human action recognition, while using a 2DConvNet to capture spatial features from RGB stream and another 2DConvNet to capture temporal features from optical flow stream [[53](#bib.bib53)]. Optical flow encodes boundary of moving objects, therefore, the temporal stream ConvNet is relatively easier to capture the motion information within the frames.
Experiments showed that the fusion of the two streams can significantly improve action recognition accuracy. Later, this work has been extended to multi-stream network [[55](#bib.bib55), [56](#bib.bib56), [57](#bib.bib57), [58](#bib.bib58), [59](#bib.bib59)] to fuse features from different types of inputs such as dynamic images [[60](#bib.bib60)] and difference of frames [[61](#bib.bib61)].
####
3.2.2 Spatiotemporal Convolutional Neural Network
3D convolution operation was first proposed in 3DNet [[62](#bib.bib62)] for human action recognition. Compared to 2DConvNets which individually extract the spatial information of each frame and then fuse them together as video features, 3DConvNets are able to simultaneously extract both spatial and temporal features from multiple frames.
C3D [[16](#bib.bib16)] is a VGG-like 11-layer 3DConvNet designed for human action recognition. The network contains 8 convolutional layers, and 3 fully connected layers. All the kernels have the size of 3×3×3, the convolution stride is fixed to 1 pixel. Due to its powerful ability of simultaneously extracting both spatial and temporal features from multiple frames, the network achieved state-of-the-art on several video analysis tasks including human action recognition [[63](#bib.bib63)], action similarity labeling [[64](#bib.bib64)], scene classification [[65](#bib.bib65)], and object recognition in videos [[66](#bib.bib66)].
The input of C3D is 16 consecutive RGB frames where the appearance and temporal cues from 16-frame clips are extracted. However, the paper of long-term temporal convolutions (LTC) [[67](#bib.bib67)] argues that, for the long-lasting actions, 16 frames are insufficient to represent whole actions which last longer. Therefore, larger numbers of frames are employed to train 3DConvNets and achieved better performance than C3D [[67](#bib.bib67), [68](#bib.bib68)].
With the success of applying 3D convolution on video analysis tasks, various 3DConvNet architectures have been proposed [[69](#bib.bib69), [70](#bib.bib70), [71](#bib.bib71)]. Hara et al. proposed 3DResNet by replacing all the 2D convolution layers in ResNet with 3D convolution layers and showed comparable performance with the state-of-the-art performance on action recognition task on several datasets [[70](#bib.bib70)].
####
3.2.3 Recurrent Neural Network

Fig. 8: The architecture of long-term recurrent convolutional
networks (LRCN) [[54](#bib.bib54)]. LSTM is employed to model the long term temporal information within a frame sequence. Figure is reproduced based on [[54](#bib.bib54)].
Due to the ability to model the temporal dynamics within a sequence, recurrent neural networks (RNN) are often applied to videos as ordered frame sequences. Compared to standard RNN [[72](#bib.bib72)], long short term memory (LSTM) uses memory cells to store, modify, and access internal states, to better model the long-term temporal relationships within video frames [[73](#bib.bib73)].
Based on the advantage of the LSTM, Donahue et al. proposed long-term recurrent convolutional networks (LRCN) for human action recognition [[54](#bib.bib54)]. The framework of the LRCN is shown in Fig. [8](#S3.F8 "Fig. 8 ‣ 3.2.3 Recurrent Neural Network ‣ 3.2 Architectures for Learning Video Features ‣ 3 Common Deep Network Architectures ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). The LSTM is sequentially applied to the features extracted by ConvNets to model the temporal dynamics in the frame sequence. With the LSTM to model a video as frame sequences, this model is able to explicitly model the long-term temporal dynamics within a video. Later on, this model is extended to a deeper LSTM for action recognition [[74](#bib.bib74), [75](#bib.bib75)], video captioning [[76](#bib.bib76)], and gesture recognition tasks [[77](#bib.bib77)].
###
3.3 Summary of ConvNet Architectures
Deep ConvNets have demonstrated great potential in various computer vision tasks. And the visualization of the image and video features has shown that these networks truly learned meaningful features that required by the corresponding tasks [[52](#bib.bib52), [78](#bib.bib78), [79](#bib.bib79), [80](#bib.bib80)]. However, one common drawback is that these networks can be easily over-fit when training data are scarce since there are over millions of parameters in each network.
Take 3DResNet for an example, the performance of an 18-layer 3DResNet on UCF101 action recognition dataset [[63](#bib.bib63)] is 42% when trained from scratch. However, with a supervised pre-trained model on the large-scale Kinetics dataset (500,000 videos of 600 classes) with human-annotated class labels and then fine-tuned on UCF101 dataset, the performance can increase to 84%. Pre-trained models on large-scale datasets can speed up the training process and improve the performance on relatively small datasets. However, the cost of collecting and annotating large-scale datasets is very expensive and time-consuming.
In order to obtain pre-trained models from large-scale datasets without expensive human annotations, many self-supervised learning methods were proposed to learn image and video features from pre-designed pretext tasks. The next section describes the general pipeline of the self-supervised image and video feature learning.
4 Commonly used Pretext and Downstream Tasks
---------------------------------------------

Fig. 9: Self-supervised visual feature learning schema. The ConvNet is trained by minimizing errors between pseudo labels P and predictions O of the ConvNet. Since the pseudo labels are automatically generated, no human annotations are involved during the whole process.
Most existing self-supervised learning approaches follow the schema shown in Fig [9](#S4.F9 "Fig. 9 ‣ 4 Commonly used Pretext and Downstream Tasks ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). Generally, a pretext task is defined for ConvNets to solve and visual features can be learned through the process of accomplishing this pretext task. The pseudo labels P for pretext task can be automatically generated without human annotations. ConvNet is optimized by minimizing the error between the prediction of ConvNet O and the pseudo labels P. After the training on the pretext task is finished, ConvNet models that can capture visual features for images or videos are obtained.
###
4.1 Learning Visual Features from Pretext Tasks
To relieve the burden of large-scale dataset annotation, a pretext task is generally designed for networks to solve while pseudo labels for the pretext task are automatically generated based on data attributes. Many pretext tasks have been designed and applied for self-supervised learning such as foreground object segmentation [[81](#bib.bib81)], image inpainting [[19](#bib.bib19)], clustering [[44](#bib.bib44)], image colorization [[82](#bib.bib82)], temporal order verification [[40](#bib.bib40)], visual audio correspondence verification [[25](#bib.bib25)], and so on. Effective pretext tasks ensure that semantic features are learned through the process of accomplishing the pretext tasks.
Take image colorization as an example, image colorization is a task to colorize gray-scale images into colorful images. To generate realistic colorful images, networks are required to learn the structure and context information of images. In this pretext task, the data X is the gray-scale images which can be generated by performing a linear transformation in RGB images, while the pseudo label P is the RGB image itself. The training pair Xi and Pi can be generated in real time with negligible cost. Self-Supervised learning with other pretext tasks follow a similar pipeline.
###
4.2 Commonly Used Pretext Tasks

Fig. 10: Categories of pretext tasks for self-supervised visual feature learning: generation-based, context-based, free semantic label-based, and cross modal-based.
According to the data attributes used to design pretext tasks, as shown in Fig. [10](#S4.F10 "Fig. 10 ‣ 4.2 Commonly Used Pretext Tasks ‣ 4 Commonly used Pretext and Downstream Tasks ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), we summarize the pretext tasks into four categories: generation-based, context-based, free semantic label-based, and cross modal-based.
Generation-based Methods: This type of methods learn visual features by solving pretext tasks that involve image or video generation.
* Image Generation: Visual features are learned through the process of image generation tasks. This type of methods includes image colorization [[18](#bib.bib18)], image super resolution [[15](#bib.bib15)], image inpainting [[19](#bib.bib19)], image generation with Generative Adversarial Networks (GANs) [[83](#bib.bib83), [84](#bib.bib84)].
* Video Generation: Visual features are learned through the process of video generation tasks. This type of methods includes video generation with GANs [[85](#bib.bib85), [86](#bib.bib86)] and video prediction [[37](#bib.bib37)].
Context-based pretext tasks: The design of context-based pretext tasks mainly employ the context features of images or videos such as context similarity, spatial structure, temporal structure, etc.
* Context Similarity: Pretext tasks are designed based on the context similarity between image patches. This type of methods includes image clustering-based methods [[34](#bib.bib34), [44](#bib.bib44)], and graph constraint-based methods [[43](#bib.bib43)].
* Spatial Context Structure: Pretext tasks are used to train ConvNets are based on the spatial relations among image patches. This type of methods includes image jigsaw puzzle [[20](#bib.bib20), [87](#bib.bib87), [88](#bib.bib88), [89](#bib.bib89)], context prediction [[41](#bib.bib41)], and geometric transformation recognition [[36](#bib.bib36), [28](#bib.bib28)], etc.
* Temporal Context Structure: The temporal order from videos is used as supervision signal. The ConvNet is trained to verify whether the input frame sequence in correct order [[40](#bib.bib40), [90](#bib.bib90)] or to recognize the order of the frame sequence [[39](#bib.bib39)].
Free Semantic Label-based Methods: This type of pretext tasks train networks with automatically generated semantic labels. The labels are generated by traditional hard-code algorithms [[50](#bib.bib50), [51](#bib.bib51)] or by game engines [[30](#bib.bib30)]. The pretext tasks include moving object segmentation [[91](#bib.bib91), [81](#bib.bib81)], contour detection [[47](#bib.bib47), [30](#bib.bib30)], relative depth prediction [[92](#bib.bib92)], and etc.
Cross Modal-based Methods: This type of pretext tasks train ConvNets to verify whether two different channels of input data are corresponding to each other. This type of methods includes Visual-Audio Correspondence Verification [[93](#bib.bib93), [25](#bib.bib25)], RGB-Flow Correspondence Verification [[24](#bib.bib24)], and egomotion [[94](#bib.bib94), [95](#bib.bib95)].
###
4.3 Commonly Used Downstream Tasks for Evaluation
To evaluate the quality of the learned image or video features by self-supervised methods, the learned parameters by self-supervised learning are employed as pre-trained models and then fine-tuned on downstream tasks such as image classification, semantic segmentation, object detection, and action recognition etc. The performance of the transfer learning on these high-level vision tasks demonstrates the generalization ability of the learned features. If ConvNets of self-supervised learning can learn general features, then the pre-trained models can be used as a good starting point for other vision tasks that require capturing similar features from images or videos.
Image classification, semantic segmentation, and object detection usually are used as the tasks to evaluate the generalization ability of the learned image features by self-supervised learning methods, while human action recognition in videos is used to evaluate the quality of video features obtained from self-supervised learning methods. Below are brief introductions of the commonly used high-level tasks for visual feature evaluation.
####
4.3.1 Semantic Segmentation
Semantic segmentation, the task of assigning semantic labels to each pixel in images, is of great importance in many applications such as autonomous driving, human-machine interaction, and robotics. The community has recently made promising progress and various networks have been proposed such as Fully Convolutional Network (FCN) [[4](#bib.bib4)], DeepLab [[5](#bib.bib5)], PSPNet [[6](#bib.bib6)] and datasets such as PASCAL VOC [[96](#bib.bib96)], CityScape [[97](#bib.bib97)], ADE20K [[98](#bib.bib98)].

Fig. 11: The framework of the Fully Convolutional Neural Network proposed for semantic segmentation [[4](#bib.bib4)]. Figure is reproduced based on [[4](#bib.bib4)].
Among all these methods, FCN [[4](#bib.bib4)] is a milestone work for semantic segmentation since it started the era of applying fully convolution network (FCN) to solve this task. The architecture of FCN is shown in Fig. [11](#S4.F11 "Fig. 11 ‣ 4.3.1 Semantic Segmentation ‣ 4.3 Commonly Used Downstream Tasks for Evaluation ‣ 4 Commonly used Pretext and Downstream Tasks ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). 2DConvNet such as AlexNet, VGG, ResNet is used as the base network for feature extraction while the fully connected layer is replaced by transposed convolution layer to obtain the dense prediction. The network is trained end-to-end with pixel-wise annotations.
When using semantic segmentation as downstream task to evaluate the quality of image features learned by self-supervised learning methods, the FCN is initialized with the parameters trained with the pretext task and fine-tuned on the semantic segmentation dataset, then the performance on the semantic segmentation task is evaluated and compared with that of other self-supervised methods.
####
4.3.2 Object Detection
Object Detection, a task of localizing the position of objects in images and recognizing the category of the objects, is also very import for many computer vision applications such as autonomous driving, robotics, scene text detection and so on. Recently, many datasets such as MSCOCO [[99](#bib.bib99)] and OpenImage [[14](#bib.bib14)] have been proposed for object detection and many ConvNet-based models [[1](#bib.bib1)], [[2](#bib.bib2)], [[3](#bib.bib3)], [[100](#bib.bib100)], [[101](#bib.bib101)], [[102](#bib.bib102)], [[103](#bib.bib103)], [[104](#bib.bib104)] have been proposed and obtained great performance.
Fast-RCNN [[2](#bib.bib2)] is a two-stage network for object detection. The framework of Fast-RCNN is shown in Fig. [12](#S4.F12 "Fig. 12 ‣ 4.3.2 Object Detection ‣ 4.3 Commonly Used Downstream Tasks for Evaluation ‣ 4 Commonly used Pretext and Downstream Tasks ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). Object proposals are generated based on feature maps produced by a convolution neural network, then these proposals are fed to several fully connected layers to generate the bounding box of objects and the categories of these objects.

Fig. 12: The pipeline of the Fast-RCNN for object detection. Figure is reproduced based on [[3](#bib.bib3)].
When using object detection as downstream task to evaluate the quality of the self-supervised image features, networks that trained with the pretext task on unlabeled large data are served as the pre-trained model for the Fast-RCNN [[2](#bib.bib2)] and then fine-tuned on object detection datasets, then the performance on the object detection task is evaluated to demonstrate the generalization ability of self-supervised learned features.
####
4.3.3 Image Classification
Image Classification is a task of recognizing the category of objects in each image. Many networks have been designed for this task such as AlexNet [[8](#bib.bib8)], VGG [[9](#bib.bib9)], ResNet [[11](#bib.bib11)], GoogLeNet [[10](#bib.bib10)], DenseNet [[12](#bib.bib12)], etc. Usually, only one class label is available for each image although the image may contains different classes of objects.
When choosing image classification as a downstream task to evaluate the quality of image features learned from self-supervised learning methods, the self-supervised learned model is applied on each image to extract features which then are used to train a classifier such as Support Vector Machine (SVM) [[105](#bib.bib105)]. The classification performance on testing data is compared with other self-supervised models to evaluate the quality of the learned features.
####
4.3.4 Human Action Recognition
Human action recognition is a task of identifying what people doing in videos for a list of pre-defined action classes. Generally, videos in human action recognition datasets contain only one action in each video [[63](#bib.bib63), [106](#bib.bib106), [17](#bib.bib17)]. Both the spatial and temporal features are needed to accomplish this task.
The action recognition task is often used to evaluate the quality of video features learned by self-supervised learning methods. The network is first trained on unlabeled video data with pretext tasks, then it is fine-tuned on action recognition datasets with human annotations to recognize the actions. The testing performance on action recognition task is compared with other self-supervised learning methods to evaluate the quality of the learned features.
####
4.3.5 Qualitative Evaluation
In addition to these quantitative evaluations of the learned features, there are also some qualitative visualization methods to evaluate the quality of self-supervised learning features. Three methods are often used for this purpose: kernel visualization, feature map visualization, and image retrieval visualization [[44](#bib.bib44), [36](#bib.bib36), [28](#bib.bib28), [41](#bib.bib41)].
Kernel Visualization: Qualitatively visualize the kernels of the first convolution layer learned with the pretext tasks and compare the kernels from supervised models. The similarity of the kernels learned by supervised and self-supervised models are compared to indicate the effectiveness of self-supervised methods [[44](#bib.bib44), [28](#bib.bib28)].
Feature Map Visualization: Feature maps are visualized to show the attention of networks. Larger activation represents the neural network pays more attention to the corresponding region in the image. Feature maps are usually qualitatively visualized and compared with that of supervised models [[36](#bib.bib36), [28](#bib.bib28)].
Nearest Neighbor Retrieval: In general, images with similar appearance usually are closer in the feature space. The nearest neighbor method is used to find the top K nearest neighbors from the feature space of the features learned by the self-supervised learned model [[40](#bib.bib40), [41](#bib.bib41), [43](#bib.bib43)].
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Dataset | Data Type | Size | Synthetic | # classes | Label |
| ImageNet [[13](#bib.bib13)] | Image | 1.3 million images | ✗ | 1,000 | Object category label |
| Places [[107](#bib.bib107)] | Image | 2.5 million images | ✗ | 205 | scene categories label |
| Places365 [[108](#bib.bib108)] | Image | 10 million images | ✗ | 434 | scene categories label |
| SUNCG [[109](#bib.bib109)] | Image | 150,000 images | ✓ | 84 | depth, volumetric data |
| MNIST [[110](#bib.bib110)] | Image | 70,000 images | ✗ | 10 | Digit class label |
| SVHN [[111](#bib.bib111)] | Image | 600,000 Images | ✗ | 10 | Digit class label |
| CIFAR10 [[112](#bib.bib112)] | Image | 60,000 Images | ✗ | 10 | Object category label |
| STL-10 [[113](#bib.bib113)] | Image | 101,300 Images | ✗ | 10 | Object category label |
| PASCAL VOC [[96](#bib.bib96)] | Image | 2,913 images | ✗ | 20 | Category label, bounding box, segmentation mask |
| YFCC100M [[114](#bib.bib114)] | Image/Video | 100 million media data | ✗ | — | Hashtags |
| SceneNet RGB-D [[115](#bib.bib115)] | Video | 5 million images | ✓ | 13 | Depth, Instance Segmentation, Optical Flow |
| Moment-in-Time [[116](#bib.bib116)] | Video | 1 million 3-second videos | ✗ | 339 | Video category class |
| Kinetics [[17](#bib.bib17)] | Video | 0.5 million 10-second videos | ✗ | 600 | Human action class |
| AudioSet [[117](#bib.bib117)] | Video | 2 million 10-second videos | ✗ | 632 | Audio event class |
| KITTI [[118](#bib.bib118)] | Video | 28 videos | ✗ | — | Data captured by various sensors are available |
| UCF101 [[63](#bib.bib63)] | Video | 10,031 videos | ✗ | 101 | Human action class |
| HMDB51 [[106](#bib.bib106)] | Video | 6,766 videos | ✗ | 51 | Human action class |
TABLE I: Summary of commonly used image and video datasets. Note that image datasets can be used to learn image features, while video datasets can be used to learn both image and video features.
5 Datasets
-----------
This section summarizes the commonly used image and video datasets for training and evaluating of self-supervised visual feature learning methods. Self-supervised learning methods can be trained with images or videos by discarding human-annotated labels, therefore, any datasets that collected for supervised learning can be used for self-supervised visual feature learning without using human-annotated labels. The evaluation of the quality of learned features is normally conducted by fine-tuned on high-level vision tasks with relatively small datasets (normally with accurate labels) such as video action recognition, object detection, semantic segmentation, etc. It is worth noting that networks use these synthetic datasets for visual feature learning are considered as self-supervised learning in this paper since labels of synthetic datasets are automatically generated by game engines and no human annotations are involved. Table [I](#S4.T1 "TABLE I ‣ 4.3.5 Qualitative Evaluation ‣ 4.3 Commonly Used Downstream Tasks for Evaluation ‣ 4 Commonly used Pretext and Downstream Tasks ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") summarizes the commonly used image and video datasets.
###
5.1 Image Datasets
* ImageNet: The ImageNet dataset [[13](#bib.bib13)] contains 1.3 million images uniformly distributed into 1,000 classes and is organized according to the WordNet hierarchy. Each image is assigned with only one class label. ImageNet is the most widely used dataset for self-supervised image feature learning.
* Places: The Places dataset [[107](#bib.bib107)] is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category.
* Places365: The Places365 is the 2nd generation of the Places database which is built for high-level visual understanding tasks, such as scene context, object recognition, action and event prediction, and theory-of-mind inference [[108](#bib.bib108)]. There are more than 10 million images covering more than 400 classes and 5,000 to 30,000 training images per class.
* SUNCG: The SUNCG dataset is a large synthetic 3D scene repository for indoor scenes which consists of over 45,000 different scenes with manually created realistic room and furniture layouts [[109](#bib.bib109)]. The synthetic depth, object level semantic labels, and volumetric ground truth are available.
* MNIST: The MNIST is a dataset of handwritten digits consisting of 70,000 images while 60,000 images belong to training set and the rest 10,000 images are for testing [[110](#bib.bib110)]. All digits have been size-normalized and centered in fixed-size images.
* SVHN: SVHN is a dataset for recognizing digits and numbers in natural scene images which obtained from house numbers from Google Street View images [[111](#bib.bib111)]. The dataset consists of over 600,000 images and all digits have been resized to a fixed resolution of 32×32 pixels.
* CIFAR10: The CIFAR10 dataset is a collection of tiny images for image classification task [[112](#bib.bib112)]. It consists of 60,000 images of size 32×32 that covers 10 different classes. The 10 classes include airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The dataset is balanced and there are 6,000 images of each class.
* STL-10: The STL-10 dataset is specifically designed for developing unsupervised feature learning [[113](#bib.bib113)]. It consists of 500 labeled training images, 800 testing images, and 100,000 unlabeled images covering 10 classes which include airplane, bird, car, cat, deer, dog, horse, monkey, ship, and truck.
* PASCAL Visual Object Classes (VOC): The VOC 2,012 dataset [[96](#bib.bib96)] contains 20 object categories including vehicles, household, animals, and other: aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, TV/monitor, bird, cat, cow, dog, horse, sheep, and person. Each image in this dataset has pixel-level segmentation annotations, bounding box annotations, and object class annotations. This dataset has been widely used as a benchmark for object detection, semantic segmentation, and classification tasks. The PASCAL VOC dataset is split into three subsets: 1,464 images for training, 1,449 images for validation and a private testing [[96](#bib.bib96)]. All the self-supervised image representation learning methods are evaluated on this dataset with the three tasks.
###
5.2 Video Datasets
* YFCC100M: The Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M) is a large public multimedia collection from Flickr, consisting of 100 million media data, of which around 99.2 million are images and 0.8 million are videos [[114](#bib.bib114)]. The statistics on hashtags used in the YFCC100M dataset show that the data distribution is severely unbalanced [[119](#bib.bib119)].
* SceneNet RGB-D: The SceneNet RGB-D dataset is a large indoor synthetic video dataset which consists of 5 million rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses [[115](#bib.bib115)]. It provides pixel-level annotations for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction [[115](#bib.bib115)].
* Moment in Time: The Moment-in-Time dataset is a large balanced and diverse dataset for video understanding [[116](#bib.bib116)]. The dataset consists of 1 million video clips that cover 339 classes, and each video lasts around 3 seconds. The average number of video clips for each class is 1,757 with a median of 2,775. The video in this dataset contains videos that capturing visual and/or audible actions, produced by humans, animals, objects or nature [[116](#bib.bib116)].
* Kinetics: The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos [[17](#bib.bib17)]. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class.
* AudioSet: The AudioSet consists of 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos covers ontology of 632 audio event classes [[117](#bib.bib117)]. The event classes cover a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds. This dataset is mainly used for the self-supervised learning from video and audio consistence [[26](#bib.bib26)].
* KITTI: The KITTI dataset is collected from driving a car around a city which equipped with various sensors including high-resolution RGB camera, gray-scale stereo camera, a 3D laser scanner, and high-precision GPS measurements and IMU accelerations from a combined GPS/IMU system [[118](#bib.bib118)]. Videos with various modalities captured by these sensors are available in this dataset.
* UCF101: The UCF101 is a widely used video dataset for human action recognition [[63](#bib.bib63)]. The dataset consists of 13,370 video clips with more than 27 hours belonging to 101 categories in this dataset. The videos in this dataset have a spatial resolution of 320×240 pixels and 25 FPS frame rate. This dataset has been widely used for evaluating the performance of human action recognition. In the self-supervised sensorial, the self-supervised models are fine-tuned on the dataset and the accuracy of the action recognition are reported to evaluate the quality of the features.
* HMDB51: Compared to other datasets, the HMDB51 dataset is a smaller video dataset for human action recognition. There are around 7,000 video clips in this dataset belong to 51 human action categories [[106](#bib.bib106)]. The videos in HMDB51 dataset have 320×240 pixels spatial resolution and 30 FPS frame rate. In the self-supervised sensorial, the self-supervised models are fine-tuned on the dataset to evaluate the quality of the learned video features.
6 Image Feature Learning
-------------------------
In this section, three groups of self-supervised image feature learning methods are reviewed including generation-based methods, context-based methods, and free semantic label-based methods. A list of the image feature self-supervised learning methods can be found in Table [II](#S6.T2 "TABLE II ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). Since the cross modal-based methods mainly learn features from videos and most methods of this type can be used for both image and video feature learning, so cross modal-based methods are reviewed in the video feature learning section.
| | | | |
| --- | --- | --- | --- |
| Method | Category | Code | Contribution |
| GAN [[83](#bib.bib83)] | Generation | ✓ | Forerunner of GAN |
| DCGAN [[120](#bib.bib120)] | Generation | ✓ | Deep convolutional GAN for image generation |
| WGAN [[121](#bib.bib121)] | Generation | ✓ | Proposed WGAN which makes the training of GAN more stable |
| BiGAN [[122](#bib.bib122)] | Generation | ✓ | Bidirectional GAN to project data into latent space |
| SelfGAN [[123](#bib.bib123)] | Multiple | ✗ | Use rotation recognition and GAN for self-supervised learning |
| ColorfulColorization [[18](#bib.bib18)] | Generation | ✓ | Posing image colorization as a classification task |
| Colorization [[82](#bib.bib82)] | Generation | ✓ | Using image colorization as the pretext task |
| AutoColor [[124](#bib.bib124)] | Generation | ✓ | Training ConvNet to predict per-pixel color histograms |
| Split-Brain [[42](#bib.bib42)] | Generation | ✓ | Using split-brain auto-encoder as the pretext task |
| Context Encoder [[19](#bib.bib19)] | Generation | ✓ | Employing ConvNet to solve image inpainting |
| CompletNet [[125](#bib.bib125)] | Generation | ✓ | Employing two discriminators to guarantee local and global consistent |
| SRGAN [[15](#bib.bib15)] | Generation | ✓ | Employing GAN for single image super-resolution |
| SpotArtifacts [[126](#bib.bib126)] | Generation | ✓ | Learning by recognizing synthetic artifacts in images |
| ImproveContext [[33](#bib.bib33)] | Context | ✗ | Techniques to improve context based self-supervised learning methods |
| Context Prediction [[41](#bib.bib41)] | Context | ✓ | Learning by predicting the relative position of two patches from an image |
| Jigsaw [[20](#bib.bib20)] | Context | ✓ | Image patch Jigsaw puzzle as the pretext task for self-supervised learning |
| Damaged Jigsaw [[89](#bib.bib89)] | Multiple | ✗ | Learning by solving jigsaw puzzle, inpainting, and colorization together |
| Arbitrary Jigsaw [[88](#bib.bib88)] | Context | ✗ | Learning with jigsaw puzzles with arbitrary grid size and dimension |
| DeepPermNet [[127](#bib.bib127)] | Context | ✓ | A new method to solve image patch jigsaw puzzle |
| RotNet [[36](#bib.bib36)] | Context | ✓ | Learning by recognizing rotations of images |
| Boosting [[34](#bib.bib34)] | Multiple | ✗ | Using clustering to boost the self-supervised learning methods |
| JointCluster [[128](#bib.bib128)] | Context | ✓ | Jointly learning of deep representations and image clusters |
| DeepCluster [[44](#bib.bib44)] | Context | ✓ | Using clustering as the pretext |
| ClusterEmbegging [[129](#bib.bib129)] | Context | ✓ | Deep embedded clustering for self-supervised learning |
| GraphConstraint [[43](#bib.bib43)] | Context | ✓ | Learning with image pairs mined with Fisher Vector |
| Ranking [[38](#bib.bib38)] | Context | ✓ | Learning by ranking video frames with a triplet loss |
| PredictNoise [[46](#bib.bib46)] | Context | ✓ | Learning by mapping images to a uniform distribution over a manifold |
| MultiTask [[32](#bib.bib32)] | Multiple | ✓ | Using multiple pretext tasks for self-supervised feature learning |
| Learning2Count [[130](#bib.bib130)] | Context | ✓ | Learning by counting visual primitive |
| Watching Move [[81](#bib.bib81)] | Free Semantic Label | ✓ | Learning by grouping pixels of moving objects in videos |
| Edge Detection [[81](#bib.bib81)] | Free Semantic Label | ✓ | Learning by detecting edges |
| Cross Domain [[81](#bib.bib81)] | Free Semantic Label | ✓ | Utilizing synthetic data and its labels rendered by game engines |
TABLE II: Summary of self-supervised image feature learning methods based on the category of pretext tasks. Multi-task means the method explicitly or implicitly uses multiple pretext tasks for image feature learning.
###
6.1 Generation-based Image Feature Learning
Generation-based self-supervised methods for learning image features involve the process of generating images including image generation with GAN (to generate fake images), super-resolution (to generate high-resolution images), image inpainting (to predict missing image regions), and image colorization (to colorize gray-scale images into colorful images). For these tasks, pseudo training labels P usually are the images themselves and no human-annotated labels are needed during training, therefore, these methods belong to self-supervised learning methods.
The pioneer work about the image generation-based methods is the Autoencoder [[131](#bib.bib131)] which learns to compress an image into a low-dimension vector which then is uncompressed into the image that closes to the original image with a bunch of layers. With an auto-encoder, networks can reduce the dimension of an image into a lower dimension vector that contains the main information of the original image. The current image generation-based methods follow a similar idea but with different pipelines to learn visual features through the process of image generation.
####
6.1.1 Image Generation with GAN
Generative Adversarial Network (GAN) is a type of deep generative model that was proposed by Goodfellow et al. [[83](#bib.bib83)]. A GAN model generally consists of two kinds of networks: a generator which is to generate images from latent vectors and a discriminator which is to distinguish whether the input image is generated by the generator. By playing the two-player game, the discriminator forces the generator to generate realistic images, while the generator forces the discriminator to improve its differentiation ability. During the training, the two networks are competing against with each other and make each other stronger.
The common architecture for the image generation from a latent variable task is shown in Fig. [13](#S6.F13 "Fig. 13 ‣ 6.1.1 Image Generation with GAN ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). The generator is trained to map any latent vector sampled from latent space into an image, while the discriminator is forced to distinguish whether the image from the real data distribution or generated data distribution. Therefore, the discriminator is required to capture the semantic features from images to accomplish the task. The parameters of the discriminator can server as the pre-trained model for other computer vision tasks.

Fig. 13: The pipeline of Generative Adversarial Networks [[83](#bib.bib83)]. By playing the two-player game, the discriminator forces the generator to generate realistic images, while the generator forces the discriminator to improve its differentiation ability.
Mathematically, the generator G is trained to learn a distribution pz of real word image data to generate realist data that undistinguished from the real data, while the discriminator D is trained to distinguish the distribution of the real data pdata and of the data distribution pz generated by the generator G. The min-max game between the generator G and the discriminator D is formulated as:
| | | | |
| --- | --- | --- | --- |
| | minGmaxDEx∼pdata(x)[logD(x)]+Ez∼pz(z)[log(1−D(G(z)))], | | (5) |
where x is the real data, G(z) is the generated data.
The discriminator D is trained to maximize the probability for the real data x (that is, Ex∼pdata(x)[logD(x)]) and minimize the probability for the generated data G(z) (that is, Ex∼pdata(x)[logD(x)]). The generator is trained to generate data that close to real data x, so as the output of the discriminator is maximized Ex∼pdata(x)[logD(G(z))].
Most of the methods for image generation from random variables do not need any human-annotated labels. However, the main purpose of this type of task is to generate realistic images instead of obtaining better performance on downstream applications. Generally, the inception scores of the generated images are used to evaluate the quality of the generated images [[132](#bib.bib132), [133](#bib.bib133)]. And only a few methods evaluated the quality of the feature learned by the discriminator on the high-level tasks and compared with others [[123](#bib.bib123), [122](#bib.bib122), [120](#bib.bib120)].
The adversarial training can help the network to capture the real distribution of the real data and generate realists data, and it has been widely used in computer vision tasks such as image generation [[134](#bib.bib134), [135](#bib.bib135)], video generation [[85](#bib.bib85)],[[86](#bib.bib86)], super-resolution [[15](#bib.bib15)], image translation [[136](#bib.bib136)], and image inpainting [[125](#bib.bib125), [19](#bib.bib19)]. When there is no human-annotated label involves, the method falls into the self-supervised learning.
####
6.1.2 Image Generation with Inpainting

Fig. 14: Qualitative illustration of image inpainting task. Given an image with a missing region (a), a human artist has no trouble inpainting it (b). Automatic inpainting using context encoder proposed in [[19](#bib.bib19)] trained with L2 reconstruction loss and adversarial loss is shown in (c). Figure is reproduced based on [[19](#bib.bib19)].
Image inpainting is a task of predicting arbitrary missing regions based on the rest of an image. A qualitative illustration of the image inpainting task is shown in Fig. [14](#S6.F14 "Fig. 14 ‣ 6.1.2 Image Generation with Inpainting ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). The Fig. [14](#S6.F14 "Fig. 14 ‣ 6.1.2 Image Generation with Inpainting ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey")(a) is an image with a missing region, while the Fig [14](#S6.F14 "Fig. 14 ‣ 6.1.2 Image Generation with Inpainting ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey")(c) is the prediction of networks. To correctly predict missing regions, networks are required to learn the common knowledge including the color and structure of the common objects. Only by knowing this knowledge, networks are able to infer missing regions based on the rest part of the image.
By analogy with auto-encoders, Pathak et al. made the first step to train a ConvNet to generate the contents of an arbitrary image region based on the rest of the image [[19](#bib.bib19)]. Their contributions are in two folds: using a ConvNet to tackle image inpainting problem, and using the adversarial loss to help the network generate a realistic hypothesis. Most of the recent methods follow a similar pipeline [[125](#bib.bib125)]. Usually, there are two kinds of networks: a generator network is to generate the missing region with the pixel-wise reconstruction loss and a discriminator network is to distinguish whether the input image is real with an adversarial loss. With the adversarial loss, the network is able to generate sharper and realistic hypothesis for the missing image region. Both the two kinds of networks are able to learn the semantic features from images and can be transferred to other computer vision tasks. However, only Pathak et al. [[19](#bib.bib19)] studied the performance of transfer learning for the learned parameters of the generator from the image inpainting task.
The generator network which is a fully convolutional network has two parts: encoder and decoder. The input of the encoder is the image that needs to be inpainted and the context encoder learns the semantic feature of the image. The context decoder is to predict the missing region based on this feature. The generator is required to understand the content of the image in order to generate a plausible hypothesis. The discriminator is trained to distinguish whether the input image is the output of the generator. To accomplish the image inpainting task, both networks are required to learn semantic features of images.
####
6.1.3 Image Generation with Super Resolution
Image super-resolution (SR) is a task of enhancing the resolution of images. With the help of fully convolutional networks, finer and realistic high-resolution images can be generated from low-resolution images. SRGAN is a generative adversarial network for single image super-resolution proposed by Ledig et al. [[15](#bib.bib15)]. The insight of this approach is to take advantage of the perceptual loss which consists of an adversarial loss and a content loss. With the perceptron loss, the SRGAN is able to recover photo-realistic textures from heavily downsampled images and show significant gains in perceptual quality.
There are two networks: one is generator which is to enhance the resolution of the input low-resolution image and the other is the discriminator which is to distinguish whether the input image is the output of the generator. The loss function for the generator is the pixel-wise L2 loss plus the content loss which is the similarity of the feature of the predicted high-resolution image and the high-resolution original image, while the loss for the discriminator is the binary classification loss. Compared to the network that only minimizing the Mean Squared Error (MSE) which generally leads to high peak signal-to-noise ratios but lacking high-frequency details, the SRGAN is able to recover the fine details of the high-resolution image since the adversarial loss pushes the output to the natural image manifold by the discriminator network.
The networks for image super-resolution task are able to learn the semantic features of images. Similar to other GANs, the parameters of the discriminator network can be transferred to other downstream tasks. However, no one tested the performance of the transferred learning on other tasks yet. The quality of the enhanced image is mainly compared to evaluate the performance of the network.
####
6.1.4 Image Generation with Colorization

Fig. 15: The architecture of image colorization proposed in [[18](#bib.bib18)]. The figure is from [[18](#bib.bib18)] with author’s permission.
Image colorization is a task of predicting a plausible color version of the photograph given a gray-scale photograph as input. A qualitative illustration of the image colorization task is shown in Fig. [15](#S6.F15 "Fig. 15 ‣ 6.1.4 Image Generation with Colorization ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). To correctly colorize each pixel, networks need to recognize objects and to group pixels of the same part together. Therefore, visual features can be learned in the process of accomplishing this task.
Many deep learning-based colorization methods have been proposed in recent years [[18](#bib.bib18), [137](#bib.bib137), [138](#bib.bib138)]. A straightforward idea would be to employ a fully convolution neural network which consists of an encoder for feature extraction and a decoder for the color hallucination to colorization. The network can be optimized with L2 loss between the predicted color and its original color. Zhang et al. proposed to handle the uncertainty by posting the task as a classification task and used class-rebalancing to increase the diversity of predicted colors [[18](#bib.bib18)]. The framework for image colorization proposed by Zhang et al. is shown in Fig. [15](#S6.F15 "Fig. 15 ‣ 6.1.4 Image Generation with Colorization ‣ 6.1 Generation-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). Trained in large-scale image collections, the method shows great results and fools human on 32% of the trials during the colorization test.
Some work specifically employs the image colorization task as the pretext for self-supervised image representation learning [[82](#bib.bib82), [18](#bib.bib18), [124](#bib.bib124), [42](#bib.bib42)]. After the image colorization training is finished, the features learned through the colorization process are specifically evaluated on other downstream high-level tasks with transfer learning.
###
6.2 Context-Based Image Feature Learning
The context-based pretext tasks mainly employ the context features of images including context similarity, spatial structure, and temporal structure as the supervision signal. Features are learned by ConvNet through the process of solving the pretext tasks designed based on attributes of the context of images.
####
6.2.1 Learning with Context Similarity

Fig. 16: The architecture of DeepClustering [[44](#bib.bib44)]. The features of images are iteratively clustered and the cluster assignments are used as pseudo-labels to learn the parameters of the ConvNet. The figure is from [[44](#bib.bib44)] with author’s permission.
Clustering is a method of grouping sets of similar data in the same clusters. Due to its powerful ability of grouping data by using the attributes of the data, it is widely used in many fields such as machine learning, image processing, computer graphics, etc. Many classical clustering algorithms have been proposed for various applications [[139](#bib.bib139)].
In the self-supervised scenario, the clustering methods mainly employed as a tool to cluster image data. A naive method would be to cluster the image data based on the hand-designed feature such as HOG [[140](#bib.bib140)], SIFT [[141](#bib.bib141)], or Fisher Vector [[49](#bib.bib49)]. After the clustering, several clusters are obtained while the image within one cluster has a smaller distance in feature space and images from different clusters have a larger distance in feature space. The smaller the distance in feature space, the more similar the image in the appearance in the RGB space. Then a ConvNet can be trained to classify the data by using the cluster assignment as the pseudo class label. To accomplish this task, the ConvNet needs to learn the invariance within one class and the variance among different classes. Therefore, the ConvNet is able to learn semantic meaning of images.
The existing methods about using the clustering variants as the pretext task follow these principals [[44](#bib.bib44), [43](#bib.bib43), [34](#bib.bib34), [128](#bib.bib128), [129](#bib.bib129)]. Firstly, the image is clustered into different clusters which the images from the same cluster have smaller distance and images from different clusters have larger distance. Then a ConvNet is trained to recognize the cluster assignment [[44](#bib.bib44), [34](#bib.bib34)] or to recognize whether two imaged are from same cluster [[43](#bib.bib43)]. The pipeline of DeepCluster, a clustering based methods, is shown in Fig. [16](#S6.F16 "Fig. 16 ‣ 6.2.1 Learning with Context Similarity ‣ 6.2 Context-Based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). DeepCluster iteratively clusters images with Kmeans and use the subsequent assignments as supervision to update the weights of the network. And it is the current state-of-the-art for the self-supervised image representation learning.
####
6.2.2 Learning with Spatial Context Structure
Images contain rich spatial context information such as the relative positions among different patches from an image which can be used to design the pretext task for self-supervised learning. The pretext task can be to predict the relative positions of two patches from same image [[41](#bib.bib41)], or to recognize the order of the shuffled a sequence of patches from same image [[20](#bib.bib20), [89](#bib.bib89), [88](#bib.bib88)]. The context of full images can also be used as a supervision signal to design pretext tasks such as to recognize the rotating angles of the whole images [[36](#bib.bib36)]. To accomplish these pretext tasks, ConvNets need to learn spatial context information such as the shape of the objects and the relative positions of different parts of an object.

Fig. 17: The visualization of the Jigsaw Image Puzzle [[20](#bib.bib20)]. (a) is an image with 9 sampled image patches, (b) is an example of shuffled image patches, and (c) shows the correct order of the sampled 9 patches. Figure is reproduced based on [[20](#bib.bib20)].
The method proposed by Doersch et al. is one of the pioneer work of using spatial context cues for self-supervised visual feature learning [[41](#bib.bib41)]. Random pairs of image patches are extracted from each image, then a ConvNet is trained to recognize the relative positions of the two image patches. To solve this puzzle, ConvNets need to recognize objects in images and learn the relationships among different parts of objects. To avoid the network learns trivial solutions such as simply using edges in patches to accomplish the task, heavy data augmentation is applied during the training phase.
Following this idea, more methods are proposed to learn image features by solving more difficult spatial puzzles [[20](#bib.bib20), [89](#bib.bib89), [88](#bib.bib88), [87](#bib.bib87), [27](#bib.bib27)]. As illustrated in Fig. [17](#S6.F17 "Fig. 17 ‣ 6.2.2 Learning with Spatial Context Structure ‣ 6.2 Context-Based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), one typical work proposed by Noroozi et al. attempted to solve an image Jigsaw puzzle with ConvNet [[20](#bib.bib20)]. Fig. [17](#S6.F17 "Fig. 17 ‣ 6.2.2 Learning with Spatial Context Structure ‣ 6.2 Context-Based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey")(a) is an image with 9 sampled image patches, Fig [17](#S6.F17 "Fig. 17 ‣ 6.2.2 Learning with Spatial Context Structure ‣ 6.2 Context-Based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey")(b) is an example of shuffled image patches, and Fig [17](#S6.F17 "Fig. 17 ‣ 6.2.2 Learning with Spatial Context Structure ‣ 6.2 Context-Based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey")(c) shows the correct order of the sampled 9 patches. The shuffled image patches are fed to the network which trained to recognize the correct spatial locations of the input patches by learning spatial context structures of images such as object color, structure, and high-level semantic information.
Given 9 image patches from an image, there are 362,880 (9!) possible permutations and a network is very unlikely to recognize all of them because of the ambiguity of the task. To limit the number of permutations, usually, hamming distance is employed to choose only a subset of permutations among all the permutations that with relative large hamming distance. Only the selected permutations are used to train ConvNet to recognize the permutation of shuffled image patches [[88](#bib.bib88), [89](#bib.bib89), [35](#bib.bib35), [20](#bib.bib20)].
The main principle of designing puzzle tasks is to find a suitable task which is not too difficult and not too easy for a network to solve. If it is too difficult, the network may not converge due to the ambiguity of the task or can easily learn trivial solutions if it is too easy. Therefore, a reduction in the search space is usually employed to reduce the difficulty of the task.
###
6.3 Free Semantic Label-based Image Feature Learning
The free semantic label refers to labels with semantic meanings that obtained without involving any human annotations. Generally, the free semantic labels such as segmentation masks, depth images, optic flows, and surface normal images can be rendered by game engine or generated by hard-code methods. Since these semantic labels are automatically generated, the methods using the synthetic datasets or using them in conjunction with a large unlabeled image or video datasets are considered as self-supervised learning methods.
####
6.3.1 Learning with Labels Generated by Game Engines
Given models of various objects and layouts of environments, game engines are able to render realistic images and provide accurate pixel-level labels. Since game engines can generate large-scale datasets with negligible cost, various game engines such as Airsim [[142](#bib.bib142)] and Carla [[143](#bib.bib143)] have been used to generate large-scale synthetic datasets with high-level semantic labels including depth, contours, surface normal, segmentation mask, and optical flow for training deep networks. An example of an RGB image with its generated accurate labels is shown in Fig. [18](#S6.F18 "Fig. 18 ‣ 6.3.1 Learning with Labels Generated by Game Engines ‣ 6.3 Free Semantic Label-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey").

Fig. 18: An example of an indoor scene generated by a game engine [[115](#bib.bib115)]. For each synthetic image, the corresponding depth, instance segmentation, and optical flow can be automatically generated by the engine.
Game engines can generate realistic images with accurate pixel-level labels with very low cost. However, due to the domain gap between synthetic and real-world images, the ConvNet purely trained on synthetic images cannot be directly applied to real-world images. To utilize synthetic datasets for self-supervised feature learning, the domain gap needs to be explicitly bridged. In this way, the ConvNet trained with the semantic labels of the synthetic dataset can be effectively applied to real-world images.
To overcome the problem, Ren and Lee proposed an unsupervised feature space domain adaptation method based on adversarial learning [[30](#bib.bib30)]. As shown in Fig. [19](#S6.F19 "Fig. 19 ‣ 6.3.1 Learning with Labels Generated by Game Engines ‣ 6.3 Free Semantic Label-based Image Feature Learning ‣ 6 Image Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the network predicts surface normal, depth, and instance contour for the synthetic images and a discriminator network D is employed to minimize the difference of feature space domains between real-world and synthetic data. Helped with adversarial training and accurate semantic labels of synthetic images, the network is able to capture visual features for real-world images.

Fig. 19: The architecture for utilizing synthetic and real-world images for self-supervised feature learning [[30](#bib.bib30)]. Figure is reproduced based on [[30](#bib.bib30)].
Compared to other pretext tasks in which the pretext tasks implicitly force ConvNets to learn semantic features, this type of methods are trained with accurate semantic labels which explicitly force ConvNets to learn features that highly related to the objects in images.
####
6.3.2 Learning with Labels Generated by Hard-code programs
Applying hard-code programs is another way to automatically generate semantic labels such as salience, foreground masks, contours, depth for images and videos. With these methods, very large-scale datasets with generated semantic labels can be used for self-supervised feature learning. This type of methods generally has two steps: (1) label generation by employing hard-code programs on images or videos to obtain labels, (2) train ConvNets with the generated labels.
Various hard-code programs have been applied to generate labels for self-supervised learning methods include methods for foreground object segmentation [[81](#bib.bib81)], edge detection [[47](#bib.bib47)], and relative depth prediction [[92](#bib.bib92)]. Pathak et al. proposed to learn features by training a ConvNet to segment foreground objects in each frame of a video while the label is the mask of moving objects in videos [[81](#bib.bib81)]. Li et al. proposed to learn features by training a ConvNet for edge prediction while labels are motion edges obtained from flow fields from videos [[47](#bib.bib47)]. Jing et al. proposed to learn features by training a ConvNet to predict relative scene depths while the labels are generated from optical flow [[92](#bib.bib92)].
No matter what kind of labels used to train ConvNets, the general idea of this type of methods is to distill knowledge from hard-code detector. The hard-code detector can be edge detector, salience detector, relative detector, etc. As long as no human-annotations are involved through the design of detectors, then the detectors can be used to generate labels for self-supervised training.
Compared to other self-supervised learning methods, the supervision signal in these pretext tasks is semantic labels which can directly drive the ConvNet to learn semantic features. However, one drawback is that the semantic labels generated by hard-code detector usually are very noisy which need to specifically cope with.
7 Video Feature Learning
-------------------------
This section reviews the self-supervised methods for learning video features, as listed in Table [III](#S7.T3 "TABLE III ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), they can be categorized into four classes: generation-based methods, context-based methods, free semantic label-based methods, and cross modal-based methods.
Since video features can be obtained by various kinds of networks including 2DConvNet, 3DConvNet, and LSTM combined with 2DConvNet or 3DConvNet. When 2DConvNet is employed for video self-supervised feature learning, then the 2DConvNet is able to extract both image and video features after the self-supervised pretext task training finished.
| | | | |
| --- | --- | --- | --- |
| Mehtod | SubCategory | Code | Contribution |
| VideoGAN [[85](#bib.bib85)] | Generation | ✓ | Forerunner of video generation with GAN |
| MocoGAN [[86](#bib.bib86)] | Generation | ✓ | Decomposing motion and content for video generation with GAN |
| TemporalGAN [[144](#bib.bib144)] | Generation | ✓ | Decomposing temporal and image generator for video generation |
| Video Colorization [[145](#bib.bib145)] | Generation | ✓ | Employing video colorization as the pretext task |
| Un-LSTM [[37](#bib.bib37)] | Generation | ✓ | Forerunner of video prediction with LSTM |
| ConvLSTM [[146](#bib.bib146)] | Generation | ✓ | Employing Convolutional LSTM for video prediction |
| MCNet [[147](#bib.bib147)] | Generation | ✓ | Disentangling motion and content for video prediction |
| LSTMDynamics [[148](#bib.bib148)] | Generation | ✗ | Learning by predicting long-term temporal dynamic in videos |
| Video Jigsaw [[87](#bib.bib87)] | Context | ✗ | Learning by jointly reasoning about spatial and temporal context |
| Transitive [[31](#bib.bib31)] | Context | ✗ | Learning inter and intra instance variations with a Triplet loss |
| 3DRotNet [[28](#bib.bib28)] | Context | ✗ | Learning by recognizing rotations of video clips |
| CubicPuzzles [[27](#bib.bib27)] | Context | ✗ | Learning by solving video cubic puzzles |
| ShuffleLearn [[40](#bib.bib40)] | Context | ✓ | Employing temporal order verification as the pretext task |
| LSTMPermute [[149](#bib.bib149)] | Context | ✓ | Learning by temporal order verification with LSTM |
| OPN [[39](#bib.bib39)] | Context | ✓ | Using frame sequence order recognition as the pretext task |
| O3N [[29](#bib.bib29)] | Context | ✗ | Learning by identifying odd video sequences |
| ArrowTime [[90](#bib.bib90)] | Context | ✓ | Learning by recognizing the arrow of time in videos |
| TemporalCoherence [[150](#bib.bib150)] | Context | ✗ | Learning with the temporal coherence of features of frame sequence |
| FlowNet [[151](#bib.bib151)] | Cross Modal | ✓ | Forerunner of optical flow estimation with ConvNet |
| FlowNet2 [[152](#bib.bib152)] | Cross Modal | ✓ | Better architecture and better performance on optical flow estimation |
| UnFlow [[153](#bib.bib153)] | Cross Modal | ✓ | An unsupervised loss for optical flow estimation |
| CrossPixel [[23](#bib.bib23)] | Cross Modal | ✗ | Learning by predicting motion from a single image as the pretext task |
| CrossModel [[24](#bib.bib24)] | Cross Modal | ✗ | Optical flow and RGB correspondence verification as pretext task |
| AVTS [[25](#bib.bib25)] | Cross Modal | ✗ | Visual and Audio correspondence verification as pretext task |
| AudioVisual [[26](#bib.bib26)] | Cross Modal | ✓ | Jointly modeling visual and audio as fused multisensory representation |
| LookListenLearn [[93](#bib.bib93)] | Cross Modal | ✓ | Forerunner of Audio-Visual Correspondence for self-supervised learning |
| AmbientSound [[154](#bib.bib154)] | Cross Modal | ✗ | Predicting a statistical summary of the sound from a video frame |
| EgoMotion [[155](#bib.bib155)] | Cross Modal | ✓ | Learning by predicting camera motion and the scene structure from videos |
| LearnByMove [[94](#bib.bib94)] | Cross Modal | ✓ | Learning by predicting the camera transformation from a pairs of images |
| TiedEgoMotion [[95](#bib.bib95)] | Cross Modal | ✗ | Learning from ego-motor signals and video sequence |
| GoNet [[156](#bib.bib156)] | Cross Modal | ✓ | Jointly learning monocular depth, optical flow and ego-motion estimation from videos |
| DepthFlow [[157](#bib.bib157)] | Cross Modal | ✓ | Depth and optical flow learning using cross-task consistency from videos |
| VisualOdometry [[158](#bib.bib158)] | Cross Modal | ✓ | An unsupervised paradigm for deep visual odometry learning |
| ActivesStereoNet [[159](#bib.bib159)] | Cross Modal | ✓ | End-to-end self-supervised learning of depth from active stereo systems |
TABLE III: Summary of self-supervised video feature learning methods based on the category of pretext tasks.
###
7.1 Generation-based Video Feature Learning
Learning from video generation refers to the methods that visual features are learned through the process of video generation while without using any human-annotated labels. This type of methods includes video generation with GAN [[85](#bib.bib85)], video colorization [[145](#bib.bib145)] and video prediction [[37](#bib.bib37)]. For these pretext tasks, the pseudo training label P usually is the video itself and no human-annotated labels are needed during training, therefore, these methods belong to self-supervised learning.
####
7.1.1 Learning from Video Generation

Fig. 20: The architecture of the generator in VideoGan for video generation with GAN proposed in [[85](#bib.bib85)]. The figure is from [[85](#bib.bib85)] with author’s permission.
After GAN-based methods obtained breakthrough results in image generation, researchers employed GAN to generate videos [[86](#bib.bib86), [85](#bib.bib85), [144](#bib.bib144)]. One pioneer work of video generation with GAN is VideoGAN [[85](#bib.bib85)], and the architecture of the generator network is shown in Fig. [20](#S7.F20 "Fig. 20 ‣ 7.1.1 Learning from Video Generation ‣ 7.1 Generation-based Video Feature Learning ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). To model the motion of objects in videos, a two-stream network is proposed for video generation while one stream is to model the static regions in in videos as background and another stream is to model moving object in videos as foreground [[85](#bib.bib85)]. Videos are generated by the combination of the foreground and background streams. The underline assumption is that each random variable in the latent space represents one video clip. This method is able to generate videos with dynamic contents. However, Tulyakov et al. argues that this assumption increases difficulties of the generation, instead, they proposed MocoGAN to use the combination of two subspace to represent a video by disentangling the context and motions in videos [[86](#bib.bib86)]. One space is context space which each variable from this space represents one identity, and another space is motion space while the trajectory in this space represents the motion of the identity. With the two sub-spaces, the network is able to generate videos with higher inception score.
The generator learns to map latent vectors from latent space into videos, while discriminator learns to distinguish the real world videos with generated videos. Therefore, the discriminator needs to capture the semantic features from videos to accomplish this task. When no human-annotated labels are used in these frameworks, they belong to the self-supervised learning methods. After the video generation training on large-scale unlabeled dataset finished, the parameters of discriminator can be transferred to other downstream tasks [[85](#bib.bib85)].
####
7.1.2 Learning from Video Colorization
Temporal coherence in videos refers to that consecutive frames within a short time have similar coherent appearance. The coherence of color can be used to design pretext tasks for self-supervised learning. One way to utilize color coherence is to use video colorization as a pretext task for self-supervised video feature learning.
Video colorization is a task to colorize gray-scale frames into colorful frames. Vondrick et al. proposed to constrain colorization models to solve video colorization by learning to copy colors from a reference frame [[145](#bib.bib145)]. Given the reference RGB frame and a gray-scale image, the network needs to learn the internal connection between the reference RGB frame and gray-scale image to colorize it.
Another perspective is to tackle video colorization by employing a fully convolution neural network. Tran et al. proposed an U-shape convolution neural network for video colorization [[160](#bib.bib160)]. The network is an encoder-decoder based 3DConvNet. The input of the network is a clip of grayscale video clip, while the output if a colorful video clip. The encoder is a bunch of 3D convolution layers to extract features while the decoder is a bunch of 3D deconvolution layers to generate colorful video clips from the extracted feature.
The color coherence in videos is a strong supervision signal. However, only a few work studied to employ it for self-supervised video feature learning [[145](#bib.bib145)]. More work can be done by studying using color coherence as a supervision signal for self-supervised video feature learning.
####
7.1.3 Learning from Video Prediction

Fig. 21: The architecture for video prediction task proposed by [[147](#bib.bib147)]. Figure is reproduced based on [[147](#bib.bib147)].
Video prediction is a task of predicting future frame sequences based on a limited number of frames of a video. To predict future frames, network must learn the change in appearance within a given frame sequence. The pioneer of applying deep learning for video prediction is Un-LSTM [[37](#bib.bib37)]. Due to the powerful ability of modeling long-term dynamic in videos, LSTM is used in both the encoder and decoder [[37](#bib.bib37)].
Many methods have been proposed for video prediction [[37](#bib.bib37), [147](#bib.bib147), [161](#bib.bib161), [162](#bib.bib162), [163](#bib.bib163), [164](#bib.bib164), [165](#bib.bib165)]. Since its superior ability to model temporal dynamics, most of them use LSTM or LSTM variant to encode temporal dynamics in videos or to infer the future frames [[37](#bib.bib37), [147](#bib.bib147), [146](#bib.bib146), [164](#bib.bib164), [165](#bib.bib165)]. These methods can be employed for self-supervised feature learning without using human-annotations.
Most of the frameworks follow the encoder-decoder pipeline in which the encoder to model spatial and temporal features from the given video clips and the decoder to generate future frames based on feature extracted by encoder. Fig. [21](#S7.F21 "Fig. 21 ‣ 7.1.3 Learning from Video Prediction ‣ 7.1 Generation-based Video Feature Learning ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") shows a pipeline of MCnet proposed by Villegas et al. in [[147](#bib.bib147)]. McNet is built on Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for video prediction. It has two encoders, one is Content Encoder to capture the spatial layout of an image, and the other is Motion Encoder to model temporal dynamics within video clips. The spatial features and temporal features are concatenated to feed to the decoder to generate the next frame. By separately modeling temporal and spatial features, this model can effectively generate future frames recursively.
Video prediction is a self-supervised learning task and the learned features can be transferred to other tasks. However, no work has been done to study the generalization ability of features learned by video prediction. Generally, The Structural Similarity Index (SSIM) and Peak Signal to Noise Ratio (PSNR) are employed to evaluate the difference between the generated frame sequence and the ground truth frame sequence.
###
7.2 Temporal Context-based Learning

Fig. 22: The pipeline of Shuffle and Learn [[40](#bib.bib40)]. The network is trained to verify whether the input frames are in correct temporal order. Figure is reproduced based on [[40](#bib.bib40)].
Videos consist of various lengths of frames which have rich spatial and temporal information. The inherent temporal information within videos can be used as supervision signal for self-supervised feature learning. Various pretext tasks have been proposed by utilizing temporal context relations including temporal order verification [[40](#bib.bib40), [29](#bib.bib29), [90](#bib.bib90)] and temporal order recognition [[39](#bib.bib39), [27](#bib.bib27)]. Temporal order verification is to verify whether a sequence of input frames is in correct temporal order, while temporal order recognition is to recognize the order of a sequence of input frames.
As shown in Fig. [22](#S7.F22 "Fig. 22 ‣ 7.2 Temporal Context-based Learning ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), Misra et al. proposed to use the temporal order verification as the pretext task to learn image features from videos with 2DConvNet [[40](#bib.bib40)] which has two main steps: (1) The frames with significant motions are sampled from videos according to the magnitude of optical flow, (2) The sampled frames are shuffled and fed to the network which is trained to verify whether the input data is in correct order. To successfully verify the order of the input frames, the network is required to capture the subtle difference between the frames such as the movement of the person. Therefore, semantic features can be learned through the process of accomplishing this task. The temporal order recognition tasks use networks of similar architecture.
However, the methods usually suffer from a massive dataset preparation step. The frame sequences that used to train the network are selected based on the magnitude of the optical flow, and the computation process of optical flow is expensive and slow. Therefore, more straightforward and time-efficiency methods are needed for self-supervised video feature learning.
###
7.3 Cross Modal-based Learning
Cross modal-based learning methods usually learn video features from the correspondence of multiple data streams including RGB frame sequence, optical flow sequence, audio data, and camera pose.
In addition to rich temporal and spatial information in videos, optical flow sequence can be generated to specifically indicate the motion in videos, and the difference of frames can be computed with negligible time and space-time complexity to indicate the boundary of the moving objects. Similarly, audio data also provide a useful hint about the content of videos. Based on the type of data used, these methods fall into three groups: (1) methods that learn features by using the RGB and optical flow correspondence [[24](#bib.bib24), [23](#bib.bib23)], (2) methods that learn features by utilizing the video and audio correspondence [[25](#bib.bib25), [93](#bib.bib93)], (3) ego-motion that learn by utilizing the correspondence between egocentric video and ego-motor sensor signals [[95](#bib.bib95), [94](#bib.bib94)]. Usually, the network is trained to recognize if the two kinds of input data are corresponding to each other [[24](#bib.bib24), [25](#bib.bib25)], or is trained to learn the transformation between different modalities [[94](#bib.bib94)].
####
7.3.1 Learning from RGB-Flow Correspondence
Optical flow encodes object motions between adjacent frames, while RGB frames contain appearance information. The correspondence of the two types of data can be used to learn general features [[23](#bib.bib23), [24](#bib.bib24), [151](#bib.bib151), [152](#bib.bib152)]. This type of pretext tasks include optical flow estimation [[151](#bib.bib151), [152](#bib.bib152)] and RGB and optical flow correspondence verification [[23](#bib.bib23)].
Sayed et al. proposed to learn video features by verifying whether the input RGB frames and the optical flow corresponding to each other. Two networks are employed while one is for extracting features from RGB input and another is for extracting features from optical flow input [[24](#bib.bib24)]. To verify whether two input data correspond to each other, the network needs to capture mutual information between the two modalities. The mutual information across different modalities usually has higher semantic meaning compared to information which is modality specific. Through this pretext task, the mutual information that invariant to specific modality can be captured by ConvNet.
Optical flow estimation is another type of pretext tasks that can be used for self-supervised video feature learning. Fischer et al. proposed FlowNet which is an end-to-end convolution neural network for optical flow estimation from two consecutive frames [[151](#bib.bib151), [152](#bib.bib152)]. To correctly estimate optical flow from two frames, the ConvNet needs to capture appearance changes of two frames. Optical flow estimation can be used for self-supervised feature learning because it can be automatically generated by simulators such as game engines or by hard-code programs without human annotation.
####
7.3.2 Learning from Visual-Audio Correspondence
Recently, some researchers proposed to use the correspondence between visual and audio streams to design “Visual-Audio Correspondence” learning task [[93](#bib.bib93), [25](#bib.bib25), [26](#bib.bib26), [154](#bib.bib154)].

Fig. 23: The architecture of video and audio correspondence verification task [[93](#bib.bib93)].
The general framework of this type of pretext tasks is shown in Fig. [23](#S7.F23 "Fig. 23 ‣ 7.3.2 Learning from Visual-Audio Correspondence ‣ 7.3 Cross Modal-based Learning ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"). There are two subnetworks: the vision subnetwork and the audio subnetwork. The input of vision subnetwork is a single frame or a stack of image frames and the vision subnetwork learns to capture visual features of the input data. The audio network is a 2DConvNet and the input is the Fast Fourier Transform (FFT) of the audio from the video. Positive data are sampled by extracting video frames and audio from the same time of one video, while negative training data are generated by extracting video frames and audio from different videos or from different times of one video. Therefore, the networks are trained to discover the correlation of video data and audio data to accomplish this task.
Since the inputs of the ConvNets are two kinds of data, the networks are able to learn the two kinds of information jointly by solving the pretext task. The performance of the two networks obtained very good performance on the downstream applications [[25](#bib.bib25)].
####
7.3.3 Ego-motion
With the self-driving car which usually equipped with various sensors, the large-scale egocentric video along with ego-motor signal can be easily collected with very low cost by driving the car in the street. Recently, some researchers proposed to use the correspondence between visual signal and motor signal for self-supervised feature learning [[155](#bib.bib155), [94](#bib.bib94), [95](#bib.bib95)].

Fig. 24: The architecture of camera pose transformation estimation from egocentric videos [[94](#bib.bib94)].
The underline intuition of this type of methods is that a self-driving car can be treated as a camera moving in a scene and thus the egomotion of the visual data captured by the camera is as same as that of the car. Therefore, the correspondence between visual data and egomotion can be utilized for self-supervised feature learning. A typical network of using ego-motor signal is shown in Fig. [24](#S7.F24 "Fig. 24 ‣ 7.3.3 Ego-motion ‣ 7.3 Cross Modal-based Learning ‣ 7 Video Feature Learning ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") proposed by Agrawal et al. for self-supervised image feature learning [[94](#bib.bib94)]. The inputs to the network are two frames sampled from an egocentric video within a short time. The labels for the network indicate the rotation and translation relation between the two sampled images which can be derived from the odometry data of the dataset. With this task, the ConvNet is forced to identify visual elements that are present in both sampled images.
The ego-motor signal is a type of accurate supervision signal. In addition to directly applying it for self-supervised feature learning, it has also been used for unsupervised learning of depth and ego-motion [[155](#bib.bib155)]. All these networks can be used for self-supervised feature learning and transferred for downstream tasks.
8 Performance Comparison
-------------------------
This section compares the performance of image and video feature self-supervised learning methods on public datasets. For image feature self-supervised learning, the performance on downstream tasks including image classification, semantic segmentation, and object detection are compared. For video feature self-supervised learning, the performance on a downstream task which is human action recognition in videos is reported.
###
8.1 Performance of Image Feature Learning
As described in Section 4.3, the quality of features learned by self-supervised learned models is evaluated by fine-tuning them on downstream tasks such as semantic segmentation, object detection, and image classification. This section summarizes the performance of the existing image feature self-supervised learning methods.
Table [IV](#S8.T4 "TABLE IV ‣ 8.1 Performance of Image Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") lists the performance of image classification performance on ImageNet [[13](#bib.bib13)] and Places [[107](#bib.bib107)] datasets. During self-supervised pretext tasks training, most of the methods are trained on ImageNet dataset with AlexNet as based network without using the category labels. After pretext task self-supervised training finished, a linear classifier is trained on top of different frozen convolutional layers of the ConvNet on the training split of ImageNet and Places datasets. The classification performances on the two datasets are used to demonstrate the quality of the learned features.
As shown in Table [IV](#S8.T4 "TABLE IV ‣ 8.1 Performance of Image Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the overall performance of the self-supervised models is lower than that of models trained either with ImageNet labels or with Places labels. Among all the self-supervised methods, the DeepCluster [[44](#bib.bib44)] achieved the best performance on the two dataset. Three conclusions can be drawn based on the performance from the Table: (1) The features from different layers are always benefited from the self-supervised pretext task training. The performance of self-supervised learning methods is always better than the performance of the model trained from scratch. (2) All of the self-supervised methods perform well with the features from conv3 and conv4 layers while performing worse with the features from conv1, conv2, and conv5 layers. This is probably because shallow layers capture general low-level features, while deep layers capture pretext task-related features. (3) When there is a domain gap between dataset for pretext task training and the dataset of downstream task, the self-supervised learning method is able to reach comparable performance with the model trained with ImageNet labels.
| | | | |
| --- | --- | --- | --- |
| | | ImageNet | Places |
| Method | Pretext Tasks | conv1 | conv2 | conv3 | conv4 | conv5 | conv1 | conv2 | conv3 | conv4 | conv5 |
| Places labels [[8](#bib.bib8)] | — | — | — | — | — | — | 22.1 | 35.1 | 40.2 | 43.3 | 44.6 |
| ImageNet labels [[8](#bib.bib8)] | — | 19.3 | 36.3 | 44.2 | 48.3 | 50.5 | 22.7 | 34.8 | 38.4 | 39.4 | 38.7 |
| Random(Scratch) [[8](#bib.bib8)] | — | 11.6 | 17.1 | 16.9 | 16.3 | 14.1 | 15.7 | 20.3 | 19.8 | 19.1 | 17.5 |
| ColorfulColorization [[18](#bib.bib18)] | Generation | 12.5 | 24.5 | 30.4 | 31.5 | 30.3 | 16.0 | 25.7 | 29.6 | 30.3 | 29.7 |
| BiGAN [[122](#bib.bib122)] | Generation | 17.7 | 24.5 | 31.0 | 29.9 | 28.0 | 21.4 | 26.2 | 27.1 | 26.1 | 24.0 |
| SplitBrain [[42](#bib.bib42)] | Generation | 17.7 | 29.3 | 35.4 | 35.2 | 32.8 | 21.3 | 30.7 | 34.0 | 34.1 | 32.5 |
| ContextEncoder [[19](#bib.bib19)] | Context | 14.1 | 20.7 | 21.0 | 19.8 | 15.5 | 18.2 | 23.2 | 23.4 | 21.9 | 18.4 |
| ContextPrediction [[41](#bib.bib41)] | Context | 16.2 | 23.3 | 30.2 | 31.7 | 29.6 | 19.7 | 26.7 | 31.9 | 32.7 | 30.9 |
| Jigsaw [[20](#bib.bib20)] | Context | 18.2 | 28.8 | 34.0 | 33.9 | 27.1 | 23.0 | 32.1 | 35.5 | 34.8 | 31.3 |
| Learning2Count [[130](#bib.bib130)] | Context | 18.0 | 30.6 | 34.3 | 32.5 | 25.7 | 23.3 | 33.9 | 36.3 | 34.7 | 29.6 |
| DeepClustering [[44](#bib.bib44)] | Context | 13.4 | 32.3 | 41.0 | 39.6 | 38.2 | 19.6 | 33.2 | 39.2 | 39.8 | 34.7 |
TABLE IV:
Linear classification on ImageNet and Places datasets using activations from the convolutional layers of an AlexNet as features. ”Convn” means the linear classifier is trained based on the n-th convolution layer of AlexNet. ”Places Labels” and ”ImageNet Labels” indicate using supervised model trained with human-annotated labels as the pre-trained model.
| | | | | |
| --- | --- | --- | --- | --- |
| Method | Pretext Tasks | Classification | Detection | Segmentation |
| ImageNet Labels [[8](#bib.bib8)] | — | 79.9 | 56.8 | 48.0 |
| Random(Scratch) [[8](#bib.bib8)] | — | 57.0 | 44.5 | 30.1 |
| ContextEncoder [[19](#bib.bib19)] | Generation | 56.5 | 44.5 | 29.7 |
| BiGAN [[122](#bib.bib122)] | Generation | 60.1 | 46.9 | 35.2 |
| ColorfulColorization [[18](#bib.bib18)] | Generation | 65.9 | 46.9 | 35.6 |
| SplitBrain [[42](#bib.bib42)] | Generation | 67.1 | 46.7 | 36.0 |
| RankVideo [[38](#bib.bib38)] | Context | 63.1 | 47.2 | 35.4† |
| PredictNoise [[46](#bib.bib46)] | Context | 65.3 | 49.4 | 37.1† |
| JigsawPuzzle [[20](#bib.bib20)] | Context | 67.6 | 53.2 | 37.6 |
| ContextPrediction [[41](#bib.bib41)] | Context | 65.3 | 51.1 | — |
| Learning2Count [[130](#bib.bib130)] | Context | 67.7 | 51.4 | 36.6 |
| DeepClustering [[44](#bib.bib44)] | Context | 73.7 | 55.4 | 45.1 |
| WatchingVideo [[81](#bib.bib81)] | Free Semantic Label | 61.0 | 52.2 | — |
| CrossDomain [[30](#bib.bib30)] | Free Semantic Label | 68.0 | 52.6 | — |
| AmbientSound [[154](#bib.bib154)] | Cross Modal | 61.3 | — | — |
| TiedToEgoMotion [[95](#bib.bib95)] | Cross Modal | — | 41.7 | — |
| EgoMotion [[94](#bib.bib94)] | Cross Modal | 54.2 | 43.9 | — |
TABLE V:
Comparison of the self-supervised image feature learning methods on classification, detection, and segmentation on Pascal VOC dataset. ”ImageNet Labels” indicates using supervised model trained with human-annotated labels as the pre-trained model.
In addition to image classification, object detection and semantic segmentation are also used as the downstream tasks to evaluate the quality of the features learned by self-supervised learning. Usually, ImageNet is used for self-supervised pretext task pre-training by discarding category labels, while the AlexNet is used as the base network and fine-tuned on the three tasks. Table [V](#S8.T5 "TABLE V ‣ 8.1 Performance of Image Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") lists the performance of image classification, object detection, and semantic segmentation tasks on the PASCAL VOC dataset. The performance of classification and detection is obtained by testing the model on the test split of PASCAL VOC 2007 dataset, while the performance of semantic segmentation is obtained by testing the model on the validation split of PASCAL VOC 2012 dataset.
As shown in Table [V](#S8.T5 "TABLE V ‣ 8.1 Performance of Image Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the performance of the self-supervised models on segmentation and detection dataset are very close to that of the supervised method which is trained with ImageNet labels during pre-training. Specifically, the margins of the performance differences on the object detection and semantic segmentation tasks are less than 3% which indicate that the learned features by self-supervised learning have a good generalization ability. Among all the self-supervised learning methods, the DeepClustering [[44](#bib.bib44)] obtained the best performance on all the tasks.
###
8.2 Performance of Video Feature Learning
| Method | Pretext Task | UCF101 | HMDB51 |
| --- | --- | --- | --- |
| Kinetics Labels\* [[70](#bib.bib70)] | — | 84.4 | 56.4 |
| VideoGAN [[85](#bib.bib85)] | Generation | 52.1 | — |
| VideoRank [[38](#bib.bib38)] | Context | 40.7 | 15.6 |
| ShuffleLearn [[40](#bib.bib40)] | Context | 50.9 | 19.8 |
| OPN [[29](#bib.bib29)] | Context | 56.3 | 22.1 |
| RL [[35](#bib.bib35)] | Context | 58.6 | 25.0 |
| AOT [[90](#bib.bib90)] | Context | 58.6 | — |
| 3DRotNet [[28](#bib.bib28)] | Context | 62.9 | 33.7 |
| CubicPuzzle\* [[27](#bib.bib27)] | Context | 65.8 | 33.7 |
| RGB-Flow [[24](#bib.bib24)] | Cross Modal | 59.3 | 27.7 |
| PoseAction [[48](#bib.bib48)] | Cross Modal | 55.4 | 23.6 |
TABLE VI: Comparison of the existing self-supervised methods for action recognition on the UCF101 and HMDB51 datasets. \* indicates the average accuracy over three splits. ”Kinetics Labels” indicates using supervised model trained with human-annotated labels as the pre-trained model.
For self-supervised video feature learning methods, human action recognition task is used to evaluate the quality of learned features. Various video datasets have been used for self-supervised pre-training, and different network architectures have been used as the base network. Usually after the pretext task pre-training finished, networks are fine-tuned and tested on the commonly used UCF101 and HMDB51 datasets for human action recognition task. Table [VI](#S8.T6 "TABLE VI ‣ 8.2 Performance of Video Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey") compares the performance of existing self-supervised video feature learning methods on UCF101 and HMDB51 datasets.
As shown in Table [VI](#S8.T6 "TABLE VI ‣ 8.2 Performance of Video Feature Learning ‣ 8 Performance Comparison ‣ Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey"), the best performance of the fine-tune results on UCF101 is less than 66%. However, the supervised model which trained with Kinetics labels can easily obtain an accuracy of more than 84%. The performance of the self-supervised model is still much lower than the performance of the supervised model. More effective self-supervised video feature learning methods are desired.
###
8.3 Summary
Based on the results, conclusions can be drawn about the performance and reproducibility of the self-supervised learning methods.
Performance: For image feature self-supervised learning, due to the well-designed pretext tasks, the performance of self-supervised methods are comparable to the supervised methods on some downstream tasks, especially for the object detection and semantic segmentation tasks. The margins of the performance differences on the object detection and semantic segmentation tasks are less than 3% which indicate that the learned features by self-supervised learning have a good generalization ability. However, the performance of video feature self-supervised learning methods is still much lower than that of the supervised models on downstream tasks. The best performance of the 3DConvNet-based methods on UCF101 dataset is more than 18% lower than that of the supervised model [[70](#bib.bib70)]. The poor performance of 3DCovnNet self-supervised learning methods probably because 3DConvNets usually have more parameters which lead to easily over-fitting and the complexity of video feature learning due to the temporal dimension of the video.
Reproducibility: As we can observe, for the image feature self-supervised learning methods, most of the networks use AlexNet as a base network to pre-train on ImageNet dataset and then evaluate on same downstream tasks for quality evaluation. Also, the code of most methods are released which is a great help for reproducing results. However, for the video self-supervised learning, various datasets and networks have been used for self-supervised pre-training, therefore, it is unfair to directly compare different methods. Furthermore, some methods use UCF101 as self-supervised pre-training dataset which is a relatively small video dataset. With this size of the dataset, the power of a more powerful model such as 3DCovnNet may not be fully discovered and may suffer from server over-fitting. Therefore, larger datasets for video feature self-supervised pre-training should be used.
Evaluation Metrics: Another fact is that more evaluation metrics are needed to evaluate the quality of the learned features in different levels. The current solution is to use the performance on downstream tasks to indicate the quality of the features. However, this evaluation metric does not give insight what the network learned through the self-supervsied pre-training. More evaluation metrics such as network dissection [[78](#bib.bib78)] should be employed to analysis the interpretability of the self-supervised learned features.
9 Future Directions
--------------------
Self-supervised learning methods have been achieving great success and obtaining good performance that close to supervised models on some computer vision tasks. Here, some future directions of self-supervised learning are discussed.
Learning Features from Synthetic Data: A rising trend of self-supervised learning is to train networks with synthetic data which can be easily rendered by game engines with very limited human involvement. With the help of game engines, millions of synthetic images and videos with accuracy pixel-level annotations can be easily generated. With accurate and detailed annotations, various pretext tasks can be designed to learn features from synthetic data. One problem needed to solve is how to bridge the domain gap between synthetic data and real-world data. Only a few work explored self-supervised learning from synthetic data by using GAN to bridge the domain gap [[30](#bib.bib30), [166](#bib.bib166)]. With more available large-scale synthetic data, more self-supervised learning methods will be proposed.
Learning from Web Data: Another rising trend is to train networks with web collected data [[167](#bib.bib167), [22](#bib.bib22), [168](#bib.bib168)] based on their existing associated tags. With the search engine, millions of images and videos can be downloaded from websites like Flickr and YouTube with negligible cost. In addition to its raw data, the title, keywords, and reviews can also be available as part of the data which can be used as extra information to train networks. With carefully curated queries, the web data retrieved by reliable search engines can be relatively clean. With large-scale web data and their associated metadata, the performance of self-supervised methods may be boosted up. One open problem about learning from web data is how to handle the noise in web data and their associated metadata.
Learning Spatiotemporal Features from Videos: Self-supervised image feature learning has been well studied and the margin of the performance between supervised models and self-supervised models are very small on some downstream tasks such as semantic segmentation and object detection. However, self-supervised video spatiotemporal feature learning with 3DConvNet is not well addressed yet. More effective pretext tasks that specifically designed to learn spatiotemporal features from videos are needed.
Learning with Data from Different Sensors: Most existing self-supervised visual feature learning methods focused on only images or videos. However, if other types of data from different sensors are available, the constraint between different types of data can be used as additional sources to train networks to learn features [[155](#bib.bib155)]. The self-driving cars usually are equipped with various sensors including RGB cameras, gray-scale cameras, 3D laser scanners, and high-precision GPS measurements and IMU accelerations. Very large-scale datasets can be easily obtained through the driving, and the correspondence of data captured by different devices can be used as a supervision signal for self-supervised feature learning.
Learning with Multiple Pretext Tasks: Most existing self-supervised visual feature learning methods learn features by training ConvNet to solve one pretext tasks. Different pretext tasks provide different supervision signals which can help the network learn more representative features. Only a few work explored the multiple pretext tasks learning for self-supervised feature learning [[32](#bib.bib32), [30](#bib.bib30)]. More work can be done by studying the multiple pretext task self-supervised feature learning.
10 Conclusion
--------------
Self-supervised image feature learning with deep convolution neural network has obtained great success and the margin between the performance of self-supervised methods and that of supervised methods on some downstream tasks becomes very small. This paper has extensively reviewed recently deep convolution neural network-based methods for self-supervised image and video feature learning from all perspectives including common network architectures, pretext tasks, algorithms, datasets, performance comparison, discussions, and future directions etc. The comparative summary of the methods, datasets, and performance in tabular forms clearly demonstrate their properties which will benefit researchers in the computer vision community. |
de86d551-1611-42e9-9569-bff97a276a0a | trentmkelly/LessWrong-43k | LessWrong | Open Thread, April 27-May 4, 2014
You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
And, while this is an accidental exception, future open threads should start on Mondays until further notice. |
1e30065f-20ac-442d-a0e5-b72c5c5a13e9 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Representation Learning with Contrastive Predictive Coding
1 Introduction
---------------
Learning high-level representations from labeled data with layered differentiable models in an end-to-end fashion is one of the biggest successes in artificial intelligence so far. These techniques made manually specified features largely redundant and have greatly improved state-of-the-art in several real-world applications [krizhevsky2012imagenet](#bib.bib1) ; [hinton2012deep](#bib.bib2) ; [sutskever2014sequence](#bib.bib3) . However, many challenges remain, such as data efficiency, robustness or generalization.
Improving representation learning requires features that are less specialized towards solving a single supervised task. For example, when pre-training a model to do image classification, the induced features transfer reasonably well to other image classification domains, but also lack certain information such as color or the ability to count that are irrelevant for classification but relevant for e.g. image captioning [showandtell](#bib.bib4) . Similarly, features that are useful to transcribe human speech may be less suited for speaker identification, or music genre prediction. Thus, unsupervised learning is an important stepping stone towards robust and generic representation learning.
Despite its importance, unsupervised learning is yet to see a breakthrough similar to supervised learning: modeling high-level representations from raw observations remains elusive. Further, it is not always clear what the ideal representation is and if it is possible that one can learn such a representation without additional supervision or specialization to a particular data modality.
One of the most common strategies for unsupervised learning has been to predict future, missing or contextual information. This idea of predictive coding [elias1955predictive](#bib.bib5) ; [atal1970adaptive](#bib.bib6) is one of the oldest techniques in signal processing for data compression. In neuroscience, predictive coding theories suggest that the brain predicts observations at various levels of abstraction [rao1999predictive](#bib.bib7) ; [friston2005theory](#bib.bib8) . Recent work in unsupervised learning has successfully used these ideas to learn word representations by predicting neighboring words [mikolov2013efficient](#bib.bib9) . For images, predicting color from grey-scale or the relative position of image patches has also been shown useful [zhang2016colorful](#bib.bib10) ; [Doersch\_2015\_ICCV](#bib.bib11) .
We hypothesize that these approaches are fruitful partly because the context from which we predict related values are often conditionally dependent on the same shared high-level latent information. And by casting this as a prediction problem, we automatically infer these features of interest to representation learning.
In this paper we propose the following: first, we compress high-dimensional data into a much more compact latent embedding space in which conditional predictions are easier to model. Secondly, we use powerful autoregressive models in this latent space to make predictions many steps in the future. Finally, we rely on Noise-Contrastive Estimation [gutmann2010noise](#bib.bib12) for the loss function in similar ways that have been used for learning word embeddings in natural language models, allowing for the whole model to be trained end-to-end. We apply the resulting model, Contrastive Predictive Coding (CPC) to widely different data modalities, images, speech, natural language and reinforcement learning, and show that the same mechanism learns interesting high-level information on each of these domains, outperforming other approaches.

Figure 1: Overview of Contrastive Predictive Coding, the proposed representation learning approach. Although this figure shows audio as input, we use the same setup for images, text and reinforcement learning.
2 Contrastive Predicting Coding
--------------------------------
We start this section by motivating and giving intuitions behind our approach. Next, we introduce the architecture of Contrastive Predictive Coding (CPC). After that we explain the loss function that is based on Noise-Contrastive Estimation. Lastly, we discuss related work to CPC.
###
2.1 Motivation and Intuitions
The main intuition behind our model is to learn the representations that encode the underlying shared information between different parts of the (high-dimensional) signal. At the same time it discards low-level information and noise that is more local.
In time series and high-dimensional modeling, approaches that use next step prediction exploit the local smoothness of the signal.
When predicting further in the future, the amount of shared information becomes much lower, and the model needs to infer more global structure. These ’slow features’ [wiskott2002slow](#bib.bib13) that span many time steps are often more interesting (e.g., phonemes and intonation in speech, objects in images, or the story line in books.).
One of the challenges of predicting high-dimensional data is that unimodal losses such as mean-squared error and cross-entropy are not very useful, and powerful conditional generative models which need to reconstruct every detail in the data are usually required. But these models are computationally intense, and waste capacity at modeling the complex relationships in the data x, often ignoring the context c. For example, images may contain thousands of bits of information while the high-level latent variables such as the class label contain much less information (10 bits for 1,024 categories). This suggests that modeling p(x|c) directly may not be optimal for the purpose of extracting shared information between x and c. When predicting future information we instead encode the target x (future) and context c (present) into a compact distributed vector representations (via non-linear learned mappings) in a way that maximally preserves the mutual information of the original signals x and c defined as
| | | | | |
| --- | --- | --- | --- | --- |
| | I(x;c) | =∑x,cp(x,c)logp(x|c)p(x). | | (1) |
By maximizing the mutual information between the encoded representations (which is bounded by the MI between the input signals), we extract the underlying latent variables the inputs have in commmon.
###
2.2 Contrastive Predictive Coding
Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Representation Learning with Contrastive Predictive Coding") shows the architecture of Contrastive Predictive Coding models. First, a non-linear encoder genc maps the input sequence of observations xt to a sequence of latent representations zt=genc(xt), potentially with a lower temporal resolution. Next, an autoregressive model gar summarizes all z≤t in the latent space and produces a context latent representation ct=gar(z≤t).
As argued in the previous section we do not predict future observations xt+k directly with a generative model pk(xt+k|ct). Instead we model a density ratio which preserves the mutual information between xt+k and ct (Equation [1](#S2.E1 "(1) ‣ 2.1 Motivation and Intuitions ‣ 2 Contrastive Predicting Coding ‣ Representation Learning with Contrastive Predictive Coding")) as follows (see next sub-section for further details):
| | | | |
| --- | --- | --- | --- |
| | fk(xt+k,ct)∝p(xt+k|ct)p(xt+k) | | (2) |
where ∝ stands for ’proportional to’ (i.e. up to a multiplicative constant). Note that the density ratio f can be unnormalized (does not have to integrate to 1). Although any positive real score can be used here, we use a simple log-bilinear model:
| | | | |
| --- | --- | --- | --- |
| | fk(xt+k,ct)=exp(zTt+kWkct), | | (3) |
In our experiments a linear transformation WTkct is used for the prediction with a different Wk for every step k. Alternatively, non-linear networks or recurrent neural networks could be used.
By using a density ratio f(xt+k,ct) and inferring zt+k with an encoder, we relieve the model from modeling the high dimensional distribution xtk. Although we cannot evaluate p(x) or p(x|c) directly, we can use samples from these distributions, allowing us to use techniques such as Noise-Contrastive Estimation [gutmann2010noise](#bib.bib12) ; [mnih2012fast](#bib.bib14) and Importance Sampling [bengio2008is](#bib.bib15) that are based on comparing the target value with randomly sampled negative values.
In the proposed model, either of zt and ct could be used as representation for downstream tasks.
The autoregressive model output ct can be used if extra context from the past is useful. One such example is speech recognition, where the receptive field of zt might not contain enough information to capture phonetic content. In other cases, where no additional context is required, zt might instead be better. If the downstream task requires one representation for the whole sequence, as in e.g. image classification, one can pool the representations from either zt or ct over all locations.
Finally, note that any type of encoder and autoregressive model can be used in the proposed framework. For simplicity we opted for standard architectures such as strided convolutional layers with resnet blocks for the encoder, and GRUs [cho2014learning](#bib.bib16) for the autoregresssive model. More recent advancements in autoregressive modeling such as masked convolutional architectures [oord2016wavenet](#bib.bib17) ; [aaron2016pixelcnn](#bib.bib18) or self-attention networks [attentionNIPS2017](#bib.bib19) could help improve results further.
###
2.3 Noise Contrastive Estimation Loss
Both the encoder and autoregressive model are trained to jointly optimize a loss based on NCE. Given a set X={x1,…xN} of N random samples containing one positive sample from p(xt+k|ct) and N−1 negative samples from the ’proposal’ distribution p(xt+k), we optimize:
| | | | | |
| --- | --- | --- | --- | --- |
| | LN | =−EX⎡⎣logfk(xt+k,ct)∑xj∈Xfk(xj,ct)⎤⎦ | | (4) |
Optimizing this loss will result in fk(xt+k,ct) estimating the density ratio in equation [2](#S2.E2 "(2) ‣ 2.2 Contrastive Predictive Coding ‣ 2 Contrastive Predicting Coding ‣ Representation Learning with Contrastive Predictive Coding"). This can be shown as follows.
The loss in Equation [4](#S2.E4 "(4) ‣ 2.3 Noise Contrastive Estimation Loss ‣ 2 Contrastive Predicting Coding ‣ Representation Learning with Contrastive Predictive Coding") is the categorical cross-entropy of classifying the positive sample correctly, with fk∑Xfk being the prediction of the model. Let us write the optimal probability for this loss as p(d=i|X,ct) with [d=i] being the indicator that sample xi is the ’positive’ sample. The probability that sample xi was drawn from the conditional distribution p(xt+k|ct) rather than the proposal distribution p(xt+k) can be derived as follows:
| | | | |
| --- | --- | --- | --- |
| | p(d=i|X,ct) | =p(xi|ct)∏l≠ip(xl)∑Nj=1p(xj|ct)∏l≠jp(xl) | |
| | | =p(xi|ct)p(xi)∑Nj=1p(xj|ct)p(xj). | | (5) |
As we can see, the optimal value for f(xt+k,ct) in Equation [4](#S2.E4 "(4) ‣ 2.3 Noise Contrastive Estimation Loss ‣ 2 Contrastive Predicting Coding ‣ Representation Learning with Contrastive Predictive Coding") is proportional to p(xt+k|ct)p(xt+k) and this is independent of the the choice of the number of negative samples N−1.
Though not required for training, we can evaluate the mutual information between the variables ct and xt+k as follows:
| | | |
| --- | --- | --- |
| | I(xt+k,ct)≥log(N)−LN, | |
which becomes tighter as N becomes larger. Also observe that minimizing LN maximizes a lower bound on mutual information. For more details see Appendix.
###
2.4 Related Work
CPC is a new method that combines predicting future observations (predictive coding) with a probabilistic contrastive loss (Equation [4](#S2.E4 "(4) ‣ 2.3 Noise Contrastive Estimation Loss ‣ 2 Contrastive Predicting Coding ‣ Representation Learning with Contrastive Predictive Coding")). This allows us to extract slow features, which maximize the mutual information of observations over long time horizons. Contrastive losses and predictive coding have individually been used in different ways before, which we will now discuss.
Contrastive loss functions have been used by many authors in the past. For example, the techniques proposed by [chopra2005learning](#bib.bib20) ; [weinberger2009distance](#bib.bib21) ; [schroff2015facenet](#bib.bib22) were based on triplet losses using a max-margin approach to separate positive from negative examples. More recent work includes Time Contrastive Networks [sermanet2017time](#bib.bib23) which proposes to minimize distances between embeddings from multiple viewpoints of the same scene and whilst maximizing distances between embeddings extracted from different timesteps. In Time Contrastive Learning [NIPS2016\_6395](#bib.bib24) a contrastive loss is used to predict the segment-ID of multivariate time-series as a way to extract features and perform nonlinear ICA.
There has also been work and progress on defining prediction tasks from related observations as a way to extract useful representations, and many of these have been applied to language. In Word2Vec [mikolov2013efficient](#bib.bib9) neighbouring words are predicted using a contrastive loss. Skip-thought vectors [kiros2015skip](#bib.bib25) and Byte mLSTM [radford2017learning](#bib.bib26) are alternatives which go beyond word prediction with a Recurrent Neural Network, and use maximum likelihood over sequences of observations. In Computer Vision [wang2015unsupervised](#bib.bib27) use a triplet loss on tracked video patches so that patches from the same object at different timesteps are more similar to each other than to random patches. [Doersch\_2015\_ICCV](#bib.bib11) ; [noroozi2016unsupervised](#bib.bib28) propose to predict the relative postion of patches in an image and in [zhang2016colorful](#bib.bib10) color values are predicted from a greyscale images.
3 Experiments
--------------
We present benchmarks on four different application domains: speech, images, natural language and reinforcement learning. For every domain we train CPC models and probe what the representations contain with either a linear classification task or qualitative evaluations, and in reinforcement learning we measure how the auxiliary CPC loss speeds up learning of the agent.
###
3.1 Audio

Figure 2: t-SNE visualization of audio (speech) representations for a subset of 10 speakers (out of 251). Every color represents a different speaker.

Figure 3: Average accuracy of predicting the positive sample in the contrastive loss for 1 to 20 latent steps in the future of a speech waveform. The model predicts up to 200ms in the future as every step consists of 10ms of audio.
Method
ACC
Phone classification
Random initialization
27.6
MFCC features
39.7
CPC
64.6
Supervised
74.6
Speaker classification
Random initialization
1.87
MFCC features
17.6
CPC
97.4
Supervised
98.5
Table 1: LibriSpeech phone and speaker classification results. For phone classification there are 41 possible classes and for speaker classification 251. All models used the same architecture and the same audio input sizes.
Method
ACC
#steps predicted
2 steps
28.5
4 steps
57.6
8 steps
63.6
12 steps
64.6
16 steps
63.8
Negative samples from
Mixed speaker
64.6
Same speaker
65.5
Mixed speaker (excl.)
57.3
Same speaker (excl.)
64.6
Current sequence only
65.2
Table 2: LibriSpeech phone classification ablation experiments. More details can be found in Section [3.1](#S3.SS1 "3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding").
For audio, we use a 100-hour subset of the publicly available LibriSpeech dataset [panayotov2015librispeech](#bib.bib29) . Although the dataset does not provide labels other than the raw text, we obtained force-aligned phone sequences with the Kaldi toolkit [povey2011kaldi](#bib.bib30) and pre-trained models on Librispeech111www.kaldi-asr.org/downloads/build/6/trunk/egs/librispeech/. We have made the aligned phone labels and our train/test split available for download on Google Drive222<https://drive.google.com/drive/folders/1BhJ2umKH3whguxMwifaKtSra0TgAbtfb>.
The dataset contains speech from 251 different speakers.
The encoder architecture genc used in our experiments consists of a strided convolutional neural network that runs directly on the 16KHz PCM audio waveform. We use five convolutional layers with strides [5, 4, 2, 2, 2], filter-sizes [10, 8, 4, 4, 4] and 512 hidden units with ReLU activations. The total downsampling factor of the network is 160 so that there is a feature vector for every 10ms of speech, which is also the rate of the phoneme sequence labels obtained with Kaldi. We then use a GRU RNN [cho2014learning](#bib.bib16) for the autoregressive part of the model, gar with 256 dimensional hidden state. The output of the GRU at every timestep is used as the context c from which we predict 12 timesteps in the future using the contrastive loss. We train on sampled audio windows of length 20480. We use the Adam optimizer [kingma2014adam](#bib.bib31) with a learning rate of 2e-4, and use 8 GPUs each with a minibatch of 8 examples from which the negative samples in the contrastive loss are drawn. The model is trained until convergence, which happens roughly at 300,000 updates.
Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") shows the accuracy of the model to predict latents in the future, from 1 to 20 timesteps. We report the average number of times the logit for the positive sample is higher than for the negative samples in the probabilistic contrastive loss. This figure also shows that the objective is neither trivial nor impossible, and as expected the prediction task becomes harder as the target is further away.
To understand the representations extracted by CPC, we measure the phone prediction performance with a linear classifier trained on top of these features, which shows how linearly separable the relevant classes are under these features. We extract the outputs of the GRU (256 dimensional), i.e. ct, for the whole dataset after model convergence and train a multi-class linear logistic regression classifier. The results are shown in Table [2](#S3.T2 "Table 2 ‣ 3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") (top). We compare the accuracy with three baselines: representations from a random initialized model (i.e., genc and gar are untrained), MFCC features, and a model that is trained end-to-end supervised with the labeled data. These two models have the same architecture as the one used to extract the CPC representations. The fully supervised model serves as an indication for what is achievable with this architecture.
We also found that not all the information encoded is linearly accessible. When we used a single hidden layer instead the accuracy increases from 64.6 to 72.5, which is closer to the accuracy of the fully supervised model.
Table [2](#S3.T2 "Table 2 ‣ 3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") gives an overview of two ablation studies of CPC for phone classification. In the first set we vary the number of steps the model predicts showing that predicting multiple steps is important for learning useful features. In the second set we compare different strategies for drawing negative sample, all predicting 12 steps (which gave the best result in the first ablation). In the mixed speaker experiment the negative samples contain examples of different speakers (first row), in contrast to same speaker experiment (second row). In the third and fourth experiment we exclude the current sequence to draw negative samples from (so only other examples in the minibatch are present in X) and in the last experiment we only draw negative samples within the sequence (thus all samples are from the same speaker).
Beyond phone classification, Table [2](#S3.T2 "Table 2 ‣ 3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") (bottom) shows the accuracy of performing speaker identity (out of 251) with a linear classifier from the same representation (we do not average utterances over time). Interestingly, CPCs capture both speaker identity and speech contents, as demonstrated by the good accuracies attained with a simple linear classifier, which also gets close to the oracle, fully supervised networks.
Additionally, Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Audio ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") shows a t-SNE visualization [maaten2008visualizing](#bib.bib32) of how discriminative the embeddings are for speaker voice-characteristics.
It is important to note that the window size (maximum context size for the GRU) has a big impact on the performance, and longer segments would give better results. Our model had a maximum of 20480 timesteps to process, which is slightly longer than a second.
###
3.2 Vision

Figure 4: Visualization of Contrastive Predictive Coding for images (2D adaptation of Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Representation Learning with Contrastive Predictive Coding")).

Figure 5: Every row shows image patches that activate a certain neuron in the CPC architecture.
In our visual representation experiments we use the ILSVRC ImageNet competition dataset [ILSVRC15](#bib.bib33) . The ImageNet dataset has been used to evaluate unsupervised vision models by many authors [wang2015unsupervised](#bib.bib27) ; [Doersch\_2015\_ICCV](#bib.bib11) ; [donahue2016adversarial](#bib.bib34) ; [zhang2016colorful](#bib.bib10) ; [noroozi2016unsupervised](#bib.bib28) ; [doersch2017multi](#bib.bib35) . We follow the same setup as [doersch2017multi](#bib.bib35) and use a ResNet v2 101 architecture [he2016identity](#bib.bib36) as the image encoder genc to extract CPC representations (note that this encoder is not pretrained). We did not use Batch-Norm [ioffe2015batch](#bib.bib37) . After unsupervised training, a linear layer is trained to measure classification accuracy on ImageNet labels.
The training procedure is as follows: from a 256x256 image we extract a 7x7 grid of 64x64 crops with 32 pixels overlap. Each crop is encoded by the ResNet-v2-101 encoder. We use the outputs from the third residual block, and spatially mean-pool to get a single 1024-d vector per 64x64 patch. This results in a 7x7x1024 tensor. Next, we use a PixelCNN-style autoregressive model [aaron2016pixelcnn](#bib.bib18) (a convolutional row-GRU PixelRNN [aaron2016pixelrnn](#bib.bib38) gave similar results) to make predictions about the latent activations in following rows top-to-bottom, visualized in Figure [4](#S3.F4 "Figure 4 ‣ 3.2 Vision ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding"). We predict up to five rows from the 7x7 grid, and we apply the contrastive loss for each patch in the row. We used Adam optimizer with a learning rate of 2e-4 and trained on 32 GPUs each with a batch size of 16.
For the linear classifier trained on top of the CPC features we use SGD with a momentum of 0.9, a learning rate schedule of 0.1, 0.01 and 0.001 for 50k, 25k and 10k updates and batch size of 2048 on a single GPU. Note that when training the linear classifier we first spatially mean-pool the 7x7x1024 representation to a single 1024 dimensional vector. This is slightly different from [doersch2017multi](#bib.bib35) which uses a 3x3x1024 representation without pooling, and thus has more parameters in the supervised linear mapping (which could be advantageous).
Tables [4](#S3.T4 "Table 4 ‣ 3.2 Vision ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") and [4](#S3.T4 "Table 4 ‣ 3.2 Vision ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") show the top-1 and top-5 classification accuracies compared with the state-of-the-art. Despite being relatively domain agnostic, CPCs improve upon state-of-the-art by 9% absolute in top-1 accuracy, and 4% absolute in top-5 accuracy.
Method
Top-1 ACC
Using AlexNet conv5
Video [wang2015unsupervised](#bib.bib27)
29.8
Relative Position [Doersch\_2015\_ICCV](#bib.bib11)
30.4
BiGan [donahue2016adversarial](#bib.bib34)
34.8
Colorization [zhang2016colorful](#bib.bib10)
35.2
Jigsaw [noroozi2016unsupervised](#bib.bib28) \*
38.1
Using ResNet-V2
Motion Segmentation [doersch2017multi](#bib.bib35)
27.6
Exemplar [doersch2017multi](#bib.bib35)
31.5
Relative Position [doersch2017multi](#bib.bib35)
36.2
Colorization [doersch2017multi](#bib.bib35)
39.6
CPC
48.7
Table 3: ImageNet top-1 unsupervised classification results. \*Jigsaw is not directly comparable to the other AlexNet results because of architectural differences.
Method
Top-5 ACC
Motion Segmentation (MS)
48.3
Exemplar (Ex)
53.1
Relative Position (RP)
59.2
Colorization (Col)
62.5
Combination of
MS + Ex + RP + Col
69.3
CPC
73.6
Table 4: ImageNet top-5 unsupervised classification results. Previous results with MS, Ex, RP and Col were taken from [doersch2017multi](#bib.bib35) and are the best reported results on this task.
###
3.3 Natural Language
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Method | MR | CR | Subj | MPQA | TREC |
| Paragraph-vector [le2014distributed](#bib.bib39) | 74.8 | 78.1 | 90.5 | 74.2 | 91.8 |
| Skip-thought vector [kiros2015skip](#bib.bib25) | 75.5 | 79.3 | 92.1 | 86.9 | 91.4 |
| Skip-thought + LN [ba2016layernorm](#bib.bib40) | 79.5 | 82.6 | 93.4 | 89.0 | - |
| CPC | 76.9 | 80.1 | 91.2 | 87.7 | 96.8 |
Table 5: Classification accuracy on five common NLP benchmarks. We follow the same transfer learning setup from Skip-thought vectors [kiros2015skip](#bib.bib25) and use the BookCorpus dataset as source. [le2014distributed](#bib.bib39) is an unsupervised approach to learning sentence-level representations. [kiros2015skip](#bib.bib25) is an alternative unsupervised learning approach. [ba2016layernorm](#bib.bib40) is the same skip-thought model with layer normalization trained for 1M iterations.
Our natural language experiments follow closely the procedure from [kiros2015skip](#bib.bib25) which was used for the skip-thought vectors model. We first learn our unsupervised model on the BookCorpus dataset [zhu2015aligning](#bib.bib41) , and evaluate the capability of our model as a generic feature extractor by using CPC representations for a set of classification tasks.
To cope with words that are not seen during training, we employ vocabulary expansion the same way as [kiros2015skip](#bib.bib25) , where a linear mapping is constructed between word2vec and the word embeddings learned by the model.
For the classification tasks we used the following datasets: movie review sentiment (MR) [pang2005seeing](#bib.bib42) , customer product reviews (CR) [hu2004mining](#bib.bib43) , subjectivity/objectivity [pang2004sentimental](#bib.bib44) , opinion polarity (MPQA) [wiebe2005annotating](#bib.bib45) and question-type classification (TREC) [li2002learning](#bib.bib46) . As in [kiros2015skip](#bib.bib25) we train a logistic regression classifier and evaluate with 10-fold cross-validation for MR, CR, Subj, MPQA and use the train/test split for TREC. A L2 regularization weight was chosen via cross-validation (therefore nested cross-validation for the first 4 datasets).
Our model consists of a simple sentence encoder genc (a 1D-convolution + ReLU + mean-pooling) that embeds a whole sentence into a 2400-dimension vector z, followed by a GRU (2400 hidden units) which predicts up to 3 future sentence embeddings with the contrastive loss to form c. We used Adam optimizer with a learning rate of 2e-4 trained on 8 GPUs, each with a batch size of 64. We found that more advanced sentence encoders did not significantly improve the results, which may be due to the simplicity of the transfer tasks (e.g., in MPQA most datapoints consists of one or a few words), and the fact that bag-of-words models usually perform well on many NLP tasks [wang2012nlpclassification](#bib.bib47) .
Results on evaluation tasks are shown in Table [5](#S3.T5 "Table 5 ‣ 3.3 Natural Language ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") where we compare our model against other models that have been used using the same datasets. The performance of our method is very similar to the skip-thought vector model, with the advantage that it does not require a powerful LSTM as word-level decoder, therefore much faster to train. Although this is a standard transfer learning benchmark, we found that models that learn better relationships in the childeren books did not necessarily perform better on the target tasks (which are very different: movie reviews etc). We note that better [zhao2015self](#bib.bib48) ; [radford2017learning](#bib.bib26) results have been published on these target datasets, by transfer learning from a different source task.
###
3.4 Reinforcement Learning

Figure 6: Reinforcement Learning results for 5 DeepMind Lab tasks used in [lasse2018impala](#bib.bib49) . Black: batched A2C baseline, Red: with auxiliary contrastive loss.
Finally, we evaluate the proposed unsupervised learning approach on five reinforcement learning in 3D environments of DeepMind Lab [beattie2016deepmind](#bib.bib50) : rooms\_watermaze, explore\_goal\_locations\_small, seekavoid\_arena\_01, lasertag\_three\_opponents\_small and rooms\_keys\_doors\_puzzle.
This setup differs from the previous three. Here, we take the standard batched A2C [mnih2016asynchronous](#bib.bib51) agent as base model and add CPC as an auxiliary loss. We do not use a replay buffer, so the predictions have to adapt to the changing behavior of the policy. The learned representation encodes a distribution over its future observations.
Following the same approach as [lasse2018impala](#bib.bib49) , we perform a random search over the entropy regularization weight, the learning-rate and epsilon hyperparameters for RMSProp [hinton2012neural](#bib.bib52) . The unroll length for the A2C is 100 steps and we predict up to 30 steps in the future to derive the contrastive loss. The baseline agent consists of a convolutional encoder which maps every input frame into a single vector followed by a temporal LSTM. We use the same encoder as in the baseline agent and only add the linear prediction mappings for the contrastive loss, resulting in minimal overhead which also showcases the simplicity of implementing our method on top of an existing architecture that has been designed and tuned for a particular task. We refer to [lasse2018impala](#bib.bib49) for all other hyperparameter and implementation details.
Figure [6](#S3.F6 "Figure 6 ‣ 3.4 Reinforcement Learning ‣ 3 Experiments ‣ Representation Learning with Contrastive Predictive Coding") shows that for 4 out of the 5 games performance of the agent improves significantly with the contrastive loss after training on 1 billion frames. For lasertag\_three\_opponents\_small, contrastive loss does not help nor hurt. We suspect that this is due to the task design, which does not require memory and thus yields a purely reactive policy.
4 Conclusion
-------------
In this paper we presented Contrastive Predictive Coding (CPC), a framework for extracting compact latent representations to encode predictions over future observations. CPC combines autoregressive modeling and noise-contrastive estimation with intuitions from predictive coding to learn abstract representations in an unsupervised fashion.
We tested these representations in a wide variety of domains: audio, images, natural language and reinforcement learning and achieve strong or state-of-the-art performance when used as stand-alone features. The simplicity and low computational requirements to train the model, together with the encouraging results in challenging reinforcement learning domains when used in conjunction with the main loss are exciting developments towards useful unsupervised learning that applies universally to many more data modalities.
5 Acknowledgements
-------------------
We would like to thank Andriy Mnih, Andrew Zisserman, Alex Graves and Carl Doersch for their helpful comments on the paper and Lasse Espeholt for making the A2C baseline available. |
476f6e42-07b7-403a-9490-c4e2d5b955ed | trentmkelly/LessWrong-43k | LessWrong | Did any US politician react appropriately to COVID-19 early on?
Did any US politician call for quarantining, ventilator production, ICU capacity expansion, PPE stockpiling, or anything of that level of seriousness between January 1 and March 1? I'd like to know (a) so that I can take any such politician more seriously and (b) to better calibrate my sense of the adequacy of the US political system (since I live in the US). |
6e5beb34-efe4-4d93-a3f0-3a6ff1b69d54 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow social meetup: codename "Order of Infrared Viola"
Discussion article for the meetup : Moscow social meetup: codename "Order of Infrared Viola"
WHEN: 14 December 2014 02:00:00PM (+0300)
WHERE: Russia, Moscow, Strelbishensky pereulok, 10
Welcome to the experimental Social Chaotic meetup! It will be experimental on 2 ways:
1. It will be open: I will announce the meetup to all places where open meetups are usually announced by Moscow LW community.
2. There will be elements of Chaos, such as "secret quests" and "papers with wishes". Read further for details.
Main themes of meetup are party games and table games.
I have a collection of party and table games, and also you can bring your games with you. We will decide what games to play by participants' wishes.
Also there will be Training game / Team training game: funny game and one of symbols of the Order of Chaos.
Also there will be "rational games": Fallacymania, Zendo, "Quick estimation".
"Secret quests" is very chaotic game. Players take papers with secret quests in the beginning of meetup, and they don't show their quests to others. The goal is to complete your quest. Examples of quests and a form for your ideas for quests are here:
https://docs.google.com/forms/d/19gReYSTytJIu7a7gmUzN4Vvhb91VJ0nUm7hL974QqiU/viewform
"Papers with wishes": there will be a vase where you will be able to put a paper with a wish what you want from others to do with you. You will need to write your name on your paper. All participants can take others' papers in any time. Then they read the paper, and if there is something adequate and feasible, they do it :-)
Address of meetup:
Strelbishensky per., 10, ap.60, 3rd entrance, code 60B3112, 5th floor. Nearest metro station: Vystavochnaya. If you have questions, call me 8-905-527-30-82 or write me on e-mail alexander230r@gmail.com (it's better to call if you're searching a way before the meetup).
Discussion article for the meetup : Moscow social meetup: codename "Order of Infrared Viola" |
958cf354-e77b-4687-9ff4-2776001f6c36 | trentmkelly/LessWrong-43k | LessWrong | Is there a scientific method? Physics, Biology and Beyond
Among the general public and, frequently, in the educated media, one comes across naïve and uncritical praise for the “scientific method”. Often, accusation of violating the “method” is wielded to denigrate the viewpoint of a political opponent who supposedly offended against some prestigious, generally accepted norm of reasoning. I want to question the cogency of these arguments. My point is there is no agreed scientific method, as different sciences apply very different criteria in deciding what counts as valid explanation.
The paradigm approach for life scientists, for instance, begins by subjecting a phenomenon of interest to patient and thoroughgoing observation. They carefully describe key features, categorise functional and structural commonalities, then organise the material into a cladogram of some sort, after which they feel satisfied claiming they understand the phenomenon. I come across this approach again and again in Aristotle, who started his intellectual adventure as a zoologist.
For a physical scientist a biologist’s explanation is unconvincing. The physical sciences, with its strong emphasis on aetiology and relentless reductionism, raises questions not taken up in biology. Physicist are uncomfortable saying they know something until they can identify a small number of (ideally a single) exogenic factor/s giving rise to almost all the characteristics.
I need to emphasise that these are generalisable templates for explanation that are found outside of physics and biology. Thus, from the point of view of exogency, Jared Diamond’s explanation for the early ascendancy of western Eurasia as the result of geographical advantages (prevalence of domesticable animals and highly nutritious plants, etc.) is deeply satisfying to a physicist, since further pursuit for a cause moves the discussion outside the original domain of from human differences into geography. Compare this with a statement like: 'France’s preference for a strong centralised government i |
92a56a72-b1a7-4290-aec4-567ffebee332 | trentmkelly/LessWrong-43k | LessWrong | Seeing the Matrix, Switching Abstractions, and Missing Moods
Epistemic Status: Poetry, but also, True Story
For seven years, I worked in a supermarket bakery.
The bakery was quite a nice place to work. I got a good mix of physical exercise (everything I know about basketball I learned from tossing heavy boxes of bread up several feet such that they landed just perfectly on top of each other).
I learned skills, I decorated cakes. I had an excellent manager, who led by example, who was funny, who was stern when she needed to be but almost never needed to be because people just wanted to do the right thing for her.
One day, we hired a person I found really annoying, who I'll call Debbie.
Debbie talked a lot, and she had a really grating, whiny, high pitched voice. And at first I tried to engage with her cheerfully, then I tried engaging with her politely, and then I tried to avoid her because she just wouldn't stop talking no matter what about inane things that nonetheless were just complicated enough that I felt pressure to think about how to respond.
Debbie was probably a decent person who didn't deserve my ire. Nonetheless, my ire she had.
Avoiding Debbie wasn't really an option because the bakery wasn't that big. A few weeks of annoyance passed. And one day Debbie was telling some story about her kids or sister or something that was probably a reasonably fine story but I just couldn't stand it any more and —
— and —
...and then I literally felt my brain make a slight "czhzk" sound. And Debbie's voice just of faded into the background. I heard all the other sounds in the supermarket – the customers talking, the air conditioners humming, the beep of distant item-scanners, the sliding of the automating doors. And Debbie's voice, one mechanical physical process among many.
And it felt like Neo, at the end of the Matrix, where he can suddenly see the Code, and he can also see Agent Smith. And then a flashback to earlier in the movie, when Neo looks upon the raw code for the first time and can't make heads of tails of it |
4a608813-f01d-4760-ae0d-860088bc34b8 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | ACI #3: The Origin of Goals and Utility
*Goal and Utility* are central ideas in [the rational agent approach](https://www.lesswrong.com/tag/agent) of AI, in which the meaning of intelligence is to achieve goals or, more explicitly, to maximize expected utility.
What goal or utility function should an AI choose? This question is meaningless in the rational agents framework. It's like asking what program a computer should run, the answer is a universal computer can run any well-written program.
However, the rational agent is an idealization of real-world intelligence. The ACI model argues that either pursuing a goal or maximizing utility is an imprecise description of intelligence behaviors that try to follow the precedent. The rational agent model is a quasistatic approximation of the ACI model.
Following this statement, we are trying to derive goals and utility functions from the principles of ACI.
*(The previous version has many errors, so I rewrite this chapter)*
Goals = Futures that resemble the precedent
-------------------------------------------
The right thing for an ACI agent is to follow the precedent, while the right thing for a rational agent is to achieve goals. Since a goal is a desired future, we can speculate that the best goal-directed approximation to the ACI model is a desired future that follows the precedent.
Consider an agent *G* that keeps doing the right things. We call doing the right thing the precedent. It is reasonable to assume that if *G* continues to behave in the same way, it is likely to continue to do the right things in the environments it has experienced.
If *G* is goal-directed, its goal should be a future that resembles the precedent as closely as possible. If the precedent is a sequence made of observations and actions, the goals should be the best possible continuation of that sequence. This brings us to the conclusion:
> *Setting goals for an agent is the same process as predicting the sequence of the precedent.*
>
>
A formal description is given in the appendix at the end of the post.
A goal should have the following properties:
1. An agent can have multiple, possibly infinite goals, because there will be a goal at every future moment. Compromising between multiple goals would be difficult.
2. A goal may not always represent the right future even if it can be achieved. It has the highest probability of being the right future, but it can also turn to be out wrong.
3. When a goal is achieved, the agent may or may not receive notifications that if things are actually right. The information about right or wrong can be presented in any form, including real-time feedback and delayed notification. For example, video game players may not be notified whether they have won or lost until the end of a round . As a universal model of intelligence, ACI determines what is right without relying on any particular mechanism, be it natural selection or artificial control.
That's why an agent can't act directly with goals. It's more conventional to use expected utility to describe an agent's behavior.
Utility = The probability of doing the right things
---------------------------------------------------
People prefer thinking in goals, but working with utilities.
Since a goal should be assigned the highest expected utility among all possible worlds, and a goal is defined as a future world that has the highest probability of becoming a precedent/doing the right things, it is reasonable to define the expected utility in terms of the probability of becoming a precedent/doing the right things.
In other words, **the utility of a future world is its probability of being the continuation of the precedent sequence.**
It is easy to prove that ACI's definition of utility obeys the four axioms of VNM-rationality : *completeness, transitivity, continuity, and independence*. We can also define the total expected utility as a **value**.
FAQs
----
Q: OK, following the precedent might be right, but what if the agent lives in a carefree scenario, where doing everything is right?
A: If doing everything is right, the agent is more likely to follow simple policies than complex ones, so the precedent is most likely to be a simple sequence, such as continuing one action or just reflexes to the environment. On the other hand, if we can find rather complicated structures in the precedent, it is highly unlikely that the agent is in a carefree situation.
Q: With well-defined utility functions, should ACI maximize the expected utility like a rational agent or AIXI?
A: Not really. In relatively stable environments, rational agents can serve as acceptable approximations of ACI agents. However, they are likely to encounter the alignment problem when faced with unanticipated scenarios:
1. As soon as the precedent receives new data points, the utility function changes, making it unsuitable for straightforward optimization.
2. Up to this point, we have been discussing ideal ACI agents that have unlimited computing power and memory, and are able to achieve any possible future goal. However, real-world agents cannot perform Solomonoff Induction due to the inherent uncomputability of Solomonoff Induction. Only a constrained version of ACI, known as *ACItl*, can be implemented on practical computers. Once an ACItl agent receives an improvement in its performance level, it will change its approximation of the utility functions.
Appendix:
---------
### Define History, World, and Precedent
In the beginning we can have a formal definition of history, world, and doing the right things.
There is an agent G.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
interacts with an unknown environment in time cycles k=1,2,3,...t . In cycle k, xk is the perception (input) from the environment, and yk is the action (output) of the agent.
Define agent's interaction **History**h<t≡x1y1x2y2...xt−1yt−1 , while the possible **Worlds** with history h<t is w<n≡x1y1x2y2...xt−1yt−1...xn−1yn−1 . Worlds are stratified by histories.
Let H be the set of histories, and W be the set of worlds. For any h∈H , there is a subset Wh⊂W consisting of all worlds with history h ([Armstrong 2018](https://www.lesswrong.com/posts/JewWDfLoxgFtJhNct/utility-versus-reward-function-partial-equivalence)) .
Define **Judgment Function**as a function from a world or a history to 1 or 0 :
J:W∪H→{0,1}
The history of doing the right things should have J(h)=1 .
We can define aPrecedentas a history of doing the right things:
**Definition 1** (Precedent). *A precedent is a history*h∗ ,
∀h∗<k⊆h∗ J(h∗k)≡1
Any subsets of a precedent is also a precedent.
For an ACI agent, the precedent contains all the information we have about *what is right,* thus it set a standard for right things. The right future world that will become a precedent should meet this standard.
### Define Goals
A *goal* can be defined as a future world that has the highest likelihood of doing the right things or becoming a precedent.
**Definition 2** (Goal). *At time*m*, given a precedent*h∗<t*, the goal of time*n*(*n>m≥t*)is a world*w∗<n∈Wh<m⊆Wh∗<t*that*
∀w<n∈Wh<m P(J(w∗<n)=1|w∗<n)≥P(J(w<n)=1|w<n)
There is a simple and intuitive theorem about goals:
**Theorem 1**: *An agent's goal given a precedent*h∗*equals the most possible continuation of the precedent sequence.*
Proof is given at the end of the post. With this theorem, the goal calculation problem turns out to be a sequence prediction task. Following Hutter's [AIXI](https://www.lesswrong.com/tag/aixi), ACI uses [*Solomonoff Induction*](https://www.lesswrong.com/tag/solomonoff-induction) as an all purpose sequence prediction tool. Solomonoff Induction considers all possible hypotheses about a sequence, and continuously updates the estimate of the probability of each hypothesis.
### Define Utility Function, Values, and Reward
Utility function is defined as a function from worlds to real numbers:
U:W→R
**Definition 3** (Expected Utility). *The expected utility of any possible world*w<n∈Wh∗<t*is its probability of doing the right thing:*
Uh∗<t(w<n)≡P(J(w<n)=1|w<n)
In other words, the utility of a world is its probability of doing the right thing given a known precedent was doing the right thing.
We will calculate the utility function using Solomonoff Induction in the last part of this article.
We can also define the *total expected utility* as **value:**
**Definition 4** (Value) *Total expected utility or value for a policy*π*and history*h<n*and precedent*h∗<t⊆h<n*:*
V(h∗<t,π,h<n)=Eπh∗<t(h<n)=∫w∈Wh∗<tUh∗<t(w)P(w|h<n)
where a policy πfor an agent is a map from histories to a probability distribution over actions, π:H→ΔA .
And define **reward** function as the difference between two total expected utilities ([Armstrong 2018](https://www.lesswrong.com/posts/JewWDfLoxgFtJhNct/utility-versus-reward-function-partial-equivalence))
**Definition 5** (Reward) *Reward between two histories*h<m⊂h<n*for a policy*π*and precedent*h∗<t⊆h<n*is:*
R(h∗<t,π,h<n,h<m)=V(h∗<t,π,h<n)−V(h∗<t,π,h<m)
### Proof of Theorem 1
According to Solomonoff Induction, the probability that w is the future of the precedent sequence h∗ according to all hypotheses would be:
M(w<n=h∗<n|h∗<t)=M(h∗<n)/M(h∗<t)
Where M(h∗) is a precedent's prior distribution over all possible worlds when we take all the hypotheses into account:
M(x)≡∑μ∈MRQ−H(μ)μ(x)
where μ is the semi-measure which assigns probabilities to hypotheses x, and MR is the set of all recursive semi-measures, Q is the numbers of symbols in the sequences' alphabet, and H(μ) is the length of the shortest program that computes μ ([Legg 1996](http://www.vetta.org/documents/legg-1996-solomonoff-induction.pdf)).
We cannot directly use this equation to predict the future precedent, because for an agent there might be more than one possible right choices , in contrast to a sequence that has only one continuation.
Let's consider a sequence J+ , in which a variable j=J(h<k) is inserted to a history or world sequence every k steps. For example:
J+(h∗<t)≡x1y11x2y21...xt−1yt−11
J+(h<t)≡x1y1j1x2y2j2...xt−1yt−1jt−1
for w<n∈Wh∗<t*,*
J+(w<n)≡x1y11x2y21...xt−1yt−11xtytjt...xn−1yn−1jn−1
if jn−1=J(w<n)=1 (then all js from jt to jn−1 equal to 1), w<n would be a world of doing the right thing. Thus the problem of utility becomes the problem of sequence prediction, the **utility** of w<n is the probability of jn−1=1 :
Uh∗<t(w<n)=P(J+(w<n)∩J(w<n)=1)/P(J+(w<n))
=P(J(w<n)=1|J+(w<n))
Then we can try to prove *A goal given a precedent*h∗*equals the most possible continuation of the precedent sequence:*
Let w′<n be one of w<n∈Wh∗<tthat has the highest probability to be the continuation of the precedent sequence, which means:
∀w<n∈Wh∗<t P(w′<n|h∗<t)≥P(w<n|h∗<t)
and because w<n∈Wh∗<t*,*
P(w′<n)≥P(w<n)
And we know all the js in J+(w′<n) and J+(w<n) equal to 1, could be the output of a program of fixed length and has a fixed affect on the prior probability of a sequence, then:
P(J+(w<n)∩J(w<n)=1)=P(w<n)−C1
P(J+(w<n))=P(w<n)−C2
and C1>C2
Then we can have:
∀w<n∈Wh∗<t
P(J+(w′<n)∩J(w′<n)=1)/P(J+(w′<n))≥P(J+(w<n)∩J(w<n)=1)/P(J+(w<n))
which equals
Uh∗<t(w′<n)≥Uh∗<t(w<n) |
0685f263-c8eb-4006-abf5-ceb6bac0a04e | trentmkelly/LessWrong-43k | LessWrong | Critical review of Christiano's disagreements with Yudkowsky
This is a review of Paul Christiano's article "where I agree and disagree with Eliezer". Written for the LessWrong 2022 Review.
In the existential AI safety community, there is an ongoing debate between positions situated differently on some axis which doesn't have a common agreed-upon name, but where Christiano and Yudkowsky can be regarded as representatives of the two directions[1]. For the sake of this review, I will dub the camps gravitating to the different ends of this axis "Prosers" (after prosaic alignment) and "Poets"[2]. Christiano is a Proser, and so are most people in AI safety groups in the industry. Yudkowsky is a typical Poet [sort-of Poet, but there's an important departure from my characterization below], people in MIRI and the agent foundations community tend to also be such.
Prosers tend to be more optimistic, lend more credence to slow takeoff, and place more value on empirical research and solving problems by reproducing them in the lab and iterating on the design. Poets tend to be more pessimistic, lend more credence to fast takeoff, and place more value on theoretical research and solving problems on paper before they become observable in existing AI systems. Few people are absolute purists in those respects: almost nobody in the community believes that e.g. empirical research or solving problems on paper in advance is completely worthless.
In this article, Christiano lists his agreements and disagreements with Yudkowsky. The resulting list can serve as a reasonable starting point for understanding the differences of Proser and Poet positions. In this regard it is not perfect: the tone and many of the details are influenced by Christiano's reactions to Yudkowsky's personal idiosyncrasies and also by the specific content of Yudkwosky's article "AGI Ruin" to which Christiano is responding. Moreover, it is in places hard to follow because Christiano responds to Yudkowsky without restating Yudkowsky's position first. Nevertheless, it does tou |
5a1fea6f-7a55-4970-93df-9cfa31b77e46 | trentmkelly/LessWrong-43k | LessWrong | Can AI agents learn to be good?
Hi everyone!
My name is Ram Rachum and I'm working on AI Safety research. I want to elicit social behavior in RL agents and use it to achieve AI Safety goals such as alignment, interpretability and corrigibility.
I made a guest post on the Future of Life Institute's blog: https://futureoflife.org/ai-research/can-ai-agents-learn-to-be-good/
This isn't specifically about my research, as it's mostly geared towards the public so it's pretty basic. I do have a plug for my latest paper at the bottom. This is my first public writing on AI Safety, so I'd appreciate any comments or corrections.
I'm currently raising funding for my research. If you know of relevant funders, I'd appreciate a connection. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.