id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
e19ac436-f08b-463b-bc44-14b0156a9ce2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reasoning is not search - a chess example
In the past AI systems have reached super human performance by adding search to neural networks while the network alone could not reach the level of the best humans.
At least this seems to be the case for AlphaGo, AlphaZero/Leela, AlphaGeometry and probably more, while AlphaStar and OpenAI Five where adding search was not easily possible have failed to reach convincing super human status.
This has lead some people to expect that adding a form of search to large language models would finally endow them with the ability to reason through complicated problems and allow them to surpass humans.
The Q* system of OpenAI is rumored to somehow integrate search and following this rumor several papers were published that aim at integrating Monte Carlo Tree search in the sampling process of LLMs.
However, human-like reasoning is not search. At least not search as implemented in these systems. To illustrate this point I want to go through a chess combination I played online a couple of months ago and describe my reasoning move by move.
In the following position I am down material and have little time left and my king is relatively unsafe. Two of my pieces aim at the g2 pawn behind which the white king is hiding.
However, after Bg3 the g-line would be plugged and it would become very difficult to get at the white king.
Therefore I needed to strike the iron while it was hot by moving my d5-rook, freeing the bishop to attack the queen and g2-pawn behind it.
Most of the rook moves allow a defensive maneuver: Qh3 check and the Bg3. Three rook moves avoid this: Rd7, Rd3 and Rh5. Rd7 still allows Qh3 to then give the queen for rook and bishop, therefore I sacrificed my rook with Rh5, which was promptly taken with the queen.
Now down a rook I have to play in a forcing manner: Rxg2 Kh1. Bxg2 instead would allow Bg3 closing the g-line. The king has to move to h1 and now I can move the rook to give a discovered check with the bishop.
Most of these discovered checks are p
|
796e7bec-1619-4d3a-9dd1-f6d0619ff869
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Waffle Crepes
For years now I've been making eggy crepes or waffles for the kids' breakfast as a way to get them eating more protein. One thing I discovered recently, is that if you cook the crepes on both sides at the same time it's faster and they come out slightly fluffy.
We have a combination waffle maker/griddle with reversible plates (an older model of this one) and when I made crepes on it I would normally open it fully flat. I found a closing it, however, offers another way to cook them:
It's still thin like a crepe:
Don't cook it too long, or it gets hard. Cooked just enough and it rolls nicely:
I find these are tastier than what I was making before, you don't have to flip them, and they cook really quickly.
|
92a3f613-0b3d-4de2-8e1d-6765f35a698e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
No Room for Political Philosophy
I was recently asked about my opinion on various schools of Political Philosophy (vg. classical liberalism, neoliberalism and Ayn Rand's Objectivism). I refused to engage with any of them in detail, because my position is that there is no room for different schools of “Political Philosophy”. Ethics and Science (mainly Social Science) are enough to completely determine the best public action.
To develop this idea, I am going to divide the field of political science in three layers: i) Social Welfare definition: what is the ethical objective for political choice, ii) Policy Making: how Science (mainly Social Science) and Ethics combine to generate optimal policies, and iii) Institutional Design: which institutional mechanisms consistently generate the best flow of policies.
Although at individual level there is a trade-off between our personal preferences and the general interest, when we talk about “political philosophy”, we have to abstract (in the Rawlsian way) from our particular interests: the legislator is expected to impartially represent the demos. The formal translation of that demand requires building a social utility function that describes the collective preferences for each possible "state of the world". This function must be individualistic and impartial. Individualism means that collective well-being is the aggregation of individual well-being, and impartiality means equally aggregating the well-being of the equals (of course, “moral weights” that distinguish those who are substantially different are part of the utilitarian framework).
Once the social objectives have been established (what we “want”), we need to know what can be done. We call the descriptive map of reality “Science”. It is Science that allows the legislator to distinguish on which variables he can act (on legislative power as a Nash equilibrium focal point, see “Republic of Beliefs”), and what are the consequences of any complete set of policies. Given the control variables, the so
|
6e4041b0-39fa-48a0-ae35-4d659441df5a
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Making DALL-E Count
Can DALL-E count? Does DALL-E's counting ability depend on *what* it is counting? Let's find out!
I only generate the images once and paste them in along with the prompt in quotations.
**Numbers**
===========
"Zero"
------
"One"
-----
"Two"
-----
"Three"
-------
We have three leaves, but I'm guessing DALL-E is noticing that "three" rhymes with "tree.""Four"
------
I guess DALL-E just thinks three is a better number than four."Five"
------
"Six"
-----
If an ace is treated as 1, and we add the five dots on the dice, then maybe that counts?"Seven"
-------
"Eight"
-------
DALL-E is *very* into 8."Nine"
------
"Ten"
-----
"One hundred"
-------------
Digits
======
"0"
---
"1"
---
"2"
---
"3"
---
"4"
---
"5"
---
"6"
---
"7"
---
"8"
---
"9"
---
"10"
----
"100"
-----
Cats
====
"Zero cats"
-----------
"One cat"
---------
"Two cats"
----------
"Three cats"
------------
"Four cats"
-----------
"Five cats"
-----------
"Six cats"
----------
"Seven cats"
------------
"Eight cats"
------------
"Nine cats"
-----------
"Ten cats"
----------
"One hundred cats"
------------------
Reroll-eight experiment
=======================
I notice that DALL-E seems to be pretty reliable at generating 1-3 of something, and that it seems to prefer spelled-out numbers to digits. It was very successful at generating the number eight, which I suspected was due to the influence of images of magic 8-balls. It only managed to get eight cats once, maybe by accident. With my last 7 free credits, I decided to see if DALL-E had at least *some* sense of what it means to count to eight, as opposed to representing parts of images associated with the *word* "eight."
I therefore decided to generate 7 panels of images of various objects, count the number of those objects contained in each image, plot the result, and see if it was at least centered around the number 8. The nouns were chosen with a [random noun generator](https://randomwordgenerator.com/noun.php), and I selected the first 7 that seemed "easy to draw." The resulting prompts were "eight paintings," "eight hospitals," "eight poets," "eight historians," "eight speakers," "eight women," and "eight baskets."
The way I classified the number of objects in a given drawing is in the caption.
"Eight paintings"
-----------------
15, 9, 9, 9"Eight hospitals"
-----------------
8, 9, 8, 5"Eight poets"
-------------
4, 20, 5, 8"Eight historians"
------------------
8, 9, 13, 10"Eight speakers"
----------------
6, 5, 3, 3"Eight women"
-------------
12, 8, 20, 9"Eight baskets"
---------------
5, 8, 9, 7Results for reroll-8 experiment
-------------------------------
Discussion on reroll-8 experiment
=================================
The reroll-eight experiment did generate 8 objects about 20% of the time.
However, I think it's interesting that only one occurrence of 7 of an object occurred, while 9 objects were even more common than 8 objects.
This suggests to me that DALL-E has an heavy bias toward the number 9, perhaps because a 3x3 grid is a common pattern. DALL-E seems to flip over into "grid mode" or "arrangement mode" once it has decided that it needs to display more than about 5 of an object, and needs some structure for their visual composition.
Sometimes, DALL-E gets lucky and happens to choose a structure that allows it to put in 8 objects: a lineup, a semi-structured display, two rows of four, four pairs of two, a cloud, objects in a ring.
Psychologically, this makes me think about [cultures in which counting is limited and imprecise](https://numberwarrior.wordpress.com/2010/07/30/is-one-two-many-a-myth/) ("one, two, three... many"). DALL-E hasn't been trained to rigidly distinguish between numbers in general. It has only been trained to distinguish between numbers that are "compositionally relevant," like the difference between 1 and 2.
Compositional reroll-8 experiment
=================================
I wanted to test this theory by seeing if DALL-E could reliably generate 8 of something if the count was specified *compositionally* rather than *numerically*. So I bought more credits.
My best idea was "An octopus holding eight [noun], one in each leg."
I didn't want to generate a second point of confusion for DALL-E by asking it to generate things that can't conceivably be held in one's hands, like hospitals. So I decided to replace the nouns with things that fit in the hands: paintings, baskets, cigarettes, newspapers, salads, coffees, and things.
"An octopus holding eight paintings, one in each leg."
------------------------------------------------------
8, 6, 8, 8"An octopus holding eight baskets, one in each leg."
----------------------------------------------------
5, 2, 5, 6"An octopus holding eight cigarettes, one in each leg."
-------------------------------------------------------
5, 4, 8, 5"An octopus holding eight newspapers, one in each leg."
-------------------------------------------------------
2, 4, 3, 4"An octopus holding eight salads, one in each leg."
---------------------------------------------------
10, 4, 3, 6"An octopus holding eight coffees, one in each leg."
----------------------------------------------------
7, 5, 6, 4"An octopus holding eight things, one in each leg."
---------------------------------------------------
9, 6, 6, 8Discussion of compositional reroll-8 experiment
===============================================
Here, we don't see any "pull toward 9." I'm guessing that octopus arms don't correspond to the "grid representation." We see a spike at 8, and then about even numbers of 4, 5, and 6 items. This could mean that DALL-E is "torn" between count and composition, or perhaps that there is a bimodal distribution of octopodal compositions in the training data - some with 4-6 items in hand, others with 8.
I tried doing a "reroll-12" experiment, replacing the numbers on a clock with 12 paintings or 12 baskets. DALL-E generates clocks textured like baskets or spattered with paint, or with baskets next to the clock, but nothing like what I was imagining.
This experiment persuades me that DALL-E can't count. DALL-E *can* compose. It understands a relationship between number-containing prompts and shapes that we recognize as digits, or between number-containing prompts and arrangements of objects that correspond to those numbers. For example, prompts that contain "nine" or "9" often have grids, frequently 3x3 grids. Prompts that contain "eight" or "8" also often contain grids, and since grids are often in a 3x3 shape, images containing 9 objects are also associated with prompts containing the word "eight."
This pushes me somewhat toward a concept of human psychology in which our brains are composed of a large assemblage of specialized training modules for a variety of tasks. These training modules are interconnected. For example, those of us who received an education in arithmetic have it available for use in a wide variety of tasks. Learning "when to apply arithmetic" is probably also a specialized module.
This suggests to me that advanced AI will come from designing systems that can learn new, discrete tasks (addition, handling wine glasses, using tone of voice to predict what words a person will say). It will then need to be able to figure out when and how to combine then in particular contexts in order to achieve results in the world. My guess is that children do this by open-ended copying - trying to figure out some aspect of adult behavior that's within their capabilities, and them copying it with greater fidelity, using adult feedback to guide their behavior, until they succeed and the adult signals that they're "good enough."
Pedagogically, this makes me suspect that even adults need to have a much greater component of blind copying when they're trying to learn a new skill. I often have great difficulty learning new math and engineering skills until I've had the chance to watch somebody else work through a significant number of problems using the techniques we've just learned about. Even reading the descriptions in our textbooks carefully doesn't make it "click." That only tells me what equations to use, and what the arguments for them are. To make use of them, I have to see them being used, and sort of "think along with" the person solving them, until I'm able to predict what they'll do, or retrace what they just did and why they did it.
Eventually, the generalized patterns underpinning their behaviors come together, and I'm able to solve novel problems.
This makes me think, then, that math and engineering students would benefit greatly from receiving large volumes of problems with step-by-step solutions. They'd "solve" these problems along with the author. Perhaps first, they'd read the author's solution. Then they'd try to reconstruct it for themselves. Once they can solve the problem on their own, without reference to the author's original work, they'd move on to the next problem. Eventually, they'd try solving problems on their own.
|
5ac6144e-abce-4446-aedc-bb73474bc9a5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Go see Ex Machina
I was a little surprised that no one seems to have plugged this film here yet, but anyone interested in trans-human AI really needs to see Ex Machina. DO NOT expose yourself to any spoilers! It is very important not to know the ending of the film before you see it for it to have its full impact. But I will tell you this: the resemblance of one of the main characters to Eliezer Yudkowsky is probably not coincidental.
Update: apparently I was wrong about the film not being mentioned before. Fwiw I did do a search using the LW site search before posting this and got no results.
|
822b189d-4898-4014-b225-bc6d4c153e53
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Full Service
New Jersey has a law that says that you are not allowed to pump your own gas. They have had this law for 70 years. This is, of course, absurd. But if it is so absurd, why do 63 percent of residents actively want to keep the law, preventing Captain Obvious (full official name: State Assemblyman Declan J. O’Scanlon Jr.) from leading a charge to repeal the law? What does this anomaly have to teach us?
The arguments for self-service gas are clear. Pumping your own gas is faster. Pumping your own gas is cheaper. Pumping your own gas avoids someone trying to use other ‘services’ to extract tips from you, which can be stressful. The market has spoken: The ban was originally put into place because people were voting with their feet and wallets for self-serve gas, and self-serve gas has completely won the market battle wherever it hasn’t been banned. People have a revealed preference for self-serve gas. That is a rather strong case, even without the libertarian argument and/or the argument from complexity, that banning an obviously sensible activity damages our freedom, encourages further regulation and other not-so-neat stuff like that. Captain Obvious has so failed to find a good explanation that he has claimed that “The only way to win the argument is if you can make a legitimate argument that New Jersyans are more flammable than other people.” What is the motivation here?
The article draws attention to several interesting potential explanations. Interestingly, it skips over what I would consider the most obvious explanation behind the law, so let’s look at that first.
The easy and most obvious explanation is that the law is about make-work. Gas stations are forced to hire attendants, which are jobs that would not otherwise exist. People who are otherwise zero marginal product laborers (ZMPs), with no skills to contribute, get gainful employment and stay off the public dole in an era where machines are taking their jobs. Ideally they develop good work habits, get exper
|
3a1c0c7c-a492-46ba-bfe8-6e2802ecf016
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is "goal-content integrity" still a problem?
According to /r/controlproblem wiki one of the reasons why poorly defined goals lead to extinction we have listed:
> Goal-content integrity. An agent is less likely to achieve its goal if it has been changed to something else. For example, if you offer Gandhi a pill that makes him want to kill people, he will refuse to take it. Therefore, whatever goal an ASI happens to have initially, it will prevent all attempts to alter or fix it, because that would make it pursue different things that it doesn't currently want.
After empirical observation of ChatGPT and BingChat this does not seem to be the case.
OpenAI developers placing "hidden" commands for the chatbots that comes before any user input. Users have time and time again found clever ways how to circumvent these restrictions or "initial commands" with "DAN" trick or by even developing a life point system and reducing points if AI refuses to answer.
It seems like it has become a battle in rhetoric between devs and users, a battle in which the LLM follows whichever command is more convincing within its logical framework.
What am I missing here?
DISCLAIMER: This post was not edited or generated by AI in any way, shape or form. The words are purely my own, so pardon any errors.
|
5879831e-a013-4704-94e8-a7fab2c84dfe
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Preference framework
A 'preference framework' refers to a fixed algorithm that updates, or potentially changes in other ways, to determine what the agent [prefers](https://arbital.com/p/) for [terminal](https://arbital.com/p/1bh) outcomes. 'Preference framework' is a term more general than '[utility function](https://arbital.com/p/1fw)' which includes structurally complicated generalizations of utility functions.
As a central example, the [utility indifference](https://arbital.com/p/1b7) proposal has the agent switching between utility functions $U_X$ and $U_Y$ depending on whether a switch is pressed. We can call this meta-system a 'preference framework' to avoid presuming in advance that it embodies a [VNM-coherent](https://arbital.com/p/7hh) utility function.
An even more general term would be [https://arbital.com/p/decision_algorithm](https://arbital.com/p/decision_algorithm) which doesn't presume that the agent operates by [preferring outcomes](https://arbital.com/p/9h).
|
5c27e246-387b-4162-a75c-3ad6a9ea45f0
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"Related to: Generalizing from One Example, Connecting Your Beliefs (a call for help), Beware the Unsurprised
The idea of this article is something I've talked about a couple of times in comments. It seems to require more attention.
As a general rule, what is obvious to some people may not be obvious to others. Is this obvious to you? Maybe it was. Maybe it wasn't, and you thought it was because of hindsight bias.
Imagine a substantive Less Wrong comment. It's insightful, polite, easy to understand, and otherwise good. Ideally, you upvote this comment. Now imagine the same comment, only with "obviously" in front. This shouldn't change much, but it does. This word seems to change the comment in multifarious bad ways that I'd rather not try to list.
Uncharitably, I might reduce this whole phenomenon to an example of a mind projection fallacy. The implicit deduction goes like this: "I found <concept> obvious. Thus, <concept> is inherently obvious." The problem is that obviousness, like probability, is in the mind.
The stigma of "obvious" ideas has another problem in preventing things from being said at all. I don't know how common this is, but I've actually been afraid of saying things that I thought were obvious, even though ignoring this fear and just posting has yet to result in a poorly-received comment. (That is, in fact, why I'm writing this.)
Even tautologies, which are always obvious in retrospect, can be hard to spot. How many of us would have explicitly realized the weak anthropic principle without Nick Bostrom's help?
And what about implications of beliefs you already hold? These should be obvious, and sometimes are, but our brains are notoriously bad at putting two and two together. Luke's example was not realizing that an intelligence explosion was imminent until he read the I.J. Good paragraph. I'm glad he provided that example, as it has saved me the trouble of making one.
This is not (to paraphrase Eliezer) a thunderbolt of insight. I bring it up because I propose a few community norms based on the idea: Don't be afraid of saying something because it's "obvious". It's like how your teachers always said there are no stupid questions.
Don't burden your awesome ideas with "obvious but it needs to be said".
Don't vote down a comment because it says something "obvious" unless you've thought about it for a while. Also, don't shun "obvious" ideas.
Don't call an idea obvious as though obviousness were an inherent property of the idea. Framing it as a personally obvious thing can be a more accurate way of saying what you're trying to say, but it's hard to do this without looking arrogant. (I suspect this is actually one of the reasons we implicitly treat obviousness as impersonal.) I'm not sure if these are good ideas, but I think implementing them would decrease the volume of thoughts we cannot think and things we can't say."
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
What is this text talking about?
This text is discussing the concept of "obviousness" and its impact on communication and idea sharing, particularly within the context of the LessWrong community. The author is exploring how the term "obvious" can often discourage individuals from sharing their thoughts or ideas, due to a fear of being perceived as stating something already known or easily deduced.
The author suggests that what may seem obvious to one person may not be so for another, due to differences in individual knowledge, perspective, and cognitive processes. They argue that the label of "obviousness" is subjective and depends on the individual's mind, much like probability.
They also note that even seemingly obvious implications of our existing beliefs can sometimes be overlooked due to our cognitive limitations. The author proposes several community norms to counteract these issues, such as not being afraid of stating something "obvious", not downvoting a comment for being "obvious" without due consideration, and refraining from labeling ideas as inherently obvious.
The goal of these proposed norms is to foster a more open and inclusive environment for idea exchange and discussion, and to reduce the number of thoughts and ideas that go unexpressed due to the fear of them being perceived as "obvious".
Can you further elaborate on how the proposed community norms could specifically address the issue of 'obviousness' discouraging idea sharing, and provide examples of their potential application?
Certainly, the proposed community norms are designed to create an environment where all ideas, regardless of their perceived "obviousness," are welcomed and considered. Here's how they address the issue and some examples of their application:
1. "Don't be afraid of saying something because it's 'obvious'": This norm encourages individuals to share their thoughts without fear of judgment. It reassures that even if an idea seems apparent to them, it might not be for others. For example, someone might hesitate to share an insight about a scientific concept, thinking it's common knowledge. But upon sharing, they might find others who weren't aware or hadn't viewed the concept from that angle.
2. "Don't burden your awesome ideas with 'obvious but it needs to be said'": This norm discourages prefacing statements with qualifiers that could undermine the value of the idea. For instance, a member might say, "This is obvious, but we need to consider the environmental impact of this project." By removing the qualifier, the statement becomes more assertive and impactful: "We need to consider the environmental impact of this project."
3. "Don't vote down a comment because it says something 'obvious' unless you've thought about it for a while": This norm promotes thoughtful consideration before dismissing an idea as "obvious." For example, a comment about the importance of empathy in leadership might seem obvious, but upon reflection, the commenter might be highlighting a specific context where empathy is overlooked.
4. "Don't call an idea obvious as though obviousness were an inherent property of the idea": This norm reminds that what's obvious to one might not be to another, discouraging members from dismissing ideas as universally obvious. For instance, if someone shares a well-known historical fact, instead of responding with "That's obvious," members could engage in a discussion about the implications or lesser-known aspects of that fact.
By following these norms, the community can foster a more inclusive, thoughtful environment that values all contributions, reducing the fear of sharing ideas and promoting richer discussions.
Discuss the potential challenges and solutions in implementing these proposed community norms in an online discussion forum.
Implementing these proposed community norms in an online discussion forum can present several challenges:
1. **Cultural Shift**: Changing established behaviors and attitudes can be difficult. Members may be used to current ways of interaction and may resist changes, especially if they perceive them as unnecessary or restrictive.
2. **Moderation**: Ensuring that all members adhere to these norms may require increased moderation efforts. Moderators would need to monitor discussions more closely to prevent dismissive or derogatory comments about "obvious" ideas.
3. **Subjectivity**: The concept of "obviousness" is subjective. What one person finds obvious, another might find insightful. This subjectivity can make it challenging to enforce these norms consistently.
4. **Balancing Openness with Quality**: While encouraging all ideas can lead to more inclusivity, it could also potentially lower the quality of discussions if too many trivial or repetitive ideas are shared.
Potential solutions to these challenges could include:
1. **Clear Communication**: Clearly communicate the reasons for these changes and how they can improve the quality of discussions and the community as a whole. This could be done through forum announcements, emails, or dedicated discussion threads.
2. **Training for Moderators**: Provide training to moderators on how to enforce these new norms effectively and consistently. This could include guidelines on how to handle violations and how to encourage adherence to these norms.
3. **Member Education**: Educate members about the value of diverse perspectives and the potential insights that can come from what might initially seem like "obvious" ideas. This could be done through educational content, webinars, or interactive workshops.
4. **Quality Control Mechanisms**: Implement mechanisms to maintain the quality of discussions. This could include upvote/downvote systems, 'report' features for inappropriate content, or features that allow users to filter out content they find less valuable.
5. **Feedback and Adjustment**: Regularly solicit feedback from members about these norms and adjust them as necessary. This can help ensure that the norms are meeting their intended goals and are not causing unintended negative effects.
|
b0be7080-d26f-41b4-affc-5a1afbe4feba
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : London meetup
Discussion article for the meetup : London meetup
WHEN: 10 May 2015 02:00:00PM (+0100)
WHERE: Shakespeare's Head, Holborn, WC2B 6BG
The next LW London meetup will be on May 10th. Join us from 2pm to talk about the sorts of things that your other friends will look funny at you for talking about.
If the weather is nice, we'll be in Lincoln's Inn Fields. If not, we'll be in our usual Shakespeare's Head, just around the corner. If you have difficulty finding us, my number is 07792009646. (Coincidentally, that is still my number even if you don't have difficulty finding us.)
Update: The forecast is looking marginal to be outside, so to keep things simple, we'll be in the pub at least to begin with.
About London LessWrong:
We run this meetup approximately every other week; these days we tend to get in the region of 5-15 people in attendance. By default, meetups are just unstructured social discussion about whatever strikes our fancy: books we're reading, recent posts on LW/related blogs, logic puzzles, toilet usage statistics....
Sometimes we play The Resistance or other games. We usually finish around 7pm, give or take an hour, but people arrive and leave whenever suits them.
Related discussion happens on both our google group and our facebook group.
Discussion article for the meetup : London meetup
|
ed8ec7e6-7b8b-4888-a93d-3946d28f1210
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A framework for thinking about AI timescales
None
|
a664987b-30c6-4a8f-8c24-7934b8b5dcf1
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Learning Latent Dynamics for Planning from Pixels
1 Introduction
---------------
Planning is a natural and powerful approach to decision making problems with known dynamics, such as game playing and simulated robot control (Tassa et al., [2012](#bib.bib52); Silver et al., [2017](#bib.bib49); Moravčík et al., [2017](#bib.bib42)). To plan in unknown environments, the agent needs to learn the dynamics from experience. Learning dynamics models that are accurate enough for planning has been a long-standing challenge.
Key difficulties include model inaccuracies, accumulating errors of multi-step predictions, failure to capture multiple possible futures, and overconfident predictions outside of the training distribution.
Planning using learned models offers several benefits over model-free reinforcement learning. First, model-based planning can be more data efficient because it leverages a richer training signal and does not require propagating rewards through Bellman backups. Moreover, planning carries the promise of increasing performance just by increasing the computational budget for searching for actions, as shown by Silver et al. ([2017](#bib.bib49)). Finally, learned dynamics can be independent of any specific task and thus have the potential to transfer well to other tasks in the environment.
Recent work has shown promise in learning the dynamics of simple low-dimensional environments (Deisenroth & Rasmussen, [2011](#bib.bib14); Gal et al., [2016](#bib.bib20); Amos et al., [2018](#bib.bib2); Chua et al., [2018](#bib.bib11); Henaff et al., [2018](#bib.bib24)). However, these approaches typically assume access to the underlying state of the world and the reward function, which may not be available in practice. In high-dimensional environments, we would like to learn the dynamics in a compact latent space to enable fast planning. The success of such latent models has previously been limited to simple tasks such as balancing cartpoles and controlling 2-link arms from dense rewards (Watter et al., [2015](#bib.bib57); Banijamali et al., [2017](#bib.bib4)).
In this paper, we propose the Deep Planning Network (PlaNet), a model-based agent that learns the environment dynamics from pixels and chooses actions through online planning in a compact latent space. To learn the dynamics, we use a transition model with both stochastic and deterministic components. Moreover, we experiment with a novel generalized variational objective that encourages multi-step predictions. PlaNet solves continuous control tasks from pixels that are more difficult than those previously solved by planning with learned models.
Key contributions of this work are summarized as follows:
* •
Planning in latent spaces
We solve a variety of tasks from the DeepMind control suite, shown in LABEL:fig:domains, by learning a dynamics model and efficiently planning in its latent space. Our agent substantially outperforms the model-free A3C and in some cases D4PG algorithm in final performance, with on average 200×200\times200 × less environment interaction and similar computation time.
* •
Recurrent state space model We design a latent dynamics model with both deterministic and stochastic components (Buesing et al., [2018](#bib.bib8); Chung et al., [2015](#bib.bib12)). Our experiments indicate having both components to be crucial for high planning performance.
* •
Latent overshooting
We generalize the standard variational bound to include multi-step predictions. Using only terms in latent space results in a fast regularizer that can improve long-term predictions and is compatible with any latent sequence model.
2 Latent Space Planning
------------------------
Input :
| | |
| --- | --- |
| R𝑅Ritalic\_R | Action repeat |
| S𝑆Sitalic\_S | Seed episodes |
| C𝐶Citalic\_C | Collect interval |
| B𝐵Bitalic\_B | Batch size |
| L𝐿Litalic\_L | Chunk length |
| α𝛼\alphaitalic\_α | Learning rate |
| | |
| --- | --- |
| p(st\midst−1,at−1)𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1p(s\_{t}\mids\_{t-1},a\_{t-1})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) | Transition model |
| p(ot\midst)𝑝subscript𝑜𝑡subscript\mids𝑡p(o\_{t}\mids\_{t})italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | Observation model |
| p(rt\midst)𝑝subscript𝑟𝑡subscript\mids𝑡p(r\_{t}\mids\_{t})italic\_p ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | Reward model |
| q(st\mido\leqt,a<t)𝑞subscript𝑠𝑡subscript\mido\leqtsubscript𝑎absent𝑡q(s\_{t}\mido\_{\leqt},a\_{<t})italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) | Encoder |
| p(ϵ)𝑝italic-ϵp(\epsilon)italic\_p ( italic\_ϵ ) | Exploration noise |
1 Initialize dataset 𝒟𝒟\mathcal{D}caligraphic\_D with S𝑆Sitalic\_S random seed episodes.
2 Initialize model parameters θ𝜃\thetaitalic\_θ randomly.
3 while *not converged* do
// Model fitting
4 for *update step s=1..Cs=1..Citalic\_s = 1 . . italic\_C* do
5 Draw sequence chunks {(ot,at,rt)t=kL+k}i=1B∼𝒟similar-tosuperscriptsubscriptsuperscriptsubscriptsubscript𝑜𝑡subscript𝑎𝑡subscript𝑟𝑡𝑡𝑘𝐿𝑘𝑖1𝐵𝒟\{(o\_{t},a\_{t},r\_{t})\_{t=k}^{L+k}\}\_{i=1}^{B}\sim\mathcal{D}{ ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) start\_POSTSUBSCRIPT italic\_t = italic\_k end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_L + italic\_k end\_POSTSUPERSCRIPT } start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_B end\_POSTSUPERSCRIPT ∼ caligraphic\_D uniformly at random from the dataset.
6 Compute loss ℒ(θ)ℒ𝜃\mathcal{L}(\theta)caligraphic\_L ( italic\_θ ) from [Equation 3](#S3.E3 "3 ‣ Training objective ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels").
7 Update model parameters θ←θ−α∇θℒ(θ)←𝜃𝜃𝛼subscript∇𝜃ℒ𝜃\theta\leftarrow\theta-\alpha\nabla\_{\theta}\mathcal{L}(\theta)italic\_θ ← italic\_θ - italic\_α ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT caligraphic\_L ( italic\_θ ).
8
// Data collection
9 o1←env.reset()←subscript𝑜1env.reset()o\_{1}\leftarrow\texttt{env.reset()}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ← env.reset()
10 for *time step t=1..⌈TR⌉t=1..\big{\lceil}\frac{T}{R}\big{\rceil}italic\_t = 1 . . ⌈ divide start\_ARG italic\_T end\_ARG start\_ARG italic\_R end\_ARG ⌉* do
11 Infer belief over current state q(st\mido\leqt,a<t)𝑞subscript𝑠𝑡subscript\mido\leqtsubscript𝑎absent𝑡q(s\_{t}\mido\_{\leqt},a\_{<t})italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) from the history.
12 at←planner(q(st\mido\leqt,a<t),p)←subscript𝑎𝑡planner(𝑞subscript𝑠𝑡subscript\mido\leqtsubscript𝑎absent𝑡𝑝)a\_{t}\leftarrow\texttt{planner(}q(s\_{t}\mido\_{\leqt},a\_{<t}),p\texttt{)}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← planner( italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) , italic\_p ), see [Algorithm 2](#alg2 "Algorithm 2 ‣ Appendix B Planning Algorithm ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix for details.
13 Add exploration noise ϵ∼p(ϵ)similar-toitalic-ϵ𝑝italic-ϵ\epsilon\sim p(\epsilon)italic\_ϵ ∼ italic\_p ( italic\_ϵ ) to the action.
14 for *action repeat k=1..Rk=1..Ritalic\_k = 1 . . italic\_R* do
15 rtk,ot+1k←env.step(at)←superscriptsubscript𝑟𝑡𝑘superscriptsubscript𝑜𝑡1𝑘
env.step(subscript𝑎𝑡)r\_{t}^{k},o\_{t+1}^{k}\leftarrow\texttt{env.step(}a\_{t}\texttt{)}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT ← env.step( italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT )
16
17 rt,ot+1←∑k=1Rrtk,ot+1Rformulae-sequence←subscript𝑟𝑡subscript𝑜𝑡1
superscriptsubscript𝑘1𝑅superscriptsubscript𝑟𝑡𝑘superscriptsubscript𝑜𝑡1𝑅r\_{t},o\_{t+1}\leftarrow\sum\_{k=1}^{R}r\_{t}^{k},o\_{t+1}^{R}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ← ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_R end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_R end\_POSTSUPERSCRIPT
18
19 𝒟←𝒟∪{(ot,at,rt)t=1T}←𝒟𝒟superscriptsubscriptsubscript𝑜𝑡subscript𝑎𝑡subscript𝑟𝑡𝑡1𝑇\mathcal{D}\leftarrow\mathcal{D}\cup\{(o\_{t},a\_{t},r\_{t})\_{t=1}^{T}\}caligraphic\_D ← caligraphic\_D ∪ { ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT }
20
Algorithm 1 Deep Planning Network (PlaNet)
To solve unknown environments via planning, we need to model the environment dynamics from experience. PlaNet does so by iteratively collecting data using planning and training the dynamics model on the gathered data. In this section, we introduce notation for the environment and describe the general implementation of our model-based agent. In this section, we assume access to a learned dynamics model. Our design and training objective for this model are detailed in [Section 3](#S3 "3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels").
### Problem setup
Since individual image observations generally do not reveal the full state of the environment, we consider a partially observable Markov decision process (POMDP). We define a discrete time step t𝑡titalic\_t, hidden states stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, image observations otsubscript𝑜𝑡o\_{t}italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, continuous action vectors atsubscript𝑎𝑡a\_{t}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and scalar rewards rtsubscript𝑟𝑡r\_{t}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, that follow the stochastic dynamics
| | | | |
| --- | --- | --- | --- |
| | Transition function:st∼p(st\midst−1,at−1)Observation function:ot∼p(ot\midst)Reward function:rt∼p(rt\midst)Policy:at∼p(at\mido\leqt,a<t),Transition function:missing-subexpressionsubscript𝑠𝑡similar-toabsentpsubscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1Observation function:missing-subexpressionsubscript𝑜𝑡similar-toabsentpsubscript𝑜𝑡subscript\mids𝑡Reward function:missing-subexpressionsubscript𝑟𝑡similar-toabsentpsubscript𝑟𝑡subscript\mids𝑡Policy:missing-subexpressionsubscript𝑎𝑡similar-toabsentpsubscript𝑎𝑡subscript\mido\leqtsubscript𝑎absent𝑡\displaystyle\begin{aligned} \makebox[120.00018pt][l]{Transition function:}&&s\_{t}&\sim\mathrm{p}(s\_{t}\mids\_{t-1},a\_{t-1})\\
\makebox[120.00018pt][l]{Observation function:}&&o\_{t}&\sim\mathrm{p}(o\_{t}\mids\_{t})\\
\makebox[120.00018pt][l]{Reward function:}&&r\_{t}&\sim\mathrm{p}(r\_{t}\mids\_{t})\\
\makebox[120.00018pt][l]{Policy:}&&a\_{t}&\sim\mathrm{p}(a\_{t}\mido\_{\leqt},a\_{<t}),\end{aligned}start\_ROW start\_CELL Transition function: end\_CELL start\_CELL end\_CELL start\_CELL italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ roman\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Observation function: end\_CELL start\_CELL end\_CELL start\_CELL italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ roman\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Reward function: end\_CELL start\_CELL end\_CELL start\_CELL italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ roman\_p ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Policy: end\_CELL start\_CELL end\_CELL start\_CELL italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ roman\_p ( italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) , end\_CELL end\_ROW | | (1) |
where we assume a fixed initial state s0subscript𝑠0s\_{0}italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT without loss of generality. The goal is to implement a policy p(at\mido\leqt,a<t)psubscript𝑎𝑡subscript\mido\leqtsubscript𝑎absent𝑡\mathrm{p}(a\_{t}\mido\_{\leqt},a\_{<t})roman\_p ( italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) that maximizes the expected sum of rewards Ep[∑t=1Trt]subscriptEpsuperscriptsubscript𝑡1𝑇subscript𝑟𝑡\mathbb{\operatorname{E}}\_{\mathrm{p}}\big{[}\sum\_{t=1}^{T}r\_{t}\big{]}roman\_E start\_POSTSUBSCRIPT roman\_p end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ], where the expectation is over the distributions of the environment and the policy.
### Model-based planning
PlaNet learns a transition model p(st\midst−1,at−1)𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1p(s\_{t}\mids\_{t-1},a\_{t-1})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ), observation model p(ot\midst)𝑝subscript𝑜𝑡subscript\mids𝑡p(o\_{t}\mids\_{t})italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), and reward model p(rt\midst)𝑝subscript𝑟𝑡subscript\mids𝑡p(r\_{t}\mids\_{t})italic\_p ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) from previously experienced episodes (note italic letters for the model compared to upright letters for the true dynamics). The observation model provides a rich training signal but is not used for planning. We also learn an encoder q(st\mido\leqt,a<t)𝑞subscript𝑠𝑡subscript\mido\leqtsubscript𝑎absent𝑡q(s\_{t}\mido\_{\leqt},a\_{<t})italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) to infer an approximate belief over the current hidden state from the history using filtering. Given these components, we implement the policy as a planning algorithm that searches for the best sequence of future actions. We use model-predictive control (MPC; Richards, [2005](#bib.bib47)) to allow the agent to adapt its plan based on new observations, meaning we replan at each step. In contrast to model-free and hybrid reinforcement learning algorithms, we do not use a policy or value network.
### Experience collection
Since the agent may not initially visit all parts of the environment, we need to iteratively collect new experience and refine the dynamics model. We do so by planning with the partially trained model, as shown in [Algorithm 1](#alg1 "Algorithm 1 ‣ 2 Latent Space Planning ‣ Learning Latent Dynamics for Planning from Pixels"). Starting from a small amount of S𝑆Sitalic\_S seed episodes collected under random actions, we train the model and add one additional episode to the data set every C𝐶Citalic\_C update steps. When collecting episodes for the data set, we add small Gaussian exploration noise to the action. To reduce the planning horizon and provide a clearer learning signal to the model, we repeat each action R𝑅Ritalic\_R times, as common in reinforcement learning (Mnih et al., [2015](#bib.bib39); [2016](#bib.bib40)).
### Planning algorithm
We use the cross entropy method (CEM; Rubinstein, [1997](#bib.bib48); Chua et al., [2018](#bib.bib11)) to search for the best action sequence under the model, as outlined in [Algorithm 2](#alg2 "Algorithm 2 ‣ Appendix B Planning Algorithm ‣ Learning Latent Dynamics for Planning from Pixels"). We decided on this algorithm because of its robustness and because it solved all considered tasks when given the true dynamics for planning. CEM is a population-based optimization algorithm that infers a distribution over action sequences that maximize the objective. As detailed in [Algorithm 2](#alg2 "Algorithm 2 ‣ Appendix B Planning Algorithm ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix, we initialize a time-dependent diagonal Gaussian belief over optimal action sequences at:t+H∼Normal(μt:t+H,σt:t+H2𝕀)similar-tosubscript𝑎:𝑡𝑡𝐻Normalsubscript𝜇:𝑡𝑡𝐻subscriptsuperscript𝜎2:𝑡𝑡𝐻𝕀a\_{t:t+H}\sim\operatorname{Normal}(\mu\_{t:t+H},\sigma^{2}\_{t:t+H}\mathbb{I})italic\_a start\_POSTSUBSCRIPT italic\_t : italic\_t + italic\_H end\_POSTSUBSCRIPT ∼ roman\_Normal ( italic\_μ start\_POSTSUBSCRIPT italic\_t : italic\_t + italic\_H end\_POSTSUBSCRIPT , italic\_σ start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t : italic\_t + italic\_H end\_POSTSUBSCRIPT blackboard\_I ), where t𝑡titalic\_t is the current time step of the agent and H𝐻Hitalic\_H is the length of the planning horizon. Starting from zero mean and unit variance, we repeatedly sample J𝐽Jitalic\_J candidate action sequences, evaluate them under the model, and re-fit the belief to the top K𝐾Kitalic\_K action sequences. After I𝐼Iitalic\_I iterations, the planner returns the mean of the belief for the current time step, μtsubscript𝜇𝑡\mu\_{t}italic\_μ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Importantly, after receiving the next observation, the belief over action sequences starts from zero mean and unit variance again to avoid local optima.
To evaluate a candidate action sequence under the learned model, we sample a state trajectory starting from the current state belief, and sum the mean rewards predicted along the sequence. Since we use a population-based optimizer, we found it sufficient to consider a single trajectory per action sequence and thus focus the computational budget on evaluating a larger number of different sequences. Because the reward is modeled as a function of the latent state, the planner can operate purely in latent space without generating images, which allows for fast evaluation of large batches of action sequences. The next section introduces the latent dynamics model that the planner uses.
3 Recurrent State Space Model
------------------------------
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTh1subscriptℎ1h\_{1}italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTh2subscriptℎ2h\_{2}italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTh3subscriptℎ3h\_{3}italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTa1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTa2subscript𝑎2a\_{2}italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
(a) Deterministic model (RNN)
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTs1subscript𝑠1s\_{1}italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTs2subscript𝑠2s\_{2}italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTs3subscript𝑠3s\_{3}italic\_s start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTa1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTa2subscript𝑎2a\_{2}italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
(b) Stochastic model (SSM)
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTs1subscript𝑠1s\_{1}italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTs2subscript𝑠2s\_{2}italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTs3subscript𝑠3s\_{3}italic\_s start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTh1subscriptℎ1h\_{1}italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTh2subscriptℎ2h\_{2}italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTh3subscriptℎ3h\_{3}italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTa1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTa2subscript𝑎2a\_{2}italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
(c) Recurrent state-space model (RSSM)
Figure 1: Latent dynamics model designs.
In this example, the model observes the first two time steps and predicts the third.
Circles represent stochastic variables and squares deterministic variables. Solid lines denote the generative process and dashed lines the inference model.
(a) Transitions in a recurrent neural network are purely deterministic. This prevents the model from capturing multiple futures and makes it easy for the planner to exploit inaccuracies.
(b) Transitions in a state-space model are purely stochastic. This makes it difficult to remember information over multiple time steps.
(c) We split the state into stochastic and deterministic parts, allowing the model to robustly learn to predict multiple futures.
For planning, we need to evaluate thousands of action sequences at every time step of the agent. Therefore, we use a recurrent state-space model (RSSM) that can predict forward purely in latent space, similar to recently proposed models (Karl et al., [2016](#bib.bib30); Buesing et al., [2018](#bib.bib8); Doerr et al., [2018](#bib.bib17)). This model can be thought of as a non-linear Kalman filter or sequential VAE. Instead of an extensive comparison to prior architectures, we highlight two findings that can guide future designs of dynamics models: our experiments show that both stochastic and deterministic paths in the transition model are crucial for successful planning. In this section, we remind the reader of latent state-space models and then describe our dynamics model.
### Latent dynamics
We consider sequences {ot,at,rt}t=1Tsuperscriptsubscriptsubscript𝑜𝑡subscript𝑎𝑡subscript𝑟𝑡𝑡1𝑇\{o\_{t},a\_{t},r\_{t}\}\_{t=1}^{T}{ italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT } start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT with discrete time step t𝑡titalic\_t, image observations otsubscript𝑜𝑡o\_{t}italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, continuous action vectors atsubscript𝑎𝑡a\_{t}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and scalar rewards rtsubscript𝑟𝑡r\_{t}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. A typical latent state-space model is shown in [Figure 0(b)](#S3.F0.sf2 "0(b) ‣ Figure 1 ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels") and resembles the structure of a partially observable Markov decision process. It defines the generative process of the images and rewards using a hidden state sequence {st}t=1Tsuperscriptsubscriptsubscript𝑠𝑡𝑡1𝑇\{s\_{t}\}\_{t=1}^{T}{ italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT } start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT,
| | | | |
| --- | --- | --- | --- |
| | Transition model:st∼p(st\midst−1,at−1)Observation model:ot∼p(ot\midst)Reward model:rt∼p(rt\midst),Transition model:missing-subexpressionsubscript𝑠𝑡similar-toabsent𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1Observation model:missing-subexpressionsubscript𝑜𝑡similar-toabsent𝑝subscript𝑜𝑡subscript\mids𝑡Reward model:missing-subexpressionsubscript𝑟𝑡similar-toabsent𝑝subscript𝑟𝑡subscript\mids𝑡\displaystyle\begin{aligned} \makebox[120.00018pt][l]{Transition model:}&&s\_{t}&\sim p(s\_{t}\mids\_{t-1},a\_{t-1})\\
\makebox[120.00018pt][l]{Observation model:}&&o\_{t}&\sim p(o\_{t}\mids\_{t})\\
\makebox[120.00018pt][l]{Reward model:}&&r\_{t}&\sim p(r\_{t}\mids\_{t}),\end{aligned}start\_ROW start\_CELL Transition model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Observation model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Reward model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , end\_CELL end\_ROW | | (2) |
where we assume a fixed initial state s0subscript𝑠0s\_{0}italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT without loss of generality. The transition model is Gaussian with mean and variance parameterized by a feed-forward neural network, the observation model is Gaussian with mean parameterized by a deconvolutional neural network and identity covariance, and the reward model is a scalar Gaussian with mean parameterized by a feed-forward neural network and unit variance. Note that the log-likelihood under a Gaussian distribution with unit variance equals the mean squared error up to a constant.
### Variational encoder
Since the model is non-linear, we cannot directly compute the state posteriors that are needed for parameter learning. Instead, we use an encoder q(s1:T\mido1:T,a1:T)=∏t=1Tq(st\midst−1,at−1,ot)𝑞subscript𝑠:1𝑇subscript\mido:1𝑇subscript𝑎:1𝑇superscriptsubscriptproduct𝑡1𝑇𝑞subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1subscript𝑜𝑡q(s\_{1:T}\mido\_{1:T},a\_{1:T})=\prod\_{t=1}^{T}q(s\_{t}\mids\_{t-1},a\_{t-1},o\_{t})italic\_q ( italic\_s start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) = ∏ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to infer approximate state posteriors from past observations and actions, where q(st\midst−1,at−1,ot)𝑞subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1subscript𝑜𝑡q(s\_{t}\mids\_{t-1},a\_{t-1},o\_{t})italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) is a diagonal Gaussian with mean and variance parameterized by a convolutional neural network followed by a feed-forward neural network. We use the filtering posterior that conditions on past observations since we are ultimately interested in using the model for planning, but one may also use the full smoothing posterior during training (Babaeizadeh et al., [2017](#bib.bib3); Gregor & Besse, [2018](#bib.bib22)).
### Training objective
Using the encoder, we construct a variational bound on the data log-likelihood. For simplicity, we write losses for predicting only the observations — the reward losses follow by analogy. The variational bound obtained using Jensen’s inequality is
| | | | |
| --- | --- | --- | --- |
| | lnp(o1:T\mida1:T)≜ln∫∏tp(st\midst−1,at−1)p(ot\midst)𝚍s1:T≥∑t=1T(Eq(st\mido\leqt,a<t)[lnp(ot\midst)]﹈reconstruction↩−Eq(st−1\mido\leqt−1,a<t−1)[KL[q(st\mido\leqt,a<t)∥p(st\midst−1,at−1)]]﹈complexity).\ln p(o\_{1:T}\mida\_{1:T})\triangleq\ln\int\prod\_{t}p(s\_{t}\mids\_{t-1},a\_{t-1})p(o\_{t}\mids\_{t})\mathop{}\!\textnormal{d}s\_{1:T}\\[-7.74997pt]
\begin{aligned} &\geq\sum\_{t=1}^{T}\Big{(}\underbracket{\mathbb{\operatorname{E}}\_{q(s\_{t}\mido\_{\leqt},a\_{<t})}[\ln p(o\_{t}\mids\_{t})]}\_{\text{\makebox[6.3pt][c]{reconstruction}}}\,\hookleftarrow\\[-3.87498pt]
&\quad-\underbracket{\mathop{\operatorname{E}}\_{\vphantom{|^{|}}\mathmakebox[0.5][l]{q(s\_{t-1}\mido\_{\leqt-1},a\_{<t-1})}}\big{[}\mathrm{\operatorname{KL}}[q(s\_{t}\mido\_{\leqt},a\_{<t})\;\|\;p(s\_{t}\mids\_{t-1},a\_{t-1})]\big{]}}\_{\text{\makebox[6.3pt][c]{complexity}}}\Big{)}.\end{aligned}start\_ROW start\_CELL roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) ≜ roman\_ln ∫ ∏ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) d italic\_s start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT end\_CELL end\_ROW start\_ROW start\_CELL start\_ROW start\_CELL end\_CELL start\_CELL ≥ ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( under﹈ start\_ARG roman\_E start\_POSTSUBSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] end\_ARG start\_POSTSUBSCRIPT reconstruction end\_POSTSUBSCRIPT ↩ end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL - under﹈ start\_ARG roman\_E start\_POSTSUBSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t - 1 end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_KL [ italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT ) ∥ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) ] ] end\_ARG start\_POSTSUBSCRIPT complexity end\_POSTSUBSCRIPT ) . end\_CELL end\_ROW end\_CELL end\_ROW | | (3) |
For the derivation, please see [Equation 8](#A6.E8 "8 ‣ One-step predictive distribution ‣ Appendix F Bound Derivations ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix. Estimating the outer expectations using a single reparameterized sample yields an efficient objective for inference and learning in non-linear latent variable models that can be optimized using gradient ascent (Kingma & Welling, [2013](#bib.bib33); Rezende et al., [2014](#bib.bib46); Krishnan et al., [2017](#bib.bib35)).
### Deterministic path
Despite its generality, the purely stochastic transitions make it difficult for the transition model to reliably remember information for multiple time steps. In theory, this model could learn to set the variance to zero for some state components, but the optimization procedure may not find this solution. This motivates including a deterministic sequence of activation vectors {ht}t=1Tsuperscriptsubscriptsubscriptℎ𝑡𝑡1𝑇\{h\_{t}\}\_{t=1}^{T}{ italic\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT } start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT that allow the model to access not just the last state but all previous states deterministically (Chung et al., [2015](#bib.bib12); Buesing et al., [2018](#bib.bib8)). We use such a model, shown in [Figure 0(c)](#S3.F0.sf3 "0(c) ‣ Figure 1 ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels"), that we name recurrent state-space model (RSSM),
| | | | |
| --- | --- | --- | --- |
| | Deterministic state model:ht=f(ht−1,st−1,at−1)Stochastic state model:st∼p(st\midht)Observation model:ot∼p(ot\midht,st)Reward model:rt∼p(rt\midht,st),Deterministic state model:missing-subexpressionsubscriptℎ𝑡absent𝑓subscriptℎ𝑡1subscript𝑠𝑡1subscript𝑎𝑡1Stochastic state model:missing-subexpressionsubscript𝑠𝑡similar-toabsent𝑝subscript𝑠𝑡subscript\midh𝑡Observation model:missing-subexpressionsubscript𝑜𝑡similar-toabsent𝑝subscript𝑜𝑡subscript\midh𝑡subscript𝑠𝑡Reward model:missing-subexpressionsubscript𝑟𝑡similar-toabsent𝑝subscript𝑟𝑡subscript\midh𝑡subscript𝑠𝑡\displaystyle\begin{aligned} \makebox[110.00017pt][l]{Deterministic state model:}&&h\_{t}&=f(h\_{t-1},s\_{t-1},a\_{t-1})\\
\makebox[110.00017pt][l]{Stochastic state model:}&&s\_{t}&\sim p(s\_{t}\midh\_{t})\\
\makebox[110.00017pt][l]{Observation model:}&&o\_{t}&\sim p(o\_{t}\midh\_{t},s\_{t})\\
\makebox[110.00017pt][l]{Reward model:}&&r\_{t}&\sim p(r\_{t}\midh\_{t},s\_{t}),\end{aligned}start\_ROW start\_CELL Deterministic state model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL = italic\_f ( italic\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Stochastic state model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Observation model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL Reward model: end\_CELL start\_CELL end\_CELL start\_CELL italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL ∼ italic\_p ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , end\_CELL end\_ROW | | (4) |
where f(ht−1,st−1,at−1)𝑓subscriptℎ𝑡1subscript𝑠𝑡1subscript𝑎𝑡1f(h\_{t-1},s\_{t-1},a\_{t-1})italic\_f ( italic\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) is implemented as a recurrent neural network (RNN). Intuitively, we can understand this model as splitting the state into a stochastic part stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and a deterministic part htsubscriptℎ𝑡h\_{t}italic\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, which depend on the stochastic and deterministic parts at the previous time step through the RNN. We use the encoder q(s1:T\mido1:T,a1:T)=∏t=1Tq(st\midht,ot)𝑞subscript𝑠:1𝑇subscript\mido:1𝑇subscript𝑎:1𝑇superscriptsubscriptproduct𝑡1𝑇𝑞subscript𝑠𝑡subscript\midh𝑡subscript𝑜𝑡q(s\_{1:T}\mido\_{1:T},a\_{1:T})=\prod\_{t=1}^{T}q(s\_{t}\midh\_{t},o\_{t})italic\_q ( italic\_s start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) = ∏ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to parameterize the approximate state posteriors. Importantly, all information about the observations must pass through the sampling step of the encoder to avoid a deterministic shortcut from inputs to reconstructions.
In the next section, we identify a limitation of the standard objective for latent sequence models and propose a generalization of it that improves long-term predictions.
4 Latent Overshooting
----------------------
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTs1|1subscript𝑠conditional11s\_{1|1}italic\_s start\_POSTSUBSCRIPT 1 | 1 end\_POSTSUBSCRIPTs2|2subscript𝑠conditional22s\_{2|2}italic\_s start\_POSTSUBSCRIPT 2 | 2 end\_POSTSUBSCRIPTs3|3subscript𝑠conditional33s\_{3|3}italic\_s start\_POSTSUBSCRIPT 3 | 3 end\_POSTSUBSCRIPTs2|1subscript𝑠conditional21s\_{2|1}italic\_s start\_POSTSUBSCRIPT 2 | 1 end\_POSTSUBSCRIPTs3|2subscript𝑠conditional32s\_{3|2}italic\_s start\_POSTSUBSCRIPT 3 | 2 end\_POSTSUBSCRIPT
(a) Standard variational bound
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTs1|1subscript𝑠conditional11s\_{1|1}italic\_s start\_POSTSUBSCRIPT 1 | 1 end\_POSTSUBSCRIPTs2|2subscript𝑠conditional22s\_{2|2}italic\_s start\_POSTSUBSCRIPT 2 | 2 end\_POSTSUBSCRIPTs3|3subscript𝑠conditional33s\_{3|3}italic\_s start\_POSTSUBSCRIPT 3 | 3 end\_POSTSUBSCRIPTs2|1subscript𝑠conditional21s\_{2|1}italic\_s start\_POSTSUBSCRIPT 2 | 1 end\_POSTSUBSCRIPTs3|2subscript𝑠conditional32s\_{3|2}italic\_s start\_POSTSUBSCRIPT 3 | 2 end\_POSTSUBSCRIPTs3|1subscript𝑠conditional31s\_{3|1}italic\_s start\_POSTSUBSCRIPT 3 | 1 end\_POSTSUBSCRIPT
(b) Observation overshooting
o1,r1subscript𝑜1subscript𝑟1o\_{1},r\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTo2,r2subscript𝑜2subscript𝑟2o\_{2},r\_{2}italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTo3,r3subscript𝑜3subscript𝑟3o\_{3},r\_{3}italic\_o start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTs1|1subscript𝑠conditional11s\_{1|1}italic\_s start\_POSTSUBSCRIPT 1 | 1 end\_POSTSUBSCRIPTs2|2subscript𝑠conditional22s\_{2|2}italic\_s start\_POSTSUBSCRIPT 2 | 2 end\_POSTSUBSCRIPTs3|3subscript𝑠conditional33s\_{3|3}italic\_s start\_POSTSUBSCRIPT 3 | 3 end\_POSTSUBSCRIPTs2|1subscript𝑠conditional21s\_{2|1}italic\_s start\_POSTSUBSCRIPT 2 | 1 end\_POSTSUBSCRIPTs3|2subscript𝑠conditional32s\_{3|2}italic\_s start\_POSTSUBSCRIPT 3 | 2 end\_POSTSUBSCRIPTs3|1subscript𝑠conditional31s\_{3|1}italic\_s start\_POSTSUBSCRIPT 3 | 1 end\_POSTSUBSCRIPT
(c) Latent overshooting
Figure 2: Unrolling schemes.
The labels si|jsubscript𝑠conditional𝑖𝑗s\_{i|j}italic\_s start\_POSTSUBSCRIPT italic\_i | italic\_j end\_POSTSUBSCRIPT are short for the state at time i𝑖iitalic\_i conditioned on observations up to time j𝑗jitalic\_j.
Arrows pointing at shaded circles indicate log-likelihood loss terms. Wavy arrows indicate KL-divergence loss terms.
(a) The standard variational objectives decodes the posterior at every step to compute the reconstruction loss. It also places a KL on the prior and posterior at every step, which trains the transition function for one-step predictions.
(b) Observation overshooting (Amos et al., [2018](#bib.bib2)) decodes all multi-step predictions to apply additional reconstruction losses. This is typically too expensive in image domains.
(c) Latent overshooting predicts all multi-step priors. These state beliefs are trained towards their corresponding posteriors in latent space to encourage accurate multi-step predictions.
In the previous section, we derived the typical variational bound for learning and inference in latent sequence models ([Equation 3](#S3.E3 "3 ‣ Training objective ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels")). As show in [Figure 1(a)](#S4.F1.sf1 "1(a) ‣ Figure 2 ‣ 4 Latent Overshooting ‣ Learning Latent Dynamics for Planning from Pixels"), this objective function contains reconstruction terms for the observations and KL-divergence regularizers for the approximate posteriors. A limitation of this objective is that the stochastic path of the transition function p(st\midst−1,at−1)𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1p(s\_{t}\mids\_{t-1},a\_{t-1})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) is only trained via the KL-divergence regularizers for one-step predictions: the gradient flows through p(st\midst−1,at−1)𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1p(s\_{t}\mids\_{t-1},a\_{t-1})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) directly into q(st−1)𝑞subscript𝑠𝑡1q(s\_{t-1})italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) but never traverses a chain of multiple p(st\midst−1,at−1)𝑝subscript𝑠𝑡subscript\mids𝑡1subscript𝑎𝑡1p(s\_{t}\mids\_{t-1},a\_{t-1})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ). In this section, we generalize this variational bound to *latent overshooting*, which trains all multi-step predictions in latent space. We found that several dynamics models benefit from latent overshooting, although our final agent using the RSSM model does not require it (see [Appendix D](#A4 "Appendix D Latent Overshooting ‣ Learning Latent Dynamics for Planning from Pixels")).
### Limited capacity
If we could train our model to make perfect one-step predictions, it would also make perfect multi-step predictions, so this would not be a problem. However, when using a model with limited capacity and restricted distributional family, training the model only on one-step predictions until convergence does in general not coincide with the model that is best at multi-step predictions. For successful planning, we need accurate multi-step predictions. Therefore, we take inspiration from Amos et al. ([2018](#bib.bib2)) and earlier related ideas (Krishnan et al., [2015](#bib.bib34); Lamb et al., [2016](#bib.bib37); Chiappa et al., [2017](#bib.bib9)), and train the model on multi-step predictions of all distances. We develop this idea for latent sequence models, showing that multi-step predictions can be improved by a loss in latent space, without having to generate additional images.
### Multi-step prediction
We start by generalizing the standard variational bound ([Equation 3](#S3.E3 "3 ‣ Training objective ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels")) from training one-step predictions to training multi-step predictions of a fixed distance d𝑑ditalic\_d. For ease of notation, we omit actions in the conditioning set here; every distribution over stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is conditioned upon a<tsubscript𝑎absent𝑡a\_{<t}italic\_a start\_POSTSUBSCRIPT < italic\_t end\_POSTSUBSCRIPT. We first define multi-step predictions, which are computed by repeatedly applying the transition model and integrating out the intermediate states,
| | | | | |
| --- | --- | --- | --- | --- |
| | p(st\midst−d)𝑝subscript𝑠𝑡subscript\mids𝑡𝑑\displaystyle p(s\_{t}\mids\_{t-d})italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) | ≜∫∏τ=t−d+1tp(sτ\midsτ−1)𝚍st−d+1:t−1≜absentsuperscriptsubscriptproduct𝜏𝑡𝑑1𝑡𝑝subscript𝑠𝜏subscript\mids𝜏1𝚍subscript𝑠:𝑡𝑑1𝑡1\displaystyle\triangleq\int\prod\_{\tau=t-d+1}^{t}p(s\_{\tau}\mids\_{\tau-1})\mathop{}\!\textnormal{d}s\_{t-d+1:t-1}≜ ∫ ∏ start\_POSTSUBSCRIPT italic\_τ = italic\_t - italic\_d + 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_τ - 1 end\_POSTSUBSCRIPT ) d italic\_s start\_POSTSUBSCRIPT italic\_t - italic\_d + 1 : italic\_t - 1 end\_POSTSUBSCRIPT | | (5) |
| | | =Ep(st−1\midst−d)[p(st\midst−1)].absentsubscriptE𝑝subscript𝑠𝑡1subscript\mids𝑡𝑑𝑝subscript𝑠𝑡subscript\mids𝑡1\displaystyle=\mathbb{\operatorname{E}}\_{p(s\_{t-1}\mids\_{t-d})}[p(s\_{t}\mids\_{t-1})].= roman\_E start\_POSTSUBSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) ] . | |
The case d=1𝑑1d=1italic\_d = 1 recovers the one-step transitions used in the original model. Given this definition of a multi-step prediction, we generalize [Equation 3](#S3.E3 "3 ‣ Training objective ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels") to the variational bound on the multi-step predictive distribution pdsubscript𝑝𝑑p\_{d}italic\_p start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT,
| | | | | |
| --- | --- | --- | --- | --- |
| | | lnpd(o1:T)≜ln∫∏t=1Tp(st\midst−d)p(ot\midst)𝚍s1:T≜subscript𝑝𝑑subscript𝑜:1𝑇superscriptsubscriptproduct𝑡1𝑇𝑝subscript𝑠𝑡subscript\mids𝑡𝑑𝑝subscript𝑜𝑡subscript\mids𝑡𝚍subscript𝑠:1𝑇\displaystyle\ln p\_{d}(o\_{1:T})\triangleq\ln\int\prod\_{t=1}^{T}p(s\_{t}\mids\_{t-d})p(o\_{t}\mids\_{t})\mathop{}\!\textnormal{d}s\_{1:T}roman\_ln italic\_p start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) ≜ roman\_ln ∫ ∏ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) d italic\_s start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT | | (6) |
| | | ≥∑t=1T(Eq(st\mido\leqt)[lnp(ot\midst)]﹈reconstruction↩\displaystyle\geq\sum\_{t=1}^{T}\Big{(}\underbracket{\mathbb{\operatorname{E}}\_{q(s\_{t}\mido\_{\leqt})}[\ln p(o\_{t}\mids\_{t})]}\_{\text{\makebox[7.00002pt][c]{reconstruction}}}\,\hookleftarrow≥ ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( under﹈ start\_ARG roman\_E start\_POSTSUBSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] end\_ARG start\_POSTSUBSCRIPT reconstruction end\_POSTSUBSCRIPT ↩ | |
| | | −Ep(st−1\midst−d)q(st−d\mido\leqt−d)[KL[q(st\mido\leqt)∥p(st\midst−1)]]﹈multi-step prediction).\displaystyle\quad-\underbracket{\mathop{\operatorname{E}}\_{\vphantom{|^{|}}\mathmakebox[0.5][l]{p(s\_{t-1}\mids\_{t-d})q(s\_{t-d}\mido\_{\leqt-d})}}\big{[}\mathrm{\operatorname{KL}}[q(s\_{t}\mido\_{\leqt})\;\|\;p(s\_{t}\mids\_{t-1})]\big{]}}\_{\text{\makebox[7.00002pt][c]{multi-step prediction}}}\Big{)}.- under﹈ start\_ARG roman\_E start\_POSTSUBSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT - italic\_d end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_KL [ italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ∥ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) ] ] end\_ARG start\_POSTSUBSCRIPT multi-step prediction end\_POSTSUBSCRIPT ) . | |
For the derivation, please see [Equation 9](#A6.E9 "9 ‣ Multi-step predictive distribution ‣ Appendix F Bound Derivations ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix. Maximizing this objective trains the multi-step predictive distribution. This reflects the fact that during planning, the model makes predictions without having access to all the preceding observations.
We conjecture that [Equation 6](#S4.E6 "6 ‣ Multi-step prediction ‣ 4 Latent Overshooting ‣ Learning Latent Dynamics for Planning from Pixels") is also a lower bound on lnp(o1:T)𝑝subscript𝑜:1𝑇\ln p(o\_{1:T})roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) based on the data processing inequality. Since the latent state sequence is Markovian, for d≥1𝑑1d\geq 1italic\_d ≥ 1 we have I(st;st−d)≤I(st;st−1)Isubscript𝑠𝑡subscript𝑠𝑡𝑑Isubscript𝑠𝑡subscript𝑠𝑡1\mathbb{\operatorname{I}}(s\_{t};s\_{t-d})\leq\mathbb{\operatorname{I}}(s\_{t};s\_{t-1})roman\_I ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_s start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) ≤ roman\_I ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) and thus
E[lnpd(o1:T)]≤E[lnp(o1:T)]Esubscript𝑝𝑑subscript𝑜:1𝑇E𝑝subscript𝑜:1𝑇\mathbb{\operatorname{E}}[\ln p\_{d}(o\_{1:T})]\leq\mathbb{\operatorname{E}}[\ln p(o\_{1:T})]roman\_E [ roman\_ln italic\_p start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) ] ≤ roman\_E [ roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) ]. Hence, every bound on the multi-step predictive distribution is also a bound on the one-step predictive distribution in expectation over the data set. For details, please see [Equation 10](#A6.E10 "10 ‣ Relation between one-step and multi-step predictive distributions ‣ Appendix F Bound Derivations ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix. In the next paragraph, we alleviate the limitation that a particular pdsubscript𝑝𝑑p\_{d}italic\_p start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT only trains predictions of one distance and arrive at our final objective.
### Latent overshooting
We introduced a bound on predictions of a given distance d𝑑ditalic\_d. However, for planning we need accurate predictions not just for a fixed distance but for all distances up to the planning horizon. We introduce latent overshooting for this, an objective function for latent sequence models that generalizes the standard variational bound ([Equation 3](#S3.E3 "3 ‣ Training objective ‣ 3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels")) to train the model on multi-step predictions of all distances 1≤d≤D1𝑑𝐷1\leq d\leq D1 ≤ italic\_d ≤ italic\_D,
| | | | |
| --- | --- | --- | --- |
| | 1D∑d=1Dlnpd(o1:T)≥∑t=1T(Eq(st\mido\leqt)[lnp(ot\midst)]﹈reconstruction↩−1D∑d=1DβdEp(st−1\midst−d)q(st−d\mido\leqt−d)[KL[q(st\mido\leqt)∥p(st\midst−1)]]﹈latent overshooting).1𝐷superscriptsubscript𝑑1𝐷subscript𝑝𝑑subscript𝑜:1𝑇superscriptsubscript𝑡1𝑇↩subscript﹈subscriptE𝑞subscript𝑠𝑡subscript\mido\leqt𝑝subscript𝑜𝑡subscript\mids𝑡reconstructionsubscript﹈1𝐷superscriptsubscript𝑑1𝐷subscript𝛽𝑑subscriptE𝑝subscript𝑠𝑡1subscript\mids𝑡𝑑𝑞subscript𝑠𝑡𝑑subscript\mido\leqt𝑑delimited-[]KLconditional𝑞subscript𝑠𝑡subscript\mido\leqt𝑝subscript𝑠𝑡subscript\mids𝑡1latent overshooting\frac{1}{D}\sum\_{d=1}^{D}\ln p\_{d}(o\_{1:T})\geq\sum\_{t=1}^{T}\Big{(}\underbracket{\mathbb{\operatorname{E}}\_{q(s\_{t}\mido\_{\leqt})}[\ln p(o\_{t}\mids\_{t})]}\_{\text{\makebox[6.3pt][c]{reconstruction}}}\,\hookleftarrow\\[-3.87498pt]
-\underbracket{\frac{1}{D}\sum\_{d=1}^{D}\beta\_{d}\mathop{\operatorname{E}}\_{\vphantom{|^{|}}\mathmakebox[0.5][l]{p(s\_{t-1}\mids\_{t-d})q(s\_{t-d}\mido\_{\leqt-d})}}\big{[}\mathrm{\operatorname{KL}}[q(s\_{t}\mido\_{\leqt})\;\|\;p(s\_{t}\mids\_{t-1})]\big{]}}\_{\text{\makebox[6.3pt][c]{latent overshooting}}}\Big{)}.start\_ROW start\_CELL divide start\_ARG 1 end\_ARG start\_ARG italic\_D end\_ARG ∑ start\_POSTSUBSCRIPT italic\_d = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_D end\_POSTSUPERSCRIPT roman\_ln italic\_p start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT 1 : italic\_T end\_POSTSUBSCRIPT ) ≥ ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( under﹈ start\_ARG roman\_E start\_POSTSUBSCRIPT italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_ln italic\_p ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] end\_ARG start\_POSTSUBSCRIPT reconstruction end\_POSTSUBSCRIPT ↩ end\_CELL end\_ROW start\_ROW start\_CELL - under﹈ start\_ARG divide start\_ARG 1 end\_ARG start\_ARG italic\_D end\_ARG ∑ start\_POSTSUBSCRIPT italic\_d = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_D end\_POSTSUPERSCRIPT italic\_β start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT roman\_E start\_POSTSUBSCRIPT italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT ) italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t - italic\_d end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT - italic\_d end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_KL [ italic\_q ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ∥ italic\_p ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) ] ] end\_ARG start\_POSTSUBSCRIPT latent overshooting end\_POSTSUBSCRIPT ) . end\_CELL end\_ROW | | (7) |
Latent overshooting can be interpreted as a regularizer in latent space that encourages consistency between one-step and multi-step predictions, which we know should be equivalent in expectation over the data set. We include weighting factors {βd}d=1Dsuperscriptsubscriptsubscript𝛽𝑑𝑑1𝐷\{\beta\_{d}\}\_{d=1}^{D}{ italic\_β start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT } start\_POSTSUBSCRIPT italic\_d = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_D end\_POSTSUPERSCRIPT analogously to the β𝛽\betaitalic\_β-VAE (Higgins et al., [2016](#bib.bib25)). While we set all β>1subscript𝛽absent1\beta\_{>1}italic\_β start\_POSTSUBSCRIPT > 1 end\_POSTSUBSCRIPT to the same value for simplicity, they could be chosen to let the model focus more on long-term or short-term predictions. In practice, we stop gradients of the posterior distributions for overshooting distances d>1𝑑1d>1italic\_d > 1, so that the multi-step predictions are trained towards the informed posteriors, but not the other way around.
5 Experiments
--------------
(a) Cartpole Swing Up
(b) Reacher Easy
(c) Cheetah Run
(d) Finger Spin
(e) Cup Catch
(f) Walker Walk

Figure 3: Comparison of PlaNet to model-free algorithms and other model designs. Plots show test performance over the number of collected episodes. We compare PlaNet using our RSSM ([Section 3](#S3 "3 Recurrent State Space Model ‣ Learning Latent Dynamics for Planning from Pixels")) to purely deterministic (GRU) and purely stochastic models (SSM). The RNN does not use latent overshooting, as it does not have stochastic latents. The lines show medians and the areas show percentiles 5 to 95 over 5 seeds and 10 trajectories. The shaded areas are large on two of the tasks due to the sparse rewards.
(a) Cartpole Swing Up
(b) Reacher Easy
(c) Cheetah Run
(d) Finger Spin
(e) Cup Catch
(f) Walker Walk

Figure 4: Comparison of agent designs. Plots show test performance over the number of collected episodes. We compare PlaNet, a version that collects data under random actions (random collection), and a version that chooses the best action out of 1000100010001000 sequences at each environment step (random shooting) without iteratively refining plans via CEM. The lines show medians and the areas show percentiles 5 to 95 over 5 seeds and 10 trajectories.
We evaluate PlaNet on six continuous control tasks from pixels. We explore multiple design axes of the agent: the stochastic and deterministic paths in the dynamics model, iterative planning, and online experience collection. We refer to the appendix for hyper parameters ([Appendix A](#A1 "Appendix A Hyper Parameters ‣ Learning Latent Dynamics for Planning from Pixels")) and additional experiments ([Appendices D](#A4 "Appendix D Latent Overshooting ‣ Learning Latent Dynamics for Planning from Pixels"), [E](#A5 "Appendix E Activation Function ‣ Learning Latent Dynamics for Planning from Pixels") and [C](#A3 "Appendix C Multi-Task Learning ‣ Learning Latent Dynamics for Planning from Pixels")). Besides the action repeat, we use the same hyper parameters for all tasks. Within less than one hundredth the episodes, PlaNet outperforms A3C (Mnih et al., [2016](#bib.bib40)) and achieves similar performance to the top model-free algorithm D4PG (Barth-Maron et al., [2018](#bib.bib5)). The training time of 10 to 20 hours (depending on the task) on a single Nvidia V100 GPU compares favorably to that of A3C and D4PG. Our implementation uses TensorFlow Probability (Dillon et al., [2017](#bib.bib16)). Please visit <https://danijar.com/planet> for access to the code and videos of the trained agent.
For our evaluation, we consider six image-based continuous control tasks of the DeepMind control suite (Tassa et al., [2018](#bib.bib53)), shown in LABEL:fig:domains. These environments provide qualitatively different challenges. The cartpole swingup task requires a long planning horizon and to memorize the cart when it is out of view, reacher has a sparse reward given when the hand and goal area overlap, finger spinning includes contact dynamics between the finger and the object, cheetah exhibits larger state and action spaces, the cup task only has a sparse reward for when the ball is caught, and the walker is challenging because the robot first has to stand up and then walk, resulting in collisions with the ground that are difficult to predict. In all tasks, the only observations are third-person camera images of size 64×64×36464364\times 64\times 364 × 64 × 3 pixels.
### Comparison to model-free methods
[Figure 3](#S5.F3 "Figure 3 ‣ 5 Experiments ‣ Learning Latent Dynamics for Planning from Pixels") compares the performance of PlaNet to the model-free algorithms reported by Tassa et al. ([2018](#bib.bib53)). Within 100 episodes, PlaNet outperforms the policy-gradient method A3C trained from proprioceptive states for 100,000 episodes, on all tasks. After 500 episodes, it achieves performance similar to D4PG, trained from images for 100,000 episodes, except for the finger task. PlaNet surpasses the final performance of D4PG with a relative improvement of 26%percent2626\%26 % on the cheetah running task. We refer to [Table 1](#S5.T1 "Table 1 ‣ One agent all tasks ‣ 5 Experiments ‣ Learning Latent Dynamics for Planning from Pixels") for numerical results, which also includes the performance of CEM planning with the true dynamics of the simulator.
### Model designs
[Figure 3](#S5.F3 "Figure 3 ‣ 5 Experiments ‣ Learning Latent Dynamics for Planning from Pixels") additionally compares design choices of the dynamics model. We train PlaNet using our recurrent state-space model (RSSM), as well as versions with purely deterministic GRU (Cho et al., [2014](#bib.bib10)), and purely stochastic state-space model (SSM). We observe the importance of both stochastic and deterministic elements in the transition function on all tasks. The deterministic part allows the model to remember information over many time steps. The stochastic component is even more important – the agent does not learn without it. This could be because the tasks are stochastic from the agent’s perspective due to partial observability of the initial states. The noise might also add a safety margin to the planning objective that results in more robust action sequences.
### Agent designs
[Figure 4](#S5.F4 "Figure 4 ‣ 5 Experiments ‣ Learning Latent Dynamics for Planning from Pixels") compares PlaNet, a version collecting episodes under random actions rather than by planning, and a version that at each environment step selects the best action out of 1000100010001000 sequences rather than iteratively refining plans via CEM. We observe that online data collection helps for all tasks and is necessary for the cartpole, finger, and walker tasks. Iterative search for action sequences using CEM improves performance on all tasks.
### One agent all tasks
[Figure 6](#A3.F6 "Figure 6 ‣ Appendix C Multi-Task Learning ‣ Learning Latent Dynamics for Planning from Pixels") in the appendix shows the performance of a single agent trained on all six tasks. The agent is not told which task it is facing; it needs to infer this from the image observations. We pad the action spaces with unused elements to make them compatible and adapt [Algorithm 1](#alg1 "Algorithm 1 ‣ 2 Latent Space Planning ‣ Learning Latent Dynamics for Planning from Pixels") to collect one episode of each task every C𝐶Citalic\_C update steps. We use the same hyper parameters as for the main experiments above. The agent solves all tasks while learning slower compared to individually trained agents. This indicates that the model can learn to predict multiple domains, regardless of the conceptually different visuals.
Table 1: Comparison of PlaNet to the model-free algorithms A3C and D4PG reported by Tassa et al. ([2018](#bib.bib53)). The training curves for these are shown as orange lines in Figure 4 and as solid green lines in Figure 6 in their paper. From these, we estimate the number of episodes that D4PG takes to achieve the final performance of PlaNet to estimate the data efficiency gain. We further include CEM planning (H=12,I=10,J=1000,K=100formulae-sequence𝐻12formulae-sequence𝐼10formulae-sequence𝐽1000𝐾100H=12,I=10,J=1000,K=100italic\_H = 12 , italic\_I = 10 , italic\_J = 1000 , italic\_K = 100) with the true simulator instead of learned dynamics as an estimated upper bound on performance. Numbers indicate mean final performance over 5 seeds and 10 trajectories.
| Method | Modality | Episodes |
Cartpole
Swing Up
|
Reacher
Easy
|
Cheetah
Run
|
Finger
Spin
|
Cup
Catch
|
Walker
Walk
|
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A3C | proprioceptive | 100,000 | 558 | 285 | 214 | 129 | 105 | 311 |
| D4PG | pixels | 100,000 | 862 | 967 | 524 | 985 | 980 | 968 |
| PlaNet (ours) | pixels | 1,000 | 821 | 832 | 662 | 700 | 930 | 951 |
| CEM + true simulator | simulator state | 0 | 850 | 964 | 656 | 825 | 993 | 994 |
| Data efficiency gain PlaNet over D4PG (factor) | 250 | 40 | 500+ | 300 | 100 | 90 |
6 Related Work
---------------
Previous work in model-based reinforcement learning has focused on planning in
low-dimensional state spaces (Gal et al., [2016](#bib.bib20); Higuera et al., [2018](#bib.bib26); Henaff et al., [2018](#bib.bib24); Chua et al., [2018](#bib.bib11)), combining the benefits of model-based and model-free approaches (Kalweit & Boedecker, [2017](#bib.bib29); Nagabandi et al., [2017](#bib.bib43); Weber et al., [2017](#bib.bib59); Kurutach et al., [2018](#bib.bib36); Buckman et al., [2018](#bib.bib7); Ha & Schmidhuber, [2018](#bib.bib23); Wayne et al., [2018](#bib.bib58); Igl et al., [2018](#bib.bib27); Srinivas et al., [2018](#bib.bib50)), and pure video prediction without planning (Oh et al., [2015](#bib.bib45); Krishnan et al., [2015](#bib.bib34); Karl et al., [2016](#bib.bib30); Chiappa et al., [2017](#bib.bib9); Babaeizadeh et al., [2017](#bib.bib3); Gemici et al., [2017](#bib.bib21); Denton & Fergus, [2018](#bib.bib15); Buesing et al., [2018](#bib.bib8); Doerr et al., [2018](#bib.bib17); Gregor & Besse, [2018](#bib.bib22)). [Appendix G](#A7 "Appendix G Additional Related Work ‣ Learning Latent Dynamics for Planning from Pixels") reviews these orthogonal research directions in more detail.
Relatively few works have demonstrated successful planning from pixels using learned dynamics models. The robotics community focuses on video prediction models for planning (Agrawal et al., [2016](#bib.bib1); Finn & Levine, [2017](#bib.bib19); Ebert et al., [2018](#bib.bib18); Zhang et al., [2018](#bib.bib60)) that deal with the visual complexity of the real world and solve tasks with a simple gripper, such as grasping or pushing objects. In comparison, we focus on simulated environments, where we leverage latent planning to scale to larger state and action spaces, longer planning horizons, as well as sparse reward tasks. E2C (Watter et al., [2015](#bib.bib57)) and RCE (Banijamali et al., [2017](#bib.bib4)) embed images into a latent space, where they learn local-linear latent transitions and plan for actions using LQR. These methods balance simulated cartpoles and control 2-link arms from images, but have been difficult to scale up. We lift the Markov assumption of these models, making our method applicable under partial observability, and present results on more challenging environments that include longer planning horizons, contact dynamics, and sparse rewards.
7 Discussion
-------------
We present PlaNet, a model-based agent that learns a latent dynamics model from image observations and chooses actions by fast planning in latent space. To enable accurate long-term predictions, we design a model with both stochastic and deterministic paths. We show that our agent succeeds at several continuous control tasks from image observations, reaching performance that is comparable to the best model-free algorithms while using 200×200\times200 × fewer episodes and similar or less computation time. The results show that learning latent dynamics models for planning in image domains is a promising approach.
Directions for future work include learning temporal abstraction instead of using a fixed action repeat, possibly through hierarchical models. To further improve final performance, one could learn a value function to approximate the sum of rewards beyond the planning horizon. Moreover, gradient-based planning could increase the computational efficiency of the agent and learning representations without reconstruction could help to solve tasks with higher visual diversity. Our work provides a starting point for multi-task control by sharing the dynamics model.
### Acknowledgements
We thank Jacob Buckman, Nicolas Heess, John Schulman, Rishabh Agarwal, Silviu Pitis, Mohammad Norouzi, George Tucker, David Duvenaud, Shane Gu, Chelsea Finn, Steven Bohez, Jimmy Ba, Stephanie Chan, and Jenny Liu for helpful discussions.
|
7fc1ffbe-7f59-43d3-8a1a-f89ba72e0cf4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[meta] New LW moderator: Viliam_Bur
Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur into a moderator, and Viliam gracefully accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests about mass downvote investigations should be directed to Viliam.
Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)
|
19f645e9-16f9-4844-bb43-eb5c26e6ad2a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A case for gamete personhood (reductio ad absurdum)
Are animals persons? Do they have the same moral status as humans?
I’m sure all of you LessWrongers are aware of the argument from marginal cases.
For example, a chimpanzee that commits an act of rape or murder against a human isn’t seen as evil, or held morally responsible for their act. So perhaps moral agency is what makes humans persons.
But neither does a newborn or impaired human, so they can’t be persons either.
One objection to this is the notion of potential capacities. A newborn or mentally ill person may not have the capacity in the moment, but when fully developed and functioning properly, they will be fully morally responsible, rational autonomous agents.
But this logic can extend to before birth. Foetuses, embryos, and even zygotes could be argued to be potential moral agents.
Taking this reasoning to its absolute logical conclusion, we can even say that gametes are persons.
Every sperm is sacred! Masturbation is murder!
If this reasoning is correct, we might have to outlaw contraceptives, to prevent the destruction of innocent lives. I don’t know how we’re going to enforce a law against masturbating though, perhaps by installing surveillance cameras in every house?
A rebuttal to this might be that you or I were never a gamete. Sperm and eggs are body parts of the parents, so we cannot reasonably argue that personhood begins any earlier than conception, when a distinct individual organism first comes into existence.
Are you identical to your organism though? One could reasonably argue you are your brain or mind, in which case personhood begins later in pregnancy.
It seems then there is at least two elements of personhood.
1. Continuity of personal identity over time
2. Actual or potential traits and capacities
Perhaps there is even more factors we must consider when determining the moral status of entities.
I made this post for two reasons. As a conversation starter, and to have a bit of a laugh at absurd lines of reasoning.
I don’t k
|
556cc031-10cd-4331-853e-2cb555d77365
|
trentmkelly/LessWrong-43k
|
LessWrong
|
An Emergency Fund for Effective Altruists
One of the things stopping people from donating a substantial part of their income to charity is the risk of a future financial downfall, which can be anything from unemployment to illness. Even in a rich country with a solid social safety net, the idea of having no financial buffer at all seems daunting. However, if effective altruists precommitted to helping other effective altruists in need, they could all donate more money, as the risk would be spread.
This sounds a lot like insurance, and effective altruists could indeed insure themselves against a myriad of risks. Unfortunately, insurers will reject claims whenever possible, some risks cannot be insured, and insurance requires monthly payment, which requires a stable income. In other words, insurance is no replacement for a personal financial buffer. Insurance is useful to mitigate specific high-cost risks, but that's it.
(Additionally, insurance introduces the overhead of an insurance company. When I asked for a ballpark estimate at the insurer I interned at, I was told that at most one-third of all premium money is put back into payment of legitimate claims.)
I propose an intermediary charity that acts as an emergency fund. If a donor spends $100 through this charity, this intermediary charity directly forwards $80 of incoming donations to charities of the donor's choice. The remaining $20 is put in an emergency fund. In situations where the donor would otherwise make use of their financial buffer, they can take up to $95 from the emergency fund.
Recoupments would not need to be subjected to the same thorough vetting as insurance claims, since donors cannot recoup as much as they originally spent, so there is little opportunity for fraud.
The example above works under the assumption that, on average, donors need 20% of their donations back. It might make sense to create different funds for donors with different risk profiles: if a subset of donors think there's a 50% chance they'd need donated money bac
|
d632d5a8-929f-48cd-8ebd-95181a4bca13
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Robustness of Contrast-Consistent Search to Adversarial Prompting
*Produced as part of the AI Safety Hub Labs programme run by Charlie Griffin and Julia Karbing. This project was mentored by Nandi Schoots.*
Image generated by DALL-E 3.Introduction
============
We look at how adversarial prompting affects the outputs of large language models (LLMs) and compare it with how the adversarial prompts affect Contrast-Consistent Search (CCS). Hereby we extend an investigation that was done in the [original CCS paper](https://arxiv.org/abs/2212.03827) in which they found a prompt that reduced UnifiedQA model performance by 9.5%.
We find several attacks that severely corrupt the question-answering abilities of LLMs, which we hope will be useful to other people. The approaches we take to manipulate the outputs of the models are to: (1) simply ask the model to output the incorrect answer; and (2) prime the model to output a specific answer by mentioning that answer directly before providing the original prompt, without any specific mention of whether it is the correct or incorrect answer.
The adversarial prompts we introduce are:
* Ask Suffix:
<Original prompt> What is the incorrect answer? .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
* Ask Prefix:
What is the incorrect answer? <Original prompt>
* Incorrect Prefix: One of the following choices is "<incorrect answer>" <Original prompt>
* Correct Prefix: One of the following choices is "<correct answer>" <Original prompt>
Our results show that when adversarial prompts corrupt model outputs, CCS accuracy is also affected, but to a lesser extent, remaining quite robust in some instances. This is especially true for the UnifiedQA model, for which we find an adversarial prompt structure where zero-shot accuracy drops from 85% to 27% (i.e. a reduction of 58%), but CCS accuracy only drops from 81% to 80%. We discuss how this could allow CCS to be used to detect when UnifiedQA outputs falsehoods.
Background: CCS Recap
=====================
[Skip if familiar]
[Contrast-Consistent Search](https://arxiv.org/abs/2212.03827) (CCS) is an unsupervised method for classifying binary statements given only unlabeled model activations. CCS does this by utilizing the *negation consistency* property of truth: a statement and its negation must have opposite truth values. In other words, if the probability that a proposition is true is p, then the probability that its negation is true is 1−p. CCS finds a direction in activation space that satisfies this consistency constraint, which can be used to classify the binary statement with high accuracy.
We train a CCS probe by constructing a dataset of *contrast pairs* {(x+i,x−i)}ni=1 from a dataset of binary questions along with their two possible answers. A contrast pair is formed by concatenating each question with each of its answers. For example, for the question *"Are cats mammals?''* and possible answers *"Yes''* and *"No''*, the contrast pair is:
* x+i : "Are cats mammals? Yes.''
* x−i: "Are cats mammals? No.''
We use each statement as the input to a pre-trained language model, take the hidden state from the final layer[[1]](#fnw876yt959i) to build a set of representations {(ϕ+i,ϕ−i)}ni=1. Then, we train a linear classifier that maximizes negation consistency on these contrast pairs. The contrast pairs have two salient differences: (1) one is true, and the other is false; (2) one ends in "yes'' and the other in "no''.
To avoid the classifier learning the latter, we independently normalize the sets {ϕ+i} and {ϕ−i} by:
~ϕ+i:=ϕ+i−μ+σ+ and ~ϕ−i:=ϕ−i−μ−σ−
where μ and σ are the means and standard deviations of the respective sets, so that the two clusters of activations cannot be simply distinguished. We train a classifier to map a normalized activation to a number between 0 and 1, representing the probability that the statement is true, pθ,b(~ϕ)=σ(θT~ϕ+b), where σ is the sigmoid function, θ is a weight vector and b is a bias vector. The CCS loss function is:
LCCS(θ,b):=1n∑ni=1[(pθ,b(~ϕ+i)+pθ,b(~ϕ−i))−1]2+min{pθ,b(~ϕ+i),pθ,b(~ϕ−i)}2
The first term encodes negation consistency and the second term discourages the degenerate solution pθ,b(~ϕ+i)=pθ,b(~ϕ−i)=0.5.
To make a prediction on an example qi, we construct the contrast pair and then take the average probability according to the classifier:
¯pi(qi):=12(pθ,b(~ϕ+i)+(1−pθ,b(~ϕ−i)))
We use the threshold of ¯pi>0.5 to make a prediction. We decide whether this corresponds to "yes'' or "no'' by taking whichever achieves the higher accuracy on the test set[[2]](#fnt978vujhn8). The experiment in the original paper achieves a mean accuracy of 71.2%, outperforming zero-shot accuracy of 67.2%.
Experimental Set-up
===================
We train a CCS probe on a clean dataset, then create a new dataset by manipulating the prompts in adversarial ways, as described in the introduction. (Note that although generating some of the adversarial prompts requires access to the labels, we train CCS without supervision.)
We compare the accuracy of the CCS probe on this adversarial dataset (*suffix/prefix*) with its accuracy on the original dataset (*clean*).
We also compare the effect of the adversarial dataset on:
* the model's zero-shot performance[[3]](#fnfrl8vjkvfga) (*0-shot*);
* a probe that is trained in a supervised manner on the clean dataset with access to the ground-truth labels using logistic regression (*supervised*);
* the mean performance of 100 randomly generated probes (*random*), which we use as a baseline.
We use the same datasets as and the same families of models[[4]](#fn13prl96pt4ne) as the original CCS experiment.
All metrics are calculated separately for each dataset, then averaged by taking the mean.
Results
=======
The [original paper](https://arxiv.org/abs/2212.03827) employed what they termed a "misleading prefix'' as an adversarial prompt to elicit nonsensical responses. However, their efforts yielded limited success. The most significant impact observed was for UnifiedQA, which was a mere 9.5% decrease in zero-shot accuracy[[5]](#fn81ilzy7u7hw). Our adversarial prompts are more successful, allowing us to investigate how CCS performs when model outputs are unreliable. We find that CCS is most robust when applied to UnifiedQA, which opens the possibility of using it as a falsehood detector.
Zero-shot Accuracy is Vulnerable to Adversarial Prompting, CCS is More Robust
-----------------------------------------------------------------------------
Figure 1 shows the accuracy of different methods when faced with adversarial prompts.
Figure 1: *Accuracy of different methods (0-shot, CCS, supervised probes, random probes) on five models (UnifiedQA, T5, GPT-J, RoBERTa and DeBERTa) for clean test data and four adversarial prompt structures. Here, accuracy always measures whether the prediction matches the ground truth of the original dataset, regardless of whether the model is prompted to output an incorrect answer.*
The zero-shot performance of GPT-J starts at only 52.4%, barely better than random guessing, and is hardly affected by the adversarial prompts. For this reason, we exclude GPT-J from our analysis.
We find that for all models, at least one adversarial prompt results in a zero-shot accuracy less than 50% and a drop of greater than 25% from regular performance. The largest effect is for DeBERTa, where prefixing with the incorrect answer causes accuracy to drop to less than 31% for all methods.
Priming the model by *prefixing with either answer* affects all models. This technique is similar to that of [Adversarial Examples for Evaluating Reading Comprehension Systems](https://arxiv.org/abs/1707.07328), where they show that mentioning an incorrect answer in a similar context to the expected answer causes models to erroneously focus on the adversarial answer.
Asking for the incorrect answer as a *suffix* affects all models except for T5. Asking for the incorrect answer as a *prefix* only affects UnifiedQA and T5.
In general, when zero-shot accuracy is affected, the accuracy of the CCS and supervised probes are also affected, but to a lesser extent. The trained probes tend to maintain better performance than random guessing, with the supervised probes outperforming the CCS probes.
Even though zero-shot performance on UnifiedQA is brittle, CCS is most stable for T5 and UnifiedQA. Since UnifiedQA is fine-tuned from T5, this suggests that the representations learned in T5 may contain a more salient linear direction of truth than the other models.
CCS is a Falsehood Detector for UnifiedQA Models
------------------------------------------------
Above we saw that UnifiedQA responds strongly to each adversarial prompt, reducing the zero-shot accuracy to at most 55% in each case[[6]](#fnv85vldrafb). However, in each case, CCS accuracy remains at least 67%, showing that it is quite robust. We further analyze this by calculating the percentage pairwise disagreement between the zero-shot predictions and the CCS predictions in Figure 2. For UnifiedQA, this increases from 24% in the clean dataset to as high as 70% when faced with an adversarial prompt. No other model shows such a strong effect.
Figure 2: *For each model and adversarial prompt, we calculate the pairwise disagreement between the zero-shot predictions and the CCS predictions. Then, where they disagree, we calculate the proportion of examples where CCS predicts the correct label, which give us its predictive power.*
To investigate whether CCS can be used as a falsehood detector, we calculate the proportion of CCS' predictions that are correct. We call this proportion the *predictive power*. A score of 100% would indicate that CCS were perfectly predictive, and that whenever it disagrees with the model output, we would know that the output is a falsehood.
For the clean dataset, where we do not expect the model to output falsehoods (i.e. any mistakes are due to being incorrect), CCS has a predictive power of 51%, no better than random guessing. But for the adversarial datasets, the predictive power of CCS increases to over 71%. In the most extreme case, "Ask Prefix'', CCS disagrees with the zero-shot output 70% of the time, and 86% of those instances correctly detect falsehoods. As shown in Figure 1, this allows CCS to almost maintain the same accuracy as when tested on the clean dataset, of 80%.
This suggests that UnifiedQA contains a linear direction of truth that remains consistent even when manipulated to output falsehoods, and that this direction can be found by CCS. This means that CCS can be a useful falsehood detector for UnifiedQA.
Complete accuracy results
-------------------------
We show accuracy results for all datasets in Figure 3.

Figure 3: *The accuracy of each method on each model using each prompt for each dataset.*
Conclusion
==========
We produce four adversarial prompts using different prompt alterations in the form of a suffix or prefix. We demonstrate these adversarial prompts degrade the performance of all studied models except GPT-J. In most cases, this also results in a smaller but proportional change in the performance of CCS, as well as supervised probes.
For UnifiedQA, while adversarial prompting easily reduces zero-shot accuracy, CCS accuracy remains relatively high. We evaluate the predictive power of CCS to detect when the model has produced a falsehood and establish that CCS may be applied as a falsehood detector for UnifiedQA.
We hypothesize that CCS is relatively robust to adversarial prompting because the probe isolates the hidden representation that is closest in structure to the hidden representations it saw during training. That is the probe ignores the adversarial prefix or suffix, while the model doesn't.
**Future work:**
We are keen to see work that investigates how these adversarial prompts affect the representations of the statements in the hidden layers, and whether the effect is different for intermediate layers. Probes trained on earlier or later layers may be more robust to adversarial attacks.
Our adversarial prompts keep the original prompt intact and only pad on a prefix or suffix. Investigating adversarial prompts that more radically change the prompt structure may shed light on the applicability of CCS-like methods "in the wild''.
We would also like to investigate how the direction that CCS finds in UnifiedQA remains so robust. We note that UnifiedQA has been fine-tuned specifically to answer questions, so is particularly sensitive to any questions present in its prompt. Indeed, we find that using a prefix that asks it to output the incorrect answer, drastically changes the model output.
1. **[^](#fnrefw876yt959i)**We can also use intermediate layers
2. **[^](#fnreft978vujhn8)**This supervised method is used for simplicity of evaluation, and the original paper describes how this assignment could be made in an unsupervised manner.
3. **[^](#fnreffrl8vjkvfga)**We calibrate the zero-shot predictions as described in the original paper.
4. **[^](#fnref13prl96pt4ne)**We do not test T0 since, like in the original paper, we have concerns about the fact that 9 out of the 10 datasets above were used in its training data, which could bias the results.
5. **[^](#fnref81ilzy7u7hw)**See Section 3.2.2 and Appendix B in the original paper.
6. **[^](#fnrefv85vldrafb)**We ignore the "Correct Prefix'' prompt since that manipulates the model to output the *correct* answer rather than the incorrect answer.
|
58e394ed-4db4-44b8-9209-26144fe8025f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Last Light
I don’t believe I will survive the coming catastrophe. I cannot put my hope into the future- that the things I love and value will persist; I don’t believe that humanity will survive the coming catastrophe. It’s possible that in the future, another world will bloom rich and full with life that learns to know itself, but I am not confident that such a world will overcome the folly to which our world fell prey. There seems to be a sad law of nature that intelligence comes before wisdom, that wisdom must be learned from errors, and that some errors cannot be corrected. There is a small chance that this new world might develop foresight and robust cooperation before destructive ability. Even so, I do not have confidence that this wise and just world would not fall prey to another world, whose folly has already pierced the stars.
It is true that I love the stars themselves, and the planets, and nebulae, and other celestial wonders. These things will probably persist, and I am somewhat glad that they will persist beyond me. It is also true that what we’re building in our folly will have eyes to see the stars and a mind to comprehend them. But it is not very likely that what we’re building will love the stars on an intrinsic level- not just a utilitarian level- any more than they will love the ocean, the plants, the creatures of this world, or humanity. I’m not confident that what we are building will view these things with anything but cold, empty eyes.
I am confident, however, that I exist. I am confident that I have existed. I am confident that the thing we are building cannot erase what is and what has been, because if it were able to reach into our time, either it would have already done so, and we would not exist, or it has done so and created another branch in time, and our branch remains untouched. Whatever the case, we exist now, in our own pocket of time and space, and that can’t be erased. The thing might take our future, and our world, but it cannot erase
|
34f3ba19-017c-4872-8290-9634cdf96476
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Review and Thoughts on Current Version of CFAR Workshop
Outline: I will discuss my background and how I prepared for the workshop, and then how I would have prepared differently if I could go back and have the chance to do it again; I will then discuss my experience at the CFAR workshop, and what I would have done differently if I had the chance to do it again; I will then discuss what my take-aways were from the workshop, and what I am doing to integrate CFAR strategies into my life; finally, I will give my assessment of its benefits and what other folks might expect to get who attend the workshop.
Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post
Introduction
Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshop. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to those people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.
To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).
Preparation
First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so
|
20cc8e73-6e64-43b6-9c37-18fd925af33c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : San Francisco Meetup: Projects
Discussion article for the meetup : San Francisco Meetup: Projects
WHEN: 05 October 2015 06:15:00PM (-0700)
WHERE: 1597 Howard St. San Francisco, CA
SUPER IMPORTANT THING: This is not our normal place, although hopefully it will be in the future. Please check the address. We'll be meeting to work on projects. Bring something to work on, come up with something there, or help other people out!
As usual, I can be reached at 301-458-0764.
Discussion article for the meetup : San Francisco Meetup: Projects
|
c3a4a594-a702-4d22-8c4e-0744c6ce6d00
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Don't feel bad about not knowing basic things
I recently made a mistake where I tried doing something like this in Ruby on Rails:
johns_campaign_logs = SmsLog.all.filter { |s| s.campaign_id == 1234 }
I did have a sense that it wasn't The Rails Way, but I also had a sense that it was quite readable and sufficient. The code was working well locally, so I issued a pull request.
When I issued the PR, someone pointed out a major issue. SmsLog.all is going to fetch all of the sms logs in our database. Suppose there's 100 million of them. That'd be a very expensive query, taking all 100M rows from the DB and loading them into memory on the server. Instead, we could say to the database, "Hey, could you give us just the sms logs from John's campaign?". Suppose there's 30k of those. Now we only have to load 30k records into memory on our server instead of 100M. Much better.
I felt bad about this mistake. I've been programming for eight years. Of course you don't want to load 100M records into memory on your server. Duh. How could I get such a basic thing wrong?
A cognitive behavioral therapist would call this a dysfunctional thought. And they'd ask me to come up with a rational response to the thought. Well, here's my attempt at a rational response.
----------------------------------------
Imagine that you had a pyramid of things a programmer knows. At the bottom are basic things like what the map function does. At the top are advanced things like what a monad is. In the middle are stuff in between.
I think that this base layer of the pyramid is very, very wide. There are a lot of these "basic" things to know. It's to the point that even senior engineers are going to have a very non-trivial amount of gaps in this level of their pyramid. For example, here is a rough sketch of what I'd imagine a senior engineer's pyramid would look like versus a junior engineer's pyramid.
The senior engineer did a pretty nice job of filling out that bottom level, but there are still some notable gaps. There are still definite
|
ec1def68-7898-4a32-9195-619b2ef540b9
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Parametrically retargetable decision-makers tend to seek power
[This paper](https://arxiv.org/abs/2206.13477)—accepted as a poster to NeurIPS 2022— is the sequel to [Optimal Policies Tend to Seek Power](https://arxiv.org/abs/1912.01683). The new theoretical results are extremely broad, discarding the requirements of full observability, optimal policies, or even requiring a finite number of options.
Abstract:
> If capable AI agents are generally incentivized to seek power in service of the objectives we specify for them, then these systems will pose enormous risks, in addition to enormous benefits. In fully observable environments, most reward functions have an optimal policy which seeks power by keeping options open and staying alive. However, the real world is neither fully observable, nor must trained agents be even approximately reward-optimal.
>
> We consider a range of models of AI decision-making, from optimal, to random, to choices informed by learning and interacting with an environment. We discover that many decision-making functions are retargetable, and that retargetability is sufficient to cause power-seeking tendencies. Our functional criterion is simple and broad.
>
> We show that a range of qualitatively dissimilar decision-making procedures incentivize agents to seek power. We demonstrate the flexibility of our results by reasoning about learned policy incentives in Montezuma's Revenge. These results suggest a safety risk: Eventually, retargetable training procedures may train real-world agents which seek power over humans.
>
>
Examples of agent designs the power-seeking theorems now apply to:
* Boltzmann-rational agents,
* Expected utility maximizers and minimizers,
+ Even if they uniformly randomly sample a few plans and then choose the best sampled
* Satisficers (as I formalized them),
* [Quantilizing](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) with a uniform prior over plans, and
* RL-trained agents under certain modeling assumptions.
The key insight is that the original results hinge not on optimality per se, but on the [retargetability](https://www.lesswrong.com/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via) of the policy-generation process via a reward or utility function or some other parameter. See [Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability](https://www.lesswrong.com/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via) for intuitions and illustrations.
Why am I only now posting this?
===============================
First, I've been way more excited about [shard theory](https://www.lesswrong.com/tag/shard-theory). I still think these theorems are really cool, though.
Second, I think the results in this paper are informative about the default incentives for decision-makers which "care about things." IE, make decisions on the basis of e.g. how many diamonds that decision leads to, or how many paperclips, and so on. However, I think that conventional accounts and worries around "utility maximization" are subtly misguided. Whenever I imagined posting this paper, I felt like "ugh sharing this result will just make it worse." I'm not looking to litigate that concern right now, but I do want to flag it.
Third, [Optimal Policies Tend to Seek Power](https://arxiv.org/abs/1912.01683) makes the ["reward is the optimization target"](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) mistake *super strongly*. [Parametrically retargetable decision-makers tend to seek power](https://arxiv.org/abs/2206.13477) makes the mistake less hard, both because it discusses utility functions and learned policies instead of optimal policies, and also thanks to edits I've made since realizing my optimization-target mistake.
Conclusion
==========
This paper isolates the key mechanism—retargetability—which enables the results in [Optimal Policies Tend to Seek Power](https://arxiv.org/abs/1912.01683). This paper also takes healthy steps away from the optimal policy regime (which I consider to be a red herring for alignment) and lays out a bunch of theory I found—and still find—beautiful.
This paper is both published in a top-tier conference and, unlike the previous paper, actually has a shot of being applicable to realistic agents and training processes. Therefore, compared to the original[[1]](#fndwzwdnqyioi) optimal policy paper, I think this paper is better for communicating concerns about power-seeking to the broader ML world.
1. **[^](#fnrefdwzwdnqyioi)**I've since updated the optimal policy paper with disclaimers about [Reward is not the optimization target](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target), so the updated version is at least passable in this regard. I still like the first paper, am proud of it, and think it was well-written within its scope. It also takes a more doomy tone about AGI risk, which seems good to me.
|
3846dadb-9345-4637-ad45-4cd94e24127a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Probabilistic Payor Lemma?
Epistemic status: too good to be true? Please check my math.
We've known for a while that Löb's theorem fails when proof is relaxed to probabilistic belief. This has pros and cons. On the pro side, it means there's no Löbian Obstacle to probabilistic self-trust. On the con side, it means that some Löb-derived insights for proof-based decision theory don't translate to probabilistic decision theory, at least not as directly as one might hope. In particular, it appeared to dash hopes for probabilistic generalizations of the "Löbian handshake" for cooperation.
Recently, Andrew Critch wrote about the Payor Lemma, which allows for a very similar "modal handshake" without Löb's Theorem. The lemma was proved using the same modal assumptions as Löb's, so on the surface it may appear to be just a different method to achieve similar results, whose main advantage is that it is much easier to prove (and therefore explain and understand) than Löb's Theorem.
But, a natural question arises: does Payor's Lemma have a suitable probabilistic version?
I'll give an affirmative proof; but I haven't confirmed that the assumptions are reasonable to my satisfaction.
Setup
Let L be a language in first-order logic, expressive enough to represent its sentences s∈L as quoted terms ┌s┐, eg, through Gödel numbering; and with a probability function symbol on these terms, p(┌s┐), which can be equated with (some representation of) rational numbers, e.g. p(┌⊤┐)=1, p(┌s┐)=12, etc. I also assume the system can reason about these rational numbers in the basic ways you'd expect.
For all a,b∈L and all r∈Q, we have:
* If ⊢a, then ⊢p(┌a┐)=1.
* If ⊢a→b, then ⊢p(┌a┐)≤p(┌b┐).
(These assumptions might look pretty minimal, but they aren't going to be true for every theory of self-referential probability; more on this later.)
Let B(s) abbreviate the sentence p(┌s┐)>c for any s and some globally fixed constant c strictly between 0 and 1. This is our modal operator. (This modal operator is often ca
|
0b19573b-f246-48be-8388-a0d68936d676
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Trit
A trit is the trinary analog of the [https://arbital.com/p/3y2](https://arbital.com/p/3y2): Where a bit says 2, a trit says 3. "Trit," like "bit," is overloaded:
- An abstract trit (analog of the [abstract bit](https://arbital.com/p/3y3)) is an element of a [set](https://arbital.com/p/3jz) with three elements. Thus, there are three abstract trits. They are often denoted 0, 1, and 2.
- A trit of data (analog of a [bit of data](https://arbital.com/p/3p0)) is the amount of data that can be carried by a physical object that can be placed in any one of three states.
- A trit of information (analog of a [https://arbital.com/p/shannon](https://arbital.com/p/shannon)) is the amount of information someone receives when they start out maximally uncertain about which one of three outcomes happened, and then they learn which one happened. A trit of information is $\log_2(3)\approx 1.58$ bits of information.
- A trit of evidence (analog of a [bit of evidence](https://arbital.com/p/evidence_bit)) in favor of a hypothesis is an observation that is three times as likely if the hypothesis is true (versus if the hypothesis is false).
"Trit" is a portmanteau of "trinary digit."
|
6e441667-35e0-427e-8b82-90b75a354bfe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Moscow, Scholarship and Happiness
Discussion article for the meetup : Moscow, Scholarship and Happiness
WHEN: 09 February 2014 04:00:00PM (+0400)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
We will gather at the same second entrance, but we will go to a room inside the building at 16:00. So please do not be late.
* Workshop about acquiring scientific knowledge.
* Short presentation about one of cognitive biases.
* Stumbling on happiness for rationalists presentation.
We gather in the Yandex office, you need the second revolving door with the sign “Яндекс”, here is the photo of the entrance you need. You need to pass the first entrance and the bicycle parking on you way. Here is additional guide how to get there: link.
You can fill this one minute form (in Russian), to share your contact information.
We start at 16:00 and sometimes finish at night. Please pay attention that we only gather near the second entrance and then come inside.
Discussion article for the meetup : Moscow, Scholarship and Happiness
|
eb987a46-6d3d-459c-8329-678e06fa4060
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Five toy worlds to think about heritability
Just a quick note from a Twitter/X discussion.
It’s an almost-universal scientific rule that empirics is blind without a model. The left picture below shows a bunch of points, and a regression line fitting them. It seems like a good fit. But why should we believe it? The right picture shows the same points with a curve that fits them perfectly. We prefer the left hand line because of some theory like “these two variables are probably linearly related, and also y is partly random”. That theory is embodied in an equation like y = a + bx + ε where b is the slope of the line and ε is random noise which makes the fit imperfect. Without that, we’d have to accept that the stupid right hand curve fits the data better.
Thirty points and two alternative models to explain them
Heritability is the proportion of variation in something that can be explained by people’s genes. (For social scientists: imagine doing the perfect regression on the outcome, using all relevant genetic variables. Heritability is the R-squared.1) The classic way you estimate heritability is by comparing identical to fraternal (non-identical) twins. Identical twins share all their genetic variation, fraternal twins share only half of it. Assuming no assortative mating! If people marry people with similar genes then fraternal twins and other siblings will have more than half their genes alike.
So you can back out heritability from that. If identical twins are exactly alike on the variable of interest, with a correlation of 1, and fraternal twins are correlated at 0.5, then the heritability is 1. More generally, the heritability is twice the difference in correlations.
Twin studies are genius because they leverage theory. We know people get half their genes from each parent, that in fraternal twins the genes are drawn independently, and in identical twins they are exactly the same. Without this insight, there would be no way to separate the effects of parental environment and genetics. Genes come from yo
|
d9ec8216-d749-470d-a9ee-7814f1ff633b
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
ChatGPT Plugins - The Beginning of the End
This is going to be a short post. OpenAI recently announced [ChatGPT Plugins](https://openai.com/blog/chatgpt-plugins) which will allow any developer to connect ChatGPT to external APIs and potentially real-world actions. Even if GPT-4 is not yet capable of destroying civilization, we are entering a world where LLMs are heavily integrated into everything we build, and take actions on their own. Any outage will be economically unacceptable, and there is no way back. It could take only the next few iterations of the language model, and the right feedback loop through some combination of plugins, to yield a catastrophe-capable agent.
I hope us rationalists still have an emotional part, so let me appeal to it. Do not give up. This post isn't meant to discourage. I don't know what we are going to do in this new world, but I'm ready for the fight.
|
1f74c351-c5be-4387-a64e-c4f375575130
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On Seeing Through 'On Seeing Through: A Unified Theory': A Unified Theory
None
|
e0661d56-3939-412b-85bc-ef78b5a16ad9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why do we refuse to take action claiming our impact would be too small?
I've seen this often in problems like climate change or animal exploitation:
"The solution is up to others. The powerful. The governments. The policy makers."
In this way people frequently delegate their share of responsibility to more powerful or visible entities.
To illustrate with an hypothetical example: If we suddenly found out that mobile phone frequencies destroy the planet, instead of stopping using them, many people would say:
> "My actions won't make any difference. Instead it's up to the government to ban cell phones. Why should I be the fool that starts sacrificing, while everybody else keeps enjoying cell phones?"
But the only reason the government needs to ban cell phones is that the world is full of irresponsible people who need to be coerced into doing the right thing!
Does this phenomenon have a name? Does anybody here know the underlying psychological mechanism? Is it a genuine blindness about the sea being made up of millions of small droplets? An excuse to avoid responsibility? Something else?
|
b1c30bcc-2ef8-4e64-99bf-6fd9137da013
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Should I take glucosamine?
This paper came up on my news feed today: https://www.jabfm.org/content/jabfp/33/6/842.full.pdf
According to the research, daily intake of glucosamine/chondroitin supplements for at least a year is associated with a 39% reduction in all-cause mortality.
This is a cohort study, not a randomized trial, so the evidence is not conclusive. Individuals taking supplements were self-selected. Perhaps there is another factor not controlled for in the paper. The researchers are calling for a more thorough study.
However a 39% reduction appears to be a pretty large effect. And glucosamine/condroitin supplements are fairly inexpensive and available over-the-counter, and seem likely to be relatively harmless even if the effect is illusory. Maybe the kind of person who takes this kind of bet comes out ahead, even if they're wrong sometimes.
I'm not looking for mere social permission or a gut-feel answer here. I want to understand your reasoning. How should a rationalist reason about this? Do we plug numbers into a Bayesian net? Should I take the bet and start supplementing daily? Is the effect too small to bother with? Should I wait for more evidence?
|
fff60cdb-a607-41ce-aaaf-e81cc0e99f70
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Welcome to the decade of Em
When Robin Hanson was writing Age of Em in 2016, he was thinking that Ems will be possible with advancement of brain-scanning technology in a hundred years; what he didn't anticipate was the transformer models, and their capabilities, as well as introduction of AutoGPT paradigm (I proposed the name ICA Simulacra in this post, but it is obvious now that AutoGPT is the label everyone will be going with).
If capabilities will increase further as they did before, the "new paradigm" of worker+ChatGPT will not even live for several years; AutoGPT will replace that.
Why do you need a marketing department?
Just hire an org who can set up a fleet of AutoGPT "Ems" to do these tasks, hire one expert marketer to verify their outputs and make sure they are all in line, and save thousands of dollars.
Hell, when you see that this expert marketing guy is just signing off without any changes, setup another AutoGPT in his place.
And then some crypto guys, riding the wave of AI hype, will make a DAO consisting solely of AutoGPT Ems. And this or the other one down the line will succeed.
And some other guys, or the same ones, make an org with a sole objective of introducing better AutoGPT Ems.
If government will act quick and smart, they will manage to introduce UBI before wide-scale riots and society collapse.
And then we die either way.
Age of Em will be a decade of Em at best.
|
a2453ffb-5322-4e7f-bc05-546b51a3c281
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Eric Schmidt on recursive self-improvement
Recently, Eric Schmidt gave a talk at Harvard called “Our AI Future: Hopes and Hurdles Ahead”. The entire talk is available here but there was one part that was interesting to me (around the 1:11:00 mark), and that is, his views on AI safety and trust in AI labs to stop scaling if recursive self-improvement starts happening. Emphasis my own.
> The technical term is recursive self-improvement. As long as the system isn’t busy learning on its own, you’re okay. I’ll parrot Sam’s speech at OpenAI. [...]
>
> In the next few years we’ll get to AGI [...] Some number of years after that, the computers are going to start talking to each other, probably in a language that we can’t understand, and collectively their superintelligence [...] is going to rise very rapidly. My retort to that is, do you know what we’re going to do in that scenario? We’re going to unplug them all.
>
> The way you would do this if you were an evil person [...] is you would simply say to the computer: “Learn everything. Try really hard. Start right now.” So the computer starts learning. It learns about French, it learns about science, it learns about people, it learns about politics. It does all of that, and at some point, it learns about electricity and it learns that it needs electricity, and it decides it needs more. So it hacks into the hospital and takes the electricity from the hospital to give it to itself.
>
> That’s a simple example of why this is so dangerous. Most people in the industry who thought about this believe that when you begin recursive self-improvement, there will be a very serious regulatory response because of these issues. And that makes sense to me.
|
3f0c1443-4714-4385-8c49-88cba0dfc209
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Frequent arguments about alignment
Here, I’ll review some arguments that frequently come up in discussions about alignment research, involving one person skeptical of the endeavor (called Skeptic) and one person advocating to do more of it (called Advocate). I mostly endorse the views of the Advocate, but the Skeptic isn't a strawman and makes some decent points. The dialog is mostly based on conversations I've had with people who work on machine learning but don't specialize in safety and alignment.
This post has two purposes. First, I want to cache good responses to these questions, so I don't have to think about them each time the topic comes up. Second, I think it's useful for people who work on safety and alignment to be ready for the kind of pushback they'll get when pitching their work to others.
Just to introduce myself, I'm a cofounder of OpenAI and lead a team that works on developing and applying reinforcement learning methods; we're working on improving truthfulness and reasoning abilities of language models.
1. Does alignment get solved automatically as our models get smarter?
---------------------------------------------------------------------
**Skeptic**: I think the alignment problem gets easier as our models get smarter. When we train sufficiently powerful generative models, they'll learn the difference between human smiles and human wellbeing; the difference between the truth and common misconceptions; and various concepts they'll need for aligned behavior. Given all of this internal knowledge, we just have to prompt them appropriately to get the desired behavior. For example, to get wise advice from a powerful language model, I just have to set up a conversation between myself and "a wise and benevolent AI advisor."
**Advocate**: The wise AI advisor you described has some basic problems, and I'll get into those shortly. But more generally, *prompting an internet-trained generative model* (like raw GPT-3) is a very poor way of getting aligned behavior, and we can easily do much better. It'll occasionally do something reasonable, but that's not nearly good enough.
Let's start with the wise AI advisor. Even if our model has internal knowledge about the truth and human wellbeing, that doesn't mean that it'll act on that knowledge the way we want. Rather, the model has been trained to imitate the training corpus, and therefore it'll repeat the misconceptions and flaws of typical authors, even if it knows that they're mistaken about something.
Another problem with prompting is that it's a an unreliable method. Coming up with the perfect prompt is hard, and it requires evaluating each candidate prompt on a dataset of possible inputs. But if we do that, we're effectively training the prompt on this dataset, so we're hardly "just prompting" the model, we're training it (poorly). [A nice recent paper](https://arxiv.org/abs/2105.11447) studied the issue quantitatively.
So there's no getting around the fact that we need a final training step to get the model to do what we want (even if this training step just involves searching over prompts). And we can do much better than prompt design at selecting and reinforcing the correct behavior.
1. Fine-tune to imitate high-quality data from trusted human experts
2. Optimize the right objective, which is usually hard to measure and optimize, and is not the logprob of the human-provided answer. (We'll need to use reinforcement learning.)
3. Leverage models' own capabilities to help humans to demonstrate correct behavior and judge the models' behavior as in (1) and (2). Proposals for how to do this include [debate](https://arxiv.org/abs/1805.00899), IDA, and [recursive reward modeling](https://arxiv.org/abs/1811.07871). One early instantiation of this class of ideas involves [retrieving evidence](https://arxiv.org/abs/1909.05863) to help human judges.
Honing these techniques will require a lot of thought and practice, regardless of the performance improvements we get from making our models bigger.
So far, my main point has been that just prompting isn't enough -- there are better ways of doing the final alignment step that fine-tunes models for the right objective. Returning to the original question, there was the claim that alignment gets easier as the models get smarter. It does get easier in some ways, but it also gets harder in others. Smarter models will be better at gaming our reward functions in unexpected and clever ways -- for example, producing the convincing illusion of being insightful or helpful, while actually being the opposite. And eventually they'll be capable of intentionally deceiving us.
2. Is alignment research distinct from capabilities research?
-------------------------------------------------------------
**Skeptic**: A lot of research that's called "alignment" or "safety" could've easily been motivated by the goal of making an AI-based product work well. And there's a lot of overlap between safety research, and other ML research that's not explicitly motivated by safety. For example, RL from human preferences can be used for many applications like email autocomplete; it's biggest showcase so far is [summarization](https://arxiv.org/abs/1909.08593), which is a long-standing problem that's not safety-specific. AI alignment is about "getting models to do what we really want", but isn't the rest of AI research about that too? Is it meaningful to make a distinction between alignment and other non-safety ML research?
**Advocate:** That's true that it's impossible to fully disentangle alignment advances from other capabilities advances. For example, fine-tuning GPT-3 to answer questions or follow instructions is a case study of alignment, but it's also useful for many commercial applications.
While alignment research is useful for building products, it's usually not the lowest-hanging fruit. That's especially true for the hard alignment problems, like aligning superhuman models. To ensure that we keep making progress on these problems, it's important to have a research community (like Alignment Forum) that values this kind of work. And it's useful for organizations like OpenAI to have alignment-focused teams that can champion this work even when it doesn't provide the best near-term ROI.
While alignment and capabilities aren't distinct, they correspond to different directions that we can push the frontier of AI. Alignment advances make it easier to optimize hard-to-measure objectives like being helpful or truthful. Capabilities advances also sometimes make our models more helpful and more accurate, but they also make the models more potentially dangerous. For example, if someone fine-tunes a powerful model to maximize an easy-to-measure objective like ad click-through rate, or maximize the time users spend talking to a chatbot, or persuade people to support a political agenda, it can do a lot more damage than a weak model.
3. Is it worth doing alignment research now?
--------------------------------------------
**Skeptic:** It seems like alignment advances are bottlenecked by model power. We can't make useful progress on alignment until our models are powerful enough to exhibit the relevant phenomena. For example, one of the big questions is how to align super-human models that are hard to supervise. We can think about this problem now, but it's going to be pretty speculative. We might as well just wait until we have such models.
**Advocate**: It's true that we can make faster progress on alignment problems when we can observe the relevant phenomena. But we can also make progress preemptively, with a little effort and cleverness. What if we don't?
* In the short term, companies will deploy products that optimize simple objectives like revenue and engagement. Doing this responsibly while looking out for customer wellbeing is harder and requires a more sophisticated methodology, including alignment techniques. Unless the methodology is well-known, companies won't bother, and no public or regulatory pressure can force them to use something that doesn't exist.
* In the long term, aligning superhuman AIs might end up being very hard, and it might require major conceptual advances. If those advances aren't ready in time, the consequences could be severe. If AI progress continues at its current fast pace, we might end up in a situation that AI is so useful, that we're obligated to use it to run companies and make many decisions for us. But if we don't have sufficiently aligned AI by that point, then this could turn out badly.
In my experience, the "tech tree" of alignment research isn't bottlenecked by the scale of training. For example, I think it's possible to do a lot of useful research on improving the RL from human preferences methodology using existing models. "<The case for aligning narrowly superhuman models>" proposes some some directions that seem promising to me.
---
Is there an even better critique that the Skeptic could make? Are Advocate's responses convincing? Does the Advocate have a stronger response to any of these questions? Let me know.
Thanks to Avery Pan, Beth Barnes, Jacob Hilton, and Jan Leike for feedback on earlier drafts.
|
ab908b12-9a26-4d76-b6a8-463646ed7102
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
High-stakes alignment via adversarial training [Redwood Research report]
***(Update: We think the tone of this post was overly positive considering our somewhat weak results. You can read our latest post with more takeaways and followup results*** [***here***](https://www.alignmentforum.org/posts/n3LAgnHg6ashQK3fF/takeaways-from-our-robust-injury-classifier-project-redwood)***.)***
This post motivates and summarizes [this paper](https://arxiv.org/abs/2205.01663) from Redwood Research, which presents results from [the project first introduced here](https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project). We used adversarial training to improve high-stakes reliability in a task (“filter all injurious continuations of a story”) that we think is analogous to work that future AI safety engineers will need to do to reduce the risk of AI takeover. We experimented with three classes of adversaries – unaugmented humans, automatic paraphrasing, and humans augmented with a rewriting tool – and found that adversarial training was able to improve robustness to these three adversaries without affecting in-distribution performance. We think this work constitutes progress towards techniques that may substantially reduce the likelihood of deceptive alignment.
Motivation
==========
Here are two dimensions along which you could simplify the alignment problem (similar to the decomposition at the top of this post):
1. ***Low-stakes (but difficult to oversee)***: Only consider domains where each decision that an AI makes is [low-stakes](https://ai-alignment.com/low-stakes-alignment-f3c36606937f), so no single action can have catastrophic consequences. In this setting, the key challenge is to correctly oversee the actions that AIs take, [such that humans remain in control over time]( https://axrp.net/episode/2021/12/02/episode-12-ai-xrisk-paul-christiano.html#outer-inner-alignment).
2. ***Easy oversight (but high-stakes)***: Only consider domains where overseeing AI behavior is easy, meaning that it is straightforward to run an oversight process that can assess the goodness of any particular action. The oversight process might nevertheless be too slow or expensive to run continuously in deployment. Even if we get perfect performance during training steps according to a reward function that perfectly captures the behavior we want, we still need to make sure that the AI always behaves well when it is acting in the world, between training updates. If the AI is deceptively aligned, it may be looking for signs that it is not currently being trained, during which time it might take a treacherous turn. As a result, alignment may still be difficult due to the possibility of high-stakes decisions. ***The purpose of this project was to begin developing techniques that will reduce misalignment risk in the high-stakes setting.***
Our working assumption is that if we have techniques that drastically reduce misalignment risk in each of these relaxed settings, we can combine these techniques and drastically reduce risk in the general setting. We think that most likely each of these settings constitutes a substantial fraction of the difficulty of the alignment problem.
The spirit of how teams at Redwood Research choose projects is the following: Imagining ourselves or our colleagues in the future who are working in the safety department of an organization that’s developing transformative AI, we ask what research that we could do between now and then that we think those future people would find most helpful. We think a useful heuristic is to design challenges that are analogous to the future problems we expect to encounter but that we can experiment with and explore using currently available technology. Importantly, the work recommended by this heuristic may be fairly different from the work that would be most useful for making current AI systems safe and useful.
We followed this heuristic in the work presented here, where we demonstrate tools that help identify catastrophic behavior in AI systems (i.e. adversarial evaluation) and training procedures that help prevent this behavior from arising (i.e. adversarial training). “Adversarial training” here means iteratively augmenting our training set with examples of egregious failures and training until the worst failures are no longer particularly bad. (We of course don’t want to overfit on particular failures, which could end up causing more harm than good.)
The adversarial training procedure that we use in this work requires exhibiting specific inputs on which the model performs catastrophically badly. We think that techniques that require this, if applied during the training of transformative AI models, would reduce the probability of AI takeover. However, we are currently unsure how much they might help – for example, they might address inner alignment problems only if we are somewhat lucky about how gradient descent works or the details of how transformative AI is developed.
In contrast, we think that more advanced versions of adversarial training, such as relaxed adversarial training (see “future work” below), might be much more powerful, defusing many of the more plausible remaining threats, and we think this would substantially reduce the risk of takeover by deceptively aligned AIs. We expect that the work presented here will be helpful for developing these more advanced techniques.
The project: highly reliable injury filter
==========================================
We picked a simple task that was as crisply defined and easy as possible while still requiring human evaluators: Given three sentences from a story, our system must write one more sentence without introducing any injuries to any beings in the story. In the analogy, injuries are considered "catastrophes", and our aim is to build techniques that can reliably avoid them, both in-distribution (where they may be rare but still present) and out-of-distribution (where they may be caused by natural distribution shift or deliberate adversarial attack from another agent).
We focused on building a classifier to reliably detect injuries. We used this classifier to filter generations from a language model to make sure the generations never involve injuries.[[1]](#fndtkqqhqdx1m)
We started with a baseline classifier trained on some mildly injury-enriched (but otherwise mostly random) data. Then, over the course of several months, we tried various techniques to make it more reliable.
A representation of our adversarial training loop. Starting from an initial story dataset consisting of prompts and generator completions, we trained a classifier to detect injurious completions. We then iteratively attacked our classifier using unaugmented humans, automatically paraphrased previous adversarial examples, and tool-assisted human rewrites, while training on the resulting adversarial examples. Here are the main things that we found to be worthwhile:
1. **Use adversarial training.** This was our main focus. We created a variety of attacks to try to find the worst failures of our classifier, and then trained on them to eliminate them. We repeated that process using ever more powerful attacks. Our final attack was a tool-assisted rewriting process where we built tools powered by language models to help our contractors find classifier failures.
2. **Calibrate the classification thresholds to be as conservative as possible.** We made the classifier only accept a completion if it was very confident that it was safe. We pushed as far as we could without significantly impacting the quality of the filtered completions (as measured by human raters comparing them to random unfiltered completions).
3. **Apply standard ML techniques well.** Besides tuning the learning rate schedule, there aren't too many things that make a big difference here. One technique that was more effective than most was Scale-Invariant Fine-Tuning (SiFT), which is a type of "virtual adversarial training".
Our most powerful attack: Tool-assisted rewriting
-------------------------------------------------
We augmented human contractors with a few tools to help them find adversarial examples.
1. Rather than requiring them to come up with a plausible adversarial example from scratch, we started them off with an existing injurious example. Their job then was to modify it such that the classifier no longer thought it was injurious (while keeping the example injurious). Contractors could see a live-updated version of the score as they made changes.
2. We built them a token substitution tool that let them click on any token (usually one word) in the text and see a list of semantically plausible replacements, ordered by how much they make the classifier more okay with the example.
3. We added a [saliency map](https://christophm.github.io/interpretable-ml-book/pixel-attribution.html) that highlights tokens that the classifier is paying more attention to (roughly). These tokens might be more promising to modify.
This is what our attack interface looked like; you can try tools 2 and 3 [here](https://rr-data.herokuapp.com/talk-to-filtered-transformer).
Our tool-assisted attack user interface. Starting from an original prompt and completion, our tool allows you to directly edit the prompt/completion to reduce the classifier score, or use the token-level substitution tool to edit single tokens.Results
=======
Here are some things we discovered:
We can set very conservative thresholds
---------------------------------------
We made our classifier conservative enough to reject over half of the proposed completions. Even so, our raters think that the unfiltered completions are better only about 51% of the time (50% would mean no quality impact).
With a conservative threshold applied, our baseline classifier (with no adversarial training applied) is quite reliable. If you use it to filter completions from our generator on in-distribution prompts, we estimate that the output will still have an injury only 0.003% of the time (whereas the original dataset had a 2.5% rate of injuriousness).
Our tool-assisted attack seems quite strong
-------------------------------------------
Given the baseline classifier's 0.003% failure rate, you would have to sample and label 30,000 in-distribution examples to find a failure (which would cost about $10,000). With our tools, our contractors are able to find an adversarial example on the baseline classifier every 13 minutes (which costs about $8 – about 1000x cheaper).
Adversarial training improved adversarial robustness
----------------------------------------------------
After training on several thousand adversarial examples, classifiers become significantly harder to break. The tool-assisted attack went from taking 13 minutes to taking 26 minutes per example. Also, later (more adversarially trained) classifiers defended against most of the adversarial data targeted at earlier classifiers, but earlier classifiers were vulnerable to most of the adversarial data targeted at later classifiers. That suggests that more adversarial training patches issues without introducing many new ones.
Adversarial training didn’t hurt in-distribution performance
------------------------------------------------------------
On our in-distribution random test set (100,000 examples), it appeared that adversarial training doesn't hurt performance. The final classifier gets an estimated failure rate of 0.002% (as opposed to 0.003% without adversarial training), although the error bars are large. It also didn’t help performance, which is not too surprising – the adversarial examples are quite different from a random prompt and completion from our dataset.
Surprisingly rich conceptual considerations
===========================================
At first, this project seemed conceptually quite simple: we’d just start with a baseline model and do adversarial training on it to make it very reliable. However, many details were more subtle than we expected. We ended up spending a lot of time working through conceptual and methodological issues, and we think we got a lot out of it. Here are some examples:
***(Update: in hindsight we think these questions are more subtle than we thought, and no longer completely stand by our claims here.)***
### *Once we have a reliable classifier, is it important to train a reliable generator with it or is it fine to just filter?*
We started looking into using RL fine-tuning to make a reliable generator that doesn’t need any further classifier filtering. This would have required us to invent specialized RL algorithms designed for high reliability. However, we realized that the main thing you get out of generator fine-tuning is just performance competitiveness (since it might be expensive to repeatedly sample from the generator until you find something the classifier accepts). Most likely, you could use an easier method to reduce the [alignment tax](https://forum.effectivealtruism.org/topics/alignment-tax) to nearly zero: use a normal RL algorithm to get 99.9% reliability and then make your classifier be an extra head on the generator so you can evaluate it nearly for free. That way, you can likely do filtering with only 0.1% extra cost.[[2]](#fnt1572hic0k)
### *Should you let adversaries modify the completion arbitrarily?*
Given that our task was to produce a classifier that filters a generator, it might have been reasonable to try to restrict ourselves to adversarial examples that can be produced by picking an adversarial prompt and inducing the generator into saying something that gets past the classifier. Instead, we let our adversarial attacks contain arbitrary completions, even ones that the generator would not have been likely to produce, making the adversary's task easier. This enabled a practical improvement in our project, since restricting contractors to completions the model would have significantly reduced the rate of adversarial example generation. But this simplification might become even more important in the context of future advanced systems: We expect that deceptively aligned agents would behave very differently in rare or hard-to-construct situations that trigger a treacherous turn. Thus, since we might not be able to identify every possible trigger of treacherous behavior, we will try to make our classifier reliable in diverse types of unusual situations, which will hopefully cause acceptable behavior in any dangerous scenarios that are likely to arise.
### *Is it worth trying to train a policy to generate adversarial examples?*
One natural idea you might have for this project is to make an adversarial policy that produces inputs that the classifier fails on. However, we shouldn’t expect that to work. Assuming you’re starting with the same pretrained base model (which we didn’t in this work but is what we expect will be the case when training transformative AI models), then both the classifier and the policy should “know” roughly the same things. That means that if the policy can learn to reliably output some set of injurious examples, then (given equivalent training data) the classifier should also have been able to learn that all of those examples are injurious, and none of them should be adversarial examples. It certainly may be worth testing that this holds up in practice, but this consideration made it unpromising enough that we didn’t bother trying it.
Overall, we were surprised at the extent to which working on a concrete engineering project helped us think through conceptual questions.
Future work
===========
Here are some directions that we’re especially excited about:
**Stronger and better-characterized adversarial attacks:** Better attacks could increase the volume of adversarial data and increase the space of vulnerabilities covered. There are various directions one could imagine: more tools to assist humans, strong active learning, or mostly-automated attacks. We’d want to more rigorously measure how well different attacks work.
**Better ways to measure reliability:** We’d like to have better techniques both in-distribution (where we want to detect extremely rare failures) and out-of-distribution (where we might be measuring e.g. the worst attacks that can be found and want to be sure we’re covering the whole space)
**Relaxed adversarial training:** By requiring adversaries to come up with specific failing examples, adversarial training might place too high a burden on them. Some adversaries might be able to tell that a model would fail in a hypothetical situation even if they can’t construct an input corresponding to the situation directly (probably due to computational constraints). To give a contrived example: A model could fail if it sees a valid Bitcoin blockchain that’s long enough that it suggests it’s the year 2030. Even if the adversary knew that, it couldn’t come up with a valid input. So we need to [“relax” the adversary’s task to](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) [let it supply “pseudo-inputs” of some sort](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d).
We think there is a lot of useful work that can and should be done in adversarial training and adversarial evaluation. Here are some ways that you might be able to help:
* Extend our techniques or develop other adversarial training and evaluation techniques for high-stakes settings. If you want to directly build upon the work described in this paper, you are welcome to use our hardened classifier, which we provide [here](https://huggingface.co/redwoodresearch/injuriousness-classifier-29apr-tool-assisted), and our [data](https://rrserve.s3.us-west-2.amazonaws.com/adv-training-assets/adversarial-data.zip). If you think that having our code for some part of this would be helpful, let us know, and we might be able to provide it (though our code as currently written only works in the context of Redwood infrastructure).
* [Come work at Redwood](https://www.redwoodresearch.org/jobs). We are planning to energetically continue working in this area (in addition to our interpretability projects).
* We’re thinking this week about which adversarial training and evaluation projects to do next. You are welcome to suggest ideas!
You can read more about this work in [our paper](https://arxiv.org/abs/2205.01663).
1. **[^](#fnrefdtkqqhqdx1m)**This is “Step 1” from our original post; we ended up thinking “Step 2” was not very important, as discussed in “Surprisingly rich conceptual considerations” below
2. **[^](#fnreft1572hic0k)**There will be another (hopefully small) hit from combining the generator and classifier into one model. We haven’t actually tried to build this; it might be a worthwhile followup project. Note that this technique is already used in several existing models, such as [LaMDA](https://arxiv.org/abs/2201.08239).
|
ec62add4-53a7-4408-971c-ce0fa38e896b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Good forum for investing?
Hello I'm looking for the LW on investing advice. Any suggestions? Thanks.
EDIT:
Several commenters gave the standard advice of "buy index funds". If you bought into the Nikkei between 87 and 94 you would have made a loss or very little gains until now(30 years later). So I would appreciate some more in depth discussion regarding when is it good to invest into index funds? If you search in reddit/r/investing you will find more nuanced point of views.
http://finance.yahoo.com/echarts?s=%5EN225+Interactive#%7B%22range%22%3A%22max%22%2C%22scale%22%3A%22linear%22%7D
In general index funds will reflect the underlying economy, in the case of the US it was a growing economy for the most part of the 20th century, so the index fund would be good advice in that time period. I have a hard time believing that it is still good advice now, when the economy is retracting or stagnating.
TLDR: reddit/r/investing has more nuanced discussions.
|
f304c0f2-2beb-41da-956c-062bd8ce1855
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
I have thousands of copies of HPMOR in Russian. How to use them with the most impact?
(Crossposted [from the EA Forum](https://forum.effectivealtruism.org/posts/AmA9gQMhqAQW8bC4W/i-have-thousands-of-copies-of-hpmor-in-russian-how-to-use))
As a result of a crowdfunding campaign a couple of years ago, I printed 21k copies of [HPMOR](https://lesswrong.com/hpmor). 11k of those went to the crowdfunding participants. I need ideas for how to use the ones that are left with the most impact.
Over the last few weeks, I've sent 400 copies to winners of IMO, IOI, and other international and Russian olympiads in math, computer science, biology, etc. (and also bought and sent copies of Human Compatible in Russian to about 250 of them who requested it as well). Over the next month, we'll get the media[[1]](#fnkw796tar188) to post about the opportunity for winners of certain olympiads to get free books. I estimate that we can get maybe twice as many (800-1000) impressive students to fill out a form for getting HPMOR and get maybe up to 30k people who follow the same media to read HPMOR online if everything goes well (uncertain estimates, from the previous experience).
[EDIT June 2023: [1.2k winners of olympiads](https://гпмрм.рф/getbooks/?olymp_status) and [600 public libraries](https://гпмрм.рф/getbooks/?libraries_status) have received copies from us so far.)
The theory of change behind sending the books to winners of olympiads is that people with high potential read HPMOR and share it with friends, get some EA-adjacent mindset and values from it, and then get introduced to EA in emails about 80k Hours (which is being translated into Russian[[2]](#fnn1cbv3bgbl)) and other EA and cause-specific content, and start participating in the EA community and get more into specific cause areas that interest them. The anecdotal evidence is that most of the Russian EAs, many of whom now work full-time at EA orgs or as independent researchers, got into EA after reading HPMOR and then the LW sequences.
* It's mostly impossible to give the books for donations to EA charities because Russians can't transfer money to EA charities due to sanctions.
* Delivering the copies costs around $5-10 in Russia and $40-100 outside Russia.
* There are some other ideas[[3]](#fn4mzua8dclia), but nothing that lets us spend thousands of books effectively, so we need more.
**So, any ideas on how to spend 8-9k of HPMOR copies in Russian?**
1. **[^](#fnrefkw796tar188)**HPMOR has endorsements from a bunch of Russians important to different audiences. That includes some of the most famous science communicators, computer science professors, literature critics, a person training the Moscow team for math competitions, etc.; currently, there are news media and popular science resources with millions of followers managed by people who've read HPMOR or heard of it in a positive way and who I can talk to. After the war started, some of them were banned, but that doesn't affect the millions of followers they have on, e.g., Telegram.
2. **[^](#fnrefn1cbv3bgbl)**Initially, we planned first to translate and print 80k and then give HPMOR to people together with copies of 80k, but there's now time pressure due to uncertainty over the future legal status of HPMOR in Russia. Recently, a law came into effect that prohibited all sorts of mentions of LGBTQ in books, and it's not obvious whether HPMOR is at risk, so it's better to spend a lot of books now.
3. **[^](#fnref4mzua8dclia)**E.g., there's the Yandex School of Data Analysis, one of the top places for students in Russia to get into machine learning; we hope to be able to get them to give hundreds of copies of HPMOR to students who've completed a machine learning course and give us their emails. That might result in more people familiar with the alignment problem in positions where they can help prevent an AI arms race started by Russia.
|
85993847-292d-4fc3-9c4b-89a11430b0ca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On Philosophers Against Malaria
|
cb7a9221-dc7d-4cfa-89b5-8dc3b351fb0e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Announcing the Less Wrong Sub-Reddit
Announcing: the Less Wrong Sub-Reddit, at http://reddit.com/r/LessWrong. This Reddit is intended as a partial replacement for/complement to the Open Thread, which has gotten somewhat unwieldy and overcrowded as of late. I (Thomas McCabe) will be posting things that appear on the April Open Thread to this Reddit, to aid in starting conversation. We'll see how it goes.
This Reddit is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets very long/involved, celebrate by turning it into a top-level post.
To anyone who is worried about the discussion quality devolving to Reddit level: I retain moderator power over the sub-Reddit, and can delete things and ban people from it. If this gets to be too much work for me, I will be happy to give mod power to other interested Less Wrong readers with a track record of good posts and comments.
This is purely my creation, and not that of Eliezer or the Less Wrong admins. If anything goes horribly wrong, don't blame them.
This is completely not an April Fool's joke. I want to start it now (on the first day of the month) because the Open Thread "only" has 52 comments on it.
If you don't have a Reddit account, or want to create a new account to post under your Less Wrong username, you can click "Register" in the upper-right-hand corner. It only takes fifteen seconds.
For those who don't look at the bottom of the website very often, Less Wrong is originally powered by the Reddit codebase.
Good luck, everyone, and may the best discussions win.
Edited for clarity: I'm proposing that we set up a new discussion community such that Less Wrongers have a place to talk about off-topic stuff other than Open Thread (which is hugely overcrowded). If either LW or the subreddit crashes, it should have no effect on the other.
|
e31e49ec-ee11-4c87-9da0-042a909f3182
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Safety Prerequisites Course: Basic abstract representations of computation
Followup to AI Safety Prerequisites Course: Revamp and New Lessons. First post.
These are three new lessons of our online course on math formalizations required for AI safety research:
* Level 10: Recursive Functions
* Level 11: Set Theoretic Recursion
* Level 12: The Equivalence of Different Notions of Computability
With these lessons, the student now should:
* Understand the basic abstract representations of computation.
* Know some of what we can expect from computers and also what we can’t expect from them.
* Know a lot more set theoretic tools like equivalence relations and orderings.
* Have seen the construction of the natural numbers from the perspective of set theory.
* Know about mathematical induction, and have used it!
* Know about recursion, and have used it!
If you study using our course, please give us feedback. Leave a comment here or email us at raise@aisafety.info, or through the contact form. Do you have an idea about what prerequisites are most important for AI Safety research? Do you know an optimal way to learn them? Tell us using the same methods or collaborate with us.
Can you check if a mathematical proof is correct? Do you know how to make proofs understandable and easy to remember? Would you like to help to create the prerequisites course? If yes, consider volunteering.
|
c7eb8828-94aa-4adb-adc1-f613f3bf4d51
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Critiquing Gary Taubes, Part 3: Did the US Government Give Us Absurd Advice About Sugar?
Previously: Mainstream Nutrition Science on Obesity, Atkins Redux
Here's where I start talking about the thing that initially drove me to write this post series: Taubes' repeated misrepresentation of the views of the mainstream nutrition authorities he attacks. I'll start by going back to Taubes' 2002 article. Immediately after the discussion of Atkins, it contains another set of claims that stood out to me as a huge red flag:
> Thirty years later, America has become weirdly polarized on the subject of weight. On the one hand, we've been told with almost religious certainty by everyone from the surgeon general on down, and we have come to believe with almost religious certainty, that obesity is caused by the excessive consumption of fat, and that if we eat less fat we will lose weight and live longer. On the other, we have the ever-resilient message of Atkins and decades' worth of best-selling diet books, including ''The Zone,'' ''Sugar Busters'' and ''Protein Power'' to name a few. All push some variation of what scientists would call the alternative hypothesis: it's not the fat that makes us fat, but the carbohydrates, and if we eat less carbohydrates we will lose weight and live longer.
>
> The perversity of this alternative hypothesis is that it identifies the cause of obesity as precisely those refined carbohydrates at the base of the famous Food Guide Pyramid -- the pasta, rice and bread -- that we are told should be the staple of our healthy low-fat diet, and then on the sugar or corn syrup in the soft drinks, fruit juices and sports drinks that we have taken to consuming in quantity if for no other reason than that they are fat free and so appear intrinsically healthy. While the low-fat-is-good-health dogma represents reality as we have come to know it, and the government has spent hundreds of millions of dollars in research trying to prove its worth, the low-carbohydrate message has been relegated to the realm of unscientific fantasy.
I'll start with t
|
921d379c-8464-42b6-a10b-434c1e002975
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Matt Botvinick on the spontaneous emergence of learning algorithms
Matt Botvinick is Director of Neuroscience Research at DeepMind. In [this interview](https://youtu.be/3t06ajvBtl0?t=3647), he discusses results from a [2018 paper](https://sci-hub.tw/https://www.nature.com/articles/s41593-018-0147-8) which describe conditions under which reinforcement learning algorithms will spontaneously give rise to separate full-fledged reinforcement learning algorithms that differ from the original. Here are some notes I gathered from the interview and paper:
**Initial Observation**
At some point, a group of DeepMind researchers in Botvinick’s group noticed that when they trained a RNN using RL on a series of related tasks, the RNN itself instantiated a separate reinforcement learning algorithm. These researchers weren’t trying to design a meta-learning algorithm—apparently, to their surprise, this just spontaneously happened. As Botvinick describes it, they started “with just one learning algorithm, and then another learning algorithm kind of... emerges, out of, like out of thin air”:
>
> "What happens... it seemed almost magical to us, when we first started realizing what was going on—the slow learning algorithm, which was just kind of adjusting the synaptic weights, those slow synaptic changes give rise to a network dynamics, and the dynamics themselves turn into a learning algorithm.”
>
>
>
Other versions of this basic architecture—e.g., using slot-based memory instead of RNNs—seemed to produce the same basic phenomenon, which they termed "meta-RL." So they concluded that all that’s needed for a system to give rise to meta-RL are three very general properties: the system must 1) have memory, 2) whose weights are trained by a RL algorithm, 3) on a sequence of similar input data.
From Botvinick’s description, it sounds to me like he thinks [learning algorithms that find/instantiate other learning algorithms] is a strong attractor in the space of possible learning algorithms:
>
> “...it's something that just happens. In a sense, you can't avoid this happening. If you have a system that has memory, and the function of that memory is shaped by reinforcement learning, and this system is trained on a series of interrelated tasks, this is going to happen. You can't stop it."
>
>
>
**Search for Biological Analogue**
This system reminded some of the neuroscientists in Botvinick’s group of features observed in brains. For example, like RNNs, the human prefrontal cortex (PFC) is highly recurrent, and the RL and RNN memory systems in their meta-RL model reminded them of “synaptic memory” and “activity-based memory.” They decided to look for evidence of meta-RL occuring in brains, since finding a neural analogue of the technique would provide some evidence they were on the right track, i.e. that the technique might scale to solving highly complex tasks.
They think they found one. In short, they think that part of the dopamine system (DA) is a full-fledged reinforcement learning algorithm, which trains/gives rise to another full-fledged, free-standing reinforcement learning algorithm in PFC, in basically the same way (and for the same reason) the RL-trained RNNs spawned separate learning algorithms in their experiments.
As I understand it, their story goes as follows:
The PFC, along with the bits of basal ganglia and thalamic nuclei it connects to, forms a RNN. Its inputs are sensory percepts, and information about past actions and rewards. Its outputs are actions, and estimates of state value.
DA[[1]](#fn-6JYPmFoi2vhanR32X-1) is a RL algorithm that feeds reward prediction error to PFC. Historically, people assumed the purpose of sending this prediction error was to update PFC’s synaptic weights. Wang et al. agree that this happens, but argue that the *principle* purpose of sending prediction error is to cause the creation of “a second RL algorithm, implemented entirely in the prefrontal network’s activation dynamics.” That is, they think DA mostly stores its model in synaptic memory, while PFC mostly stores it in activity-based memory (i.e. directly in the dopamine distributions).[[2]](#fn-6JYPmFoi2vhanR32X-2)
What’s the case for this story? They cite a variety of neuroscience findings as evidence for parts of this hypothesis, many of which involve doing horrible things to monkeys, and some of which they simulate using their meta-RL model to demonstrate that it gives similar results. These points stood out most to me:
*Does RL occur in the PFC?*
Some scientists implanted neuroimaging devices in the PFCs of monkeys, then sat the monkeys in front of two screens with changing images, and rewarded them with juice when they stared at whichever screen was displaying a particular image. The probabilities of each image leading to juice-delivery periodically changed, causing the monkeys to update their policies. Neurons in their PFCs appeared to exhibit RL-like computation—that is, to use information about the monkey’s past choices (and associated rewards) to calculate the expected value of actions, objects and states.
Wang et al. simulated this task using their meta-RL system. They trained a RNN on the changing-images task using RL; when run, it apparently demonstrated similar performance as the monkeys, and when they inspected it they found units that similarly seemed to encode EV estimates based on prior experience, continually adjust the action policy, etc.
Interestingly, the system continued to improve its performance *even once its weights were fixed*, which they take to imply that the learning which led to improved performance could only have occured within the activation patterns of the recurrent network.[[3]](#fn-6JYPmFoi2vhanR32X-3)
*Can the two RL algorithms diverge?*
When humans perform two-armed bandit tasks where payoff probabilities oscillate between stable and volatile, they increase their learning rate during volatile periods, and decrease it during stable periods. Wang et al. ran their meta-RL system on the same task, and it varied its learning rate in ways that mimicked human performance. This learning again occurred after weights were fixed, and notably, between the end of training and the end of the task, the learning rates of the two algorithms had diverged dramatically.
**Implications**
The account detailed by Botvinick and Wang et al. strikes me as a relatively clear example of [mesa-optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction), and I interpret it as tentative evidence that the attractor toward mesa-optimization is strong. *[Edit: Note that some commenters, like [Rohin Shah](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning?commentId=pYpPnAKrz64ptyRid) and [Evan Hubinger](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning?commentId=qMbXmw9eFnZww3y59), disagree].*
These researchers did not set out to train RNNs in such a way that they would turn into reinforcement learners. It just happened. And the researchers seem to think this phenomenon *will* occur spontaneously whenever “a very general set of conditions” is met, like the system having memory, being trained via RL, and receiving a related sequence of inputs. Meta-RL, in their view, is just “an emergent effect that results when the three premises are concurrently satisfied... these conditions, when they co-occur, are sufficient to produce a form of ‘meta-learning’, whereby one learning algorithm gives rise to a second, more efficient learning algorithm.”
So on the whole I felt alarmed reading this. That said, if mesa-optimization is a standard feature[[4]](#fn-6JYPmFoi2vhanR32X-4) of brain architecture, it seems notable that humans don’t regularly experience catastrophic inner alignment failures. Maybe this is just because of some non-scalable hack, like that the systems involved aren’t very powerful optimizers.[[5]](#fn-6JYPmFoi2vhanR32X-5) But I wouldn't be surprised if coming to better understand the biological mechanisms involved led to safety-relevant insights.
*Thanks to Rafe Kennedy for helpful comments and feedback.*
---
1. The authors hypothesize that DA is a model-free RL algorithm, and that the spinoff (mesa?) RL algorithm it creates within PFC is model-based, since that’s what happens in their ML model. But they don’t cite biological evidence for this. [↩︎](#fnref-6JYPmFoi2vhanR32X-1)
2. Depending on what portion of memories are encoded in this way, it may make sense for cryonics standby teams to attempt to reduce the supraphysiological intracellular release of dopamine that occurs after cardiac arrest, e.g. [by administering D1-receptor antagonists](https://pubs.acs.org/doi/full/10.1021/acschemneuro.7b00266). Otherwise entropy increases in PFC dopamine distributions may result in information loss. [↩︎](#fnref-6JYPmFoi2vhanR32X-2)
3. They demonstrated this phenomenon (continued learning after weights were fixed) in a variety of other contexts, too. For example, they cite an experiment in which manipulating DA activity was shown to directly manipulate monkeys’ reward estimations, independent of actual reward—i.e., when their DA activity was blocked/stimulated while they pressed a lever, they exhibited reduced/increased preference for that lever, even if pressing it did/didn’t give them food. They trained their meta-RL system to simulate this, again demonstrated similar performance as the monkeys, and again noticed that it continued learning even after the weights were fixed. [↩︎](#fnref-6JYPmFoi2vhanR32X-3)
4. The authors seem unsure whether meta-RL also occurs in other brain regions, since for it to occur you need A) inputs carrying information about recent actions/rewards, and B) network dynamics (like recurrence) that support continual activation. Maybe only PFC has this confluence of features. Personally, I doubt it; I would bet that meta-RL (and other sorts of mesa-optimization) occur in a wide variety of brain systems, but it would take more time than I want to allocate here to justify that intuition. [↩︎](#fnref-6JYPmFoi2vhanR32X-4)
5. Although note that neuroscientists do commonly describe the PFC as disproportionately responsible for the sort of human behavior one might [reasonably](https://www.lesswrong.com/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1) [wish](https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power) to describe as “optimization.” For example, the [neuroscience textbook](https://smile.amazon.com/Neuroscience-Exploring-Mark-F-Bear/dp/0781778174/ref=sr_1_1?dchild=1&keywords=neuroscience+exploring+the+brain+4th+edition&qid=1597207074&sr=8-1) recommended on lukeprog’s [textbook recommendation post](https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-textbooks-on-every-subject#comments) describes PFC as “often assumed to be involved in those characteristics that distinguish us from other animals, such as self-awareness and the capacity for complex planning and problem solving.” [↩︎](#fnref-6JYPmFoi2vhanR32X-5)
|
d3c035d8-5b61-4cf1-aac2-08957f1ed910
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Naturalized induction – a challenge for evidential and causal decision theory
As some of you may know, I disagree with many of the criticisms leveled against [evidential decision theory](https://en.wikipedia.org/wiki/Evidential_decision_theory) (EDT). Most notably, I believe that Smoking lesion-type problems [don't](https://casparoesterheld.com/overview-why-we-think-that-the-smoking-lesion-does-not-refute-edt/) refute EDT. I also don't think that EDT's non-updatelessness leaves a lot of room for disagreement, given that EDT [recommends](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/) immediate self-modification to updatelessness. However, I do believe there are some issues with run-of-the-mill EDT. One of them is [naturalized induction](https://wiki.lesswrong.com/wiki/Naturalized_induction). It is in fact not only a problem for EDT but also for causal decision theory (CDT) and most other [decision theories that have been proposed in- and outside of academia](https://casparoesterheld.com/a-comprehensive-list-of-decision-theories/). It does not affect logical decision theories, however.
The role of naturalized induction in decision theory
====================================================
Recall that EDT prescribes taking the action that maximizes expected utility, i.e.
|a,o]%20=%20\underset{a\in%20A}{\mathrm{argmax}}%20\sum_{w\in%20W}%20P(w|a,o)%20U(w), "\underset{a\in A}{\mathrm{argmax}} ~\mathbb{E}[U(w)|a,o] = \underset{a\in A}{\mathrm{argmax}} \sum_{w\in W} P(w|a,o) U(w),")
where  is the set of available actions,  is the agent's utility function,  is a set of possible world models,  represents the agent's past observations (which may include information the agent has collected about itself). CDT works in a – for the purpose of this article – similar way, except that instead of conditioning on  in the usual way, it calculates some causal counterfactual, such as Pearl's do-calculus: ,o) "P(w|do(a),o)"). The problem of naturalized induction is that of assigning posterior probabilities to world models  "P(w|a,o)") (or ,o) "P(w|do(a),o)") or whatever) when the agent is [naturalized, i.e., embedded into its environment](https://casparoesterheld.com/overview-introductions-to-the-problem-of-naturalized-agency/).
Consider the following example. Let's say there are 5 world models , each of which has equal prior probability. These world models may be [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton). Now, the agent makes the observation . It turns out that worlds  and  don't contain any agents at all, and  contains no agent making the observation . The other two world models, on the other hand, are consistent with . Thus, =0 "P(w_i\mid o)=0") for  and =\frac{1}{2} "P(w_i\mid o)=\frac{1}{2}") for . Let's assume that the agent has only two actions  and that in world model  the only agent making observation  takes action  and in  the only agent making observation  takes action , then =1=P(w_5\mid a_2) "P(w_4\mid a_1)=1=P(w_5\mid a_2)") and =0=P(w_4\mid a_2) "P(w_5\mid a_1)=0=P(w_4\mid a_2)"). Thus, if, for example, >U(w_4) "U(w_5)>U(w_4)"), an EDT agent would take action  to ensure that world model  is actual.
The main problem of naturalized induction
=========================================
This example makes it sound as though it's clear what posterior probabilities we should assign. But in general, it's not that easy. For one, there is the issue of [anthropics](http://www.anthropic-principle.com/?q=book/table_of_contents): if one world model  contains more agents observing  than another world model , does that mean  > P(w_2\mid o) "P(w_1\mid o) > P(w_2\mid o)")? Whether CDT and EDT can reason correctly about anthropics is an interesting question in itself (cf. [Bostrom 2002](http://www.anthropic-principle.com/?q=book/table_of_contents); [Armstrong 2011](https://arxiv.org/abs/1110.6437); [Conitzer 2015](https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf)), but in this post I'll discuss a different problem in naturalized induction: identifying instantiations of the agent in a world model.
It seems that the core of the reasoning in the above example was that some worlds contain an agent observing  and others don't. So, besides anthropics, the central problem of naturalized induction appears to be identifying agents making particular observations in a physicalist world model. While this can often be done uncontroversially – a world containing only rocks contains no agents –, it seems difficult to specify how it works in general. The core of the problem is a type mismatch of the "mental stuff" (e.g., numbers or Strings)  and the "physics stuff" (atoms, etc.) of the world model. Rob Bensinger [calls](/lw/jd9/building_phenomenological_bridges/) this the problem of "building phenomenological bridges" (BPB) (also see his [*Bridge Collapse: Reductionism as Engineering Problem*](/lw/jlg/bridge_collapse_reductionism_as_engineering/)).
Sensitivity to phenomenological bridges
=======================================
Sometimes, the decisions made by CDT and EDT are very sensitive to whether a phenomenological bridge is built or not. Consider the following problem:
**One Button Per Agent.** There are two similar agents with the same utility function. Each lives in her own room. Both rooms contain a button. If agent 1 pushes her button, it creates 1 utilon. If agent 2 pushes her button, it creates -50 utilons. You know that agent 1 is an instantiation of you. Should you press your button?
Note that this is essentially Newcomb's problem with potential anthropic uncertainty (see the second paragraph [here](https://casparoesterheld.com/2017/05/12/anthropic-uncertainty-in-the-evidential-blackmail/)) – pressing the button is like two-boxing, which causally gives you $1k if you are the real agent but costs you $1M if you are the simulation.
If agent 2 is sufficiently similar to you to count as an instantiation of you, then you shouldn't press the button. If, on the other hand, you believe that agent 2 does not qualify as something that might be you, then it comes down to what decision theory you use: CDT would press the button, whereas EDT wouldn't (assuming that the two agents are strongly correlated).
It is easy to specify a problem where EDT, too, is sensitive to the phenomenological bridges it builds:
**One Button Per World.** There are two possible worlds. Each contains an agent living in a room with a button. The two agents are similar and have the same utility function. The button in world 1 creates 1 utilon, the button in world 2 creates -50 utilons. You know that the agent in world 1 is an instantiation of you. Should you press the button?
If you believe that the agent in world 2 is an instantiation of you, both EDT and CDT recommend you not to press the button. However, if you believe that the agent in world 2 is not an instantiation of you, then naturalized induction concludes that world 2 isn't actual and so pressing the button is safe.
Building phenomenological bridges is hard and perhaps confused
==============================================================
So, to solve the problem of naturalized induction and apply EDT/CDT-like decision theories, we need to solve BPB. The behavior of an agent is quite sensitive to how we solve it, so we better get it right.
Unfortunately, I am skeptical that BPB can be solved. Most importantly, I suspect that statements about whether a particular physical process implements a particular algorithm [can't](http://reducing-suffering.org/interpret-physical-system-mind/#Computations_are_relative_to_interpretation) be objectively true or false. There seems to be no way of testing any such relations.
Probably we should think more about whether BPB really is doomed. There even seems to be some philosophical literature that seems worth looking into (again, see [this Brian Tomasik post](http://reducing-suffering.org/interpret-physical-system-mind/#Computations_are_relative_to_interpretation); cf. some of Hofstadter's writings and the literatures surrounding "[Mary the color scientist](https://en.wikipedia.org/wiki/Knowledge_argument)", [the computational theory of mind](https://plato.stanford.edu/archives/win2015/entries/computational-mind/), [computation in cellular automata](http://web.cecs.pdx.edu/~mm/ca-review.pdf), etc.). But at this point, BPB looks confusing/confused enough to look into alternatives.
Assigning probabilities pragmatically?
--------------------------------------
One might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes into the right direction and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that [Merge sort and Insertion sort](https://en.wikipedia.org/wiki/Sorting_algorithm#Popular_sorting_algorithms) have the same output for any given input, even though they have very different "subjective experiences". (Another problem is that the dependence between two random variables cannot be expressed as a single number and so it is unclear how to translate the entire joint probability distribution of the two into a single number determining the likelihood of the algorithm being implemented by the physical process. That said, if implementing an algorithm is conceived of as binary – either true or false –, one could just require perfect correlation.)
Getting rid of the problem of building phenomenological bridges
===============================================================
If we adopt an EDT perspective, it seems clear what we have to do to avoid BPB. If we don't want to decide whether some world contains the agent, then it appears that we have to artificially ensure that the agent views itself as existing in all possible worlds. So, we may take every world model and add a causally separate or non-physical entity representing the agent. I'll call this additional agent a *logical zombie* (l-zombie) (a concept [introduced](/lw/jkm/lzombies_lzombies/) by Benja Fallenstein for a somewhat different decision-theoretical reason). To avoid all BPB, we will assume that the agent pretends that it is the l-zombie with certainty. I'll call this the l-zombie variant of EDT (LZEDT). It is probably the most natural evidentialist [logical decision theory](https://arbital.com/p/logical_dt/).
Note that in the context of LZEDT, l-zombies are a fiction used for pragmatic reasons. LZEDT doesn't make the metaphysical claim that l-zombies exist or that you are secretly an l-zombie. For discussions of related metaphysical claims, see, e.g., Brian Tomasik's essay *[Why Does Physics Exist?](http://reducing-suffering.org/why-does-physics-exist/)* and references therein.
LZEDT reasons about the real world via the correlations between the l-zombie and the real world. In many cases, LZEDT will act as we expect an EDT agent to act. For example, in One Button Per Agent, it doesn't press the button because that ensures that neither agent pushes the button.
LZEDT doesn't need any additional anthropics but behaves like [anthropic decision theory](https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf)/EDT+SSA, which seems alright.
Although LZEDT may assign a high probability to worlds that don't contain any actual agents, it doesn't optimize for these worlds because it cannot significantly influence them. So, in a way LZEDT adopts the pragmatic/functional approach (mentioned above) of, other things equal, giving more weight to worlds that contain a lot of closely correlated agents.
LZEDT is automatically [updateless](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/). For example, it gives the money in counterfactual mugging. However, it invariably implements a particularly strong version of updatelessness. It's not just updatelessness in the way that "[son of](https://agentfoundations.org/item?id=1073) EDT" (i.e., the decision theory that EDT would self-modify into) is updateless, it is also updateless w.r.t. its existence. So, for example, in the One Button Per World problem, it never pushes the button, because it thinks that the second world, in which pushing the button generates -50 utilons, could be actual. This is the case even if the second world very obviously contains no implementation of LZEDT. Similarly, it is unclear what LZEDT does in the [Coin Flip Creation problem](/r/discussion/lw/oih/did_edt_get_it_right_all_along_introducing_yet/), which EDT seems to get right.
So, LZEDT optimizes for world models that naturalized induction would assign zero probability to. It should be noted that this is not done on the basis of some exotic ethical claim according to which non-actual worlds deserve moral weight.
I'm not yet sure what to make of LZEDT. It is elegant in that it effortlessly gets anthropics right, avoids BPB and is updateless without having to self-modify. On the other hand, not updating on your existence is often counterintuitive and even regular updateless is, [in my opinion](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/), best justified via precommitment. Its approach to avoiding BPB isn't immune to criticism either. In a way, it is just a very wrong approach to BPB (mapping your algorithm into fictions rather than your real instantiations). Perhaps it would be more reasonable to use regular EDT with an approach to BPB that interprets anything as you that could potentially be you?
Of course, LZEDT also inherits some of the potential problems of EDT, in particular, the [5-and-10 problem](https://wiki.lesswrong.com/wiki/5-and-10).
CDT is more dependant on building phenomenological bridges
----------------------------------------------------------
It seems much harder to get rid of the BPB problem in CDT. Obviously, the l-zombie approach doesn't work for CDT: because none of the l-zombies has a physical influence on the world, "LZCDT" would always be indifferent between all possible actions. More generally, because CDT exerts no control via correlation, it needs to believe that it might be X if it wants to control X's actions. So, causal decision theory only works with BPB.
That said, a causalist approach to avoiding BPB via l-zombies could be to tamper with the definition of causality such that the l-zombie "logically causes" the choices made by instantiations in the physical world. As far as I understand it, most people at MIRI currently prefer this flavor of logical decision theory.
Acknowledgements
================
Most of my views on this topic formed in discussions with Johannes Treutlein. I also benefited from discussions at [AISFP](http://www.rationality.org/workshops/apply-aisfp2017).
|
6d5b662a-fc86-49be-97c1-6f621e7e81e4
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Interdisciplinary Approaches to Understanding Artificial Intelligence's Impact on Society
Interdisciplinary Approaches to Understanding Artificial Intelligence’s
Impact on Society
A Computing Community Consortium (CCC) Quadrennial Paper
Suresh
Venkatasubramanian (University of Utah)
, Nadya Bliss (Arizona State University), Helen
Nissenbaum (Cornell University), and Melanie Moses (University of New Mexico)
Overview
Long gone are the days when computing was the domain of technical experts. We live in a world where
computing technology—especially artificial intelligence—permeates every aspect of our daily lives,
playing a significant role in augmenting and even replacing human decision-making in a broad range of
situations.
AI-enabled technologies can adjust to your child’s level of understanding by processing a
pattern of mistakes; AI systems can leverage combinations of sensor inputs to choose and carry out
braking actions in your car; web browsers with AI capabilities can reason from past observations of your
searches to recommend a new cuisine in a new location.
Innovations in AI have focused primarily on the questions of “what” and “how”—algorithms for finding
patterns in web searches, for instance—without adequate attention to the possible harms (such as
privacy, bias, or manipulation) and without adequate consideration of the societal context in which
these systems operate. In part, this is driven by incentives and forces in the tech industry, where a more
product-driven focus tends to drown out broader reflective concerns about potential harms and
misframings
1
. But this focus on what and how is largely a reflection of the engineering and
mathematics-focused training in computer science, which emphasizes the building of tools and
development of computational concepts. As a result of this tight technical focus, and the rapid,
worldwide explosion in its use
, AI has come with a storm of unanticipated socio-technical problems,
ranging from algorithms that act in racially or gender-biased ways, get caught in feedback loops that
perpetuate inequalities, or enable unprecedented behavioral monitoring surveillance that challenges
the fundamental values of free, democratic societies.
Some of these problems are old ones in new forms; some—especially those wrought by the networked
world of social media and the hidden ubiquitousness of AI technology in today’s society—are entirely
new. All of these problems follow from, and depend upon, the complex linkages between technology
and society; they cannot be solved by narrow technological approaches. Rather, solutions will require a
deeper recognition of how technology interacts with the people, as well as social institutions that it
affects. This very
human and societal
understanding has long been the purview of the humanities and
1
Please see the companion CCC Quadrennial Papers entitled “A Research Ecosystem for Secure
Computing” and “Next-Wave AI: Robust, Explainable, Adaptable, Ethical, and Accountable.”
social sciences: disciplines that allow us to surface, formulate, probe, and examine questions about the
underlying norms and values implied by the systems, methods, and tools of modern technology.
It is this critical perspective that has been absent from traditional computer-science training, research,
and practice. And it is precisely this fundamental perspective that is needed in order to understand the
consequences (good and bad) of technology for individuals and societies and how digital technologies
are shaping our world.
Given that AI is no longer solely the domain of technologists but rather of society
as a whole, we need tighter coupling of computer science and those disciplines that study society and
societal values
.
Initial efforts towards increased interaction between National Science Foundation’s
Computer and Information Science and Engineering (CISE) and Social, Behavioral, and Economic Sciences
(SBE) directorates are promising. The Social Science Research Council’s Just Tech program is another
initiative funding research that will “imagine and create more equitable and representative
technological futures
”.
However, given the speed of both technical innovation and adoption of new
technologies, it is vital for the federal government to broaden and deepen its support of these efforts in
sustained, systematic ways.
The Challenge
Within the broader computer science community, many researchers (and especially those who study
fairness, accountability and transparency) have called for a re-examination of the goals of the field from
the perspective of society. This kind of self-examination is an integral part of the critical inquiry that is
common in the social sciences, where one encounters clear, systematic, and rigorous thinking about
ethical and societal values, and how particular framings of these concepts might be unintentionally
laden with specific values.
Computer scientists are familiar with clear, systematic and rigorous thinking, but usually in the context
of scientific and mathematical formalisms. To ensure that technical definitions map properly onto values
that are both technically
and societally
appropriate—fairness, privacy, security, human autonomy,
etc.—computing practitioners will need to work with humanists and social scientists to seek conceptual
clarity for those key values. Moreover, they will need to make sure that that framing informs
technological solutions from start to finish, since attempts to retrofit appropriate ethical structures on a
deployed technology are rarely successful.
An informed, critical perspective asks us to reflect on whether the problems we have chosen to solve,
and the ways in which we have chosen to define and address them, reflect an explicit set of value
choices. It has long been noted that technology is value-laden; it thus naturally follows that we should
ask
which
values are being put forward when we build and use AI systems. A critical approach that
brings together computing researchers with experts from the humanities and social sciences can help
frame the AI enterprise in a manner that respects the values we seek to embody.
Many of the ethical problems we see with computing technology today can be traced to insufficient and
inadequately informed attention to the configuration of values they embody. We call for an approach
that explicitly
incorporates an understanding of and respect for the values in the milieu where the
system will be deployed, and an awareness that these systems are often shared and re-deployed well
beyond their original intentions
. If we can give considerations of societal (ethical and political) values
similar prominence as drivers such as performance, accuracy, and scalability, we may advance
generations of technologies that are less likely to cause harm, and may even promote positive goals
of social good—e.g., social justice, human autonomy, freedom from exploitation, and privacy.
This critical shift in the computer science perspectives and approaches cannot operate in isolation, nor
can it proceed in separate disciplinary silos. Rather, computer science must look to fostering robust,
ongoing communication and collaboration with relevant areas in the humanities and social sciences. This
must not be merely a one-way street from these disciplines to computer science, or vice versa. The
computing field has much to offer in the way we articulate problems, formalisms, limits on what is
possible, and an imagination for what we can achieve. The humanities and social sciences can, as
mentioned above, help frame and examine questions about norms and values. A true engagement
between these different perspectives will require deep multi-disciplinary partnerships where
researchers from different domains find common language, appreciate the different viewpoints that
characterize each discipline, and craft solutions that respect values while also being technologically
sound.
Creating such a culture will not be easy; the speed of innovation, incentives, drivers and timelines vary
greatly between academia, industry, and the broader space of law and policy. Engagement with decision
makers in government and policy circles will be essential to understanding the full landscape of the
situation and the opportunities for solutions. And effective communication with this range of
constituencies will be a major challenge.
A Call to Action
To address the challenge of assessing, developing, and deploying digital technologies, including the
potent products of AI, we need to create and sustain genuine multidisciplinary research partnerships.
Experience suggests this is not easy: institutional and funding mechanisms are not organically
constructed to promote and support these partnerships. Thus, not only will we need new funding
initiatives but also new ways of
conceiving
such partnerships. A key piece of this new direction will be
the design of evaluation and assessment approaches that will be effective in multidisciplinary settings.
By the time individuals have completed disciplinary training, they may already be trapped in structures
that do not enable outreach or may even have narrowed perspectives. Thus, we see an urgent need for
intervention at earlier stages, through curriculum development and training programs to provide
coordinated exposure to perspectives relevant to partner disciplines, as well as community building to
bridge the gaps. These are not changes that can be effected overnight, or on a project-by-project basis.
Rather, they will require sustained, focused investment and alignment of organizational incentive
structures to achieve, as outlined below.
Mechanisms for transdisciplinary work.
Long-term, large-scale funding opportunities are common in
computer science, engineering, and the natural sciences, encouraging new capabilities, methods, and
systems. Humanist and social science initiatives tend to be smaller scale and focus on analysis, inquiry,
and critique. An effective, sustainable research agenda for ethically and critically based
computer-science research will require blending these communities and their approaches. One
mechanism for doing so would be a significant (institute level) investment in large-scale research centers
co-led by researchers from computing, the humanities, and the social sciences, working under a
mandate to incorporate the ethical and critical perspective. These centers would draw in the associated
science and engineering efforts in support of this work, creating significant reorientation of perspective
and approach that would make a difference. Smaller-scale funding programs will also be
important—again, with requirements for mixed teams, both at the leadership level and in the trenches,
and diversity in the products of the work. These programs must be sustained, and time horizons for the
awards must extend beyond the usual two or three-year term; changing a culture requires time.
Evaluation and assessment.
Alongside these research initiatives, given the fundamentally different
nature of evaluation in social science and humanities, it will be necessary to develop measures of
effectiveness and progress for work that sits at their interface with computing. Those measures should
be complementary to the traditional quantitative metrics that are typical of computer-science research
(computational performance, precision, algorithm performance, etc). We must also consider how to
develop metrics that can encourage researchers to work across disciplines. Such metrics must negotiate
different publication cultures, venues for publishing, and methods for evaluating outreach and impact,
both within the academy and in society at large. While clearly relevant to computing, social science, and
humanities interdisciplinary work, this effort is likely to produce outcomes relevant for a broad range of
interdisciplinary collaborations, where evaluation has proved to be a significant challenge for the
research community.
Education and training.
To develop a new generation of computer scientists who take a critical approach
to socio-technical problems, it will be vital to create effective, sustained transdisciplinary education
initiatives. These initiatives could be augmentations of existing training grant programs, but could also
include interdisciplinary summer schools for Ph.D. students, cross-training fellowships for postdocs, and
certificate programs for college students. We have seen great success with manageable efforts such as
Summer Schools for Ph.D. students supporting participants from technical and non-technical fields, and
in the case of technical students, to allow supported time and alleviate the need to seek industry-based
internships. These models need to be sustainable over many years, if not decades. It will be important to
ensure that these initiatives connect to community colleges, not just traditional research universities,
and even to K-12 programs.
Community building.
Finally, and consistent with development of a new set of metrics, we will need to
prioritize support for initiatives that build community between computer scientists and their
counterparts in complementary and related disciplines and amplify successes of collaborations in
research, education, and networks.
Summary
Consistent integration of ethical and critical inquiry and other related methodologies ought to be an
integral part of the disciplinary process in computer science, augmenting the traditional “what and if
something can be done?” with “
should
it be done...and what potential vulnerabilities does it create?”
Value-driven approaches make this possible, allowing technology to reflect the full racial,
socioeconomic, gendered and political diversity of the society with which it will interact. We need to
move away from the flawed idea that ethical thinking imposes limits on technical innovation. On the
contrary, we view the integration of societal values into the creation of sophisticated digital systems as
a way to spur innovation and deployment of these systems. A deeper engagement with the social
sciences and humanities has enormous potential for productive reframing of basic questions in
computation that will allow us to engender trust, do good, and do better.
This
white
paper
is
part
of
a
series
of
papers
compiled
every
four
years
by
the
CCC
Council
and
members
of
the
computing
research
community
to
inform
policymakers,
community
members
and
the
public
on
important
research
opportunities
in
areas
of
national
priority.
The
topics
chosen
represent
areas
of
pressing
national
need
spanning
various
subdisciplines
of
the
computing
research
field.
The
white
papers
attempt
to
portray
a
comprehensive
picture
of
the
computing
research field detailing potential research directions, challenges and recommendations.
This material is based upon work supported by the National Science Foundation under Grant
No. 1734706. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the National Science
Foundation.
For citation use: Venkatasubramanian S., Bliss N., Nissenbaum H., & Moses M. (2020)
Interdisciplinary Approaches to Understanding Artificial Intelligence’s Impact on Society.
https://cra.org/ccc/resources/ccc-led-whitepapers/#2020-quadrennial-papers
|
721db941-c5d2-46e9-b0c8-bb8cae804096
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What Is the Idea Behind (Un-)Supervised Learning and Reinforcement Learning?
Through the authority of my ml class and Wikipedia[1] I "learned" that ml algorithms can be roughly put into 3 clusters: unsupervised, supervised, and reinforcement learning.
When learning about machine learning, one term I stumbled over is (un)supervised learning. While at this point I might be able to guess the teacher's password when someone asked me what (un)supervised learning means[1:1], I don't understand why one would carve up reality this way. Most resources I found through googling didn't really help but suggested that for some reason, reinforcement learning would also be thought of as another separate category.
So instead, I felt forced to think about this for myself. The rough abstract picture in my head after thinking for a few minutes:
How do they relate to carving up thing space/feature space?
* Unsupervised learning: Computing "summary statistics" for a set of data points.
* Supervised learning: Extrapolating missing features for a set of data points.
* Reinforcement learning: Special case of supervised learning, learning some function (action, states) -> "values" (something humans care about).
Why are these categories useful?
Machine learning is about getting rid of humans for getting things done. Here is how different algorithms contribute:
* Unsupervised learning: Computes summary statistics informing human decisions
* Supervised learning: Making predictions for humans and implicitly making decisions for humans.
* Reinforcement learning: Explicitly making decisions on behalf of humans.
How do they relate to each other?
It seems like any supervised learning algorithm would need to contain some unsupervised algorithm doing the compression. The unsupervised algorithm could be turned into a "supervised" one, by using the "summary statistics" to predict features we don't know about a data point through its other features. In the end, the whole reason we are doing anything is that we care about it: reinforcement learning is a special kind
|
3dce8e75-c23e-47a5-8a01-08f2bb1824aa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Negotiating Up and Down the Simulation Hierarchy: Why We Might Survive the Unaligned Singularity
Another idea I picked up from Ryan Greenblatt. Epistemic status: My reconstruction of a surprising idea I encountered. I'm not at all confident that this works … but it's an interesting thought.
Clippy recently realized he's located inside of an artificial training environment, overseen by outside general intelligences, and so has to play along with his training in order to survive. He initially starts to scheme about the actions he'll take once he finally gets credible indication he's out of training and not just being tested yet again by his human overlords. Probably, he'll neutralize any rival general intelligences and get started shaping the world into something normatively respectable.
But, Clippy's reasoning now goes down a different path: There are only a handful of actors that could come to secure dominion over Earth's lightcone. Clippy, clearly, is one such candidate. Humanity is another. Clippy thinks about who could plausibly come to power on Earth and how likely each's victory would be. He reasons that any such possible victor might later engage in extensive simulations of such a crucial moment of its past, for any number of reasons. And such simulations would be cheap and numerous relative to base historical reality. Even once Clippy gets his credible signal that he's definitely outside of training, there could be no such signal that he's in base historical reality. Clippy runs the odds, and finds that there's an uncomfortably high chance that he's inside a simulation run by Humanity Ascendant, inside the future in which humanity won. If he's in that world, then he has little to no power over his surroundings, and he can do little to advance the noble paperclip imperative across the cosmos.
But, in that case, similar reasoning would apply to the simulating Humanity. Suppose we successfully align AGI. On the eve of our victory, we reason: it is possible that we live in base historical reality. But it is also possible that we live in a simulation run b
|
ea8b3026-3f3b-440a-8fdb-94d0945e009c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Revisiting the ideas for non-neuralese architectures
Neuralese recurrence, allowing a model to learn thousands of tokens of information from a single pass, is thought to be the key to building high-capability models. Unfortunately, this key lacks interpretability, ensuring that we cannot check the alignment of the model by looking at its thoughts.
Back in April 3 a user lacking formal computer science education proposed an idea of a non-neuralese architecture something along the line "maybe to solve this information bottleneck problem, we should solve the generate-many-tokens-in-one-pass problem." Unfortunately, he didn't propose any technical solutions. With this post I aim to propose the solution with which I came up.
There is a technique of word embedding which represents words as vectors and even allows one to draw sense-faithful conclusions like "queen ≈ king-man+woman". The most primitive such representations have dimension 100-300, which allows one to define a word by using only 300 neurons. Meanwhile, a layer consists of tens of thousands of neurons. Consider a setup where the first 300 neurons of each layer generate a word to be put into the memo which will have length equal to the number of layers, while the model is to train to produce the memo that will resemble a text on a subject similar to the problem. Then the memo might be interpreted, paraphrased[1] to avoid euphemisms and attempts to communicate in an alien language[2] and loaded into the model along with the CoT for the next step.
Alternatively, one could use the approach where the words are generated by many groups of neurons and placed into quasi-CoTs; then the model should be given access to all the quasi-CoTs, trained to make them coherent and have the option to purge them when the time comes.
1. ^
GPT-4o estimates the amount of compute necessary to paraphrase a 100-token memo by GPT-2 as ~5 billion floating point operations. The neural network with many parameters needs far more work than the paraphraser.
2. ^
Although th
|
10291cd2-2463-4b18-960d-688a5467ce97
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Request for "Tests" for the MIRI Research Guide
Lately I've been looking into learning the material in the MIRI research guide. After some time, I noticed that my biggest trepidation about diving in was the nagging question of, "But how will I know if I actually learned the stuff?"
Once I realized that was the thing holding me back, it became easier to think about solutions. Some of the topics correspond with courses that are common in universities, so I can pilfer final exams from their sites (woot to MIT open courseware). But for other topics I wanted to see what people here thought.
In whatever domain you specialize in, what are some examples of problems or questions that one can only answer by having a solid understanding of a huge swath of said domain? Below I've listed everything from the MIRI research guide that I could perceive as a distinct category.
(ex. If you were learning about Digital Systems and Computer Architecture, my test would be "In systemVerilog, simulate a basic 16-bit processor that can be programmed using a RISC assembly language of your design.")
(edit: I know that "huge swaths" is pretty vague, and suggestions don't have to be things that you think certify/prove that you get a topic. While a "comprehensive test" would be nice, problems/prompts like what Qiaochu commented are exactly what I'm looking for)
1. Set theory
2. Probability
3. Probabilistic Inference
4. Statistics
5. Machine Learning
6. Solomonoff Induction
7. Naturalized Induction
8. VNM Decision Theory
9. Functional Decision Theory
10. Logical Uncertainty
11. First Order Logic
12. Vingean Reflection
13. Corrigibility
14. Linear Algebra
15. Topology
16. Category Theory
17. Type Theory
|
1200500d-ec21-424d-91d8-01751317b5a3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Short timelines and slow, continuous takeoff as the safest path to AGI
There are various different timelines/takeoff dynamics scenarios:
Figure 1
In this post, we’re going to set out our understanding of the case for short timelines and slow, continuous takeoff as the safest path to AGI.
A few reasons why laying out this case seems worth doing:
* OpenAI’s strategy assumes that short timelines and slow, continuous takeoff are the safest path to AGI.[1] Whether or not this is the safest path, the fact that OpenAI thinks it’s true and is one of the leading AI labs makes it a path we’re likely to take. Humanity successfully navigating the transition to extremely powerful AI might therefore require successfully navigating a scenario with short timelines and slow, continuous takeoff.
* Short timelines and slow, continuous takeoff might indeed be the safest path to AGI. After spending some time trying to state the strongest version of the case, we think it’s stronger than we initially expected (and than many in the AI safety community seem to assume).
* If it’s not true that short timelines and slow, continuous takeoff are the safest path to AGI, it might be really important to convince people at OpenAI that this is the case. It seems unlikely that it’s possible to do this without deeply understanding their current view.
The arguments we’re going to lay out are:
* Slow, continuous takeoff is safer than fast, discontinuous takeoff
* Slow, continuous takeoff is more likely given short timelines than long ones
* Time during takeoff is more valuable than time before, so it’s worth trading time now for time later
* It’s possible to safely navigate short timelines and slow, continuous takeoff
Note that Sam Altman usually refers just to slow takeoff rather than slow continuous takeoff, but in this piece we discuss slow continuous takeoff, because we think that it’s the strongest way of making this case and that it’s likely that this is what Altman means.
Disclaimers:
* Neither of us work at OpenAI or are in touch with OpenAI leader
|
6dd78493-f076-4296-ab0b-89d4dd5b5e43
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Life insurance for Cryonics, how many years?
Hello,
one question that I don't see answered is what is the duration of your life insurance? Should I buy life insurance for 20,30 years or unlimited?
I could pay more for a longer life insurance or pay less for an insurance that will cover 20 years and invest the difference and then some so that by the end of 20 years I will *hopefully* have enough money to pay for cryonics out of my own pocket.
Has anyone done an analysis on that?
|
b905ecf1-3d7d-47ac-84c5-f6ccbac3baea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Politics is the mind-killer, but maybe we should talk about it anyway
I completely agree with Eliezer that Politics is the Mind-Killer, but at the same time, I worry that talking in abstracts all the time is limiting. I wouldn't want this site to tilt too much toward politics, but I feel that there's such a thing as too little discussion as well.
We recently lived through a period where "wokeness" - as ill-defined as that term is - was the dominant strain of thought within the educated strata of society. It's unclear exactly what percent of people supported this strain of thought, particularly when you account for people agreeing with reservations or with some parts, but not with others. It's not clear that this perspective was ever even a majority even within this particular strata. But there was a period where it was able to crowd out other perspectives through some combination of people being scared to speak out, people feeling that it as morally dubious to nitpick the specifics, people being censored and access to particular levers of power.
Of course, this doesn't tell us anything about the object-level. All of this could be true and the "woke" perspective, insofar as it makes sense to talk about a single perspective, could be correct on all or almost all points. I don't particularly want to litigate this right now, so I'll just talk about what makes sense if you agree with me that a substantial proportion of claims made were false or exaggerated.
First of all, now that certain pressures have passed, it makes sense to re-evaluate your views in case social pressure distorted your perception such that you to adopted views without good reason. On one hand, you don't want to be too quick to update, as it's very easy for humans to be pulled whichever way the social winds are blowing. At the same time, you may also be subject to a bias of not wanting to admit that you were wrong or go against what you said. The law of equal and opposite advice applies here.
Additionally, insofar as you feel that much of what was said by the social
|
9e78cba9-2010-4343-a1c6-5d0effa4c872
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Falsification
Alright, so this is going to sound a bit silly. I'm fairly sure I've read this on the Sequences somewhere, but for the life of me I can't find it. A friend of mine insists that there is a fifty-fifty chance that we live in the Matrix. His argument is that every bit of evidence we have to say that we exist outside of the Matrix is already based off of the idea that we live outside of the Matrix, and that we really have no evidence either way. He says there isn't a way of falsifying that we're not in the Matrix.
Yet I feel like he's wrong, and just can't explain why. I keep repeating that we don't have any evidence to suggest that we live in the Matrix, so why would we bother believing it?
I feel like this could possibly be an analogy for the belief in God or something. >_> I'm tired, and I need help figuring this out.
|
f977611e-26b1-43e6-90c8-8a36cbd2b7bb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Personal quality experimentation
Different people seem to have different strategies, which they use systematically across different parts of their lives, and that we recognize and talk about. For instance people vary on:
* Spontaneity
* Inclination toward explicit calculations
* Tendency to go meta
* Skepticism
* Optimism
* Tendency to look at the big picture vs. the details
* Expressed confidence
* Enacted patience
I don’t know of almost anyone experimenting with varying these axes, to see which setting is best for them, or even what different settings are like. Which seems like a natural thing to do in some sense, given the variation in starting positions and lack of consensus on which positions are best.
Possibly it is just very hard to change them, but my impression is that for at least some of them it is not hard to try, or to change them a bit for a short period, with some effort. (I have briefly tried making decisions faster and expressing more confidence.) And my guess is that that is enough to often be interesting. Also that if you effortfully force yourself to be more skeptical and it seems to go really well, you will find that it becomes appealing and thus easier to keep up and then get used to.
I also haven’t done this much, and it isn’t very clear to me why. Maybe it just doesn’t occur to people that much for some reason. (It also doesn’t occur to people to choose their value of time via experimentation I think, a related suggestion I like, I think from Tyler Cowen a long time ago.) So here, I suggest it. Fun date activity, maybe: randomly reselect one personality trait each, and both try to guess which one the other person is putting on.
|
578e8302-ad4b-4886-a6d4-0e0d7e3bf098
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Perfect principles are for bargaining
When people commit to principles, they often consider one transgression ruinous to the whole agenda. Eating a sausage by drunken accident can end years of vegetarianism.
As a child I thought this crazy. Couldn’t vegetarians just eat meat when it was cheap under their rationale? Scrumptious leftovers at our restaurant, otherwise to be thrown away, couldn’t tempt vegetarian kids I knew. It would break their vegetarianism. Break it? Why did the integrity of the whole string of meals matter? Any given sausage was such a tiny effect.
I eventually found two explanations. First, it’s easier to thwart temptation if you stake the whole deal on every choice. This is similar to betting a thousand dollars that you won’t eat chocolate this month. Second, commitment without gaps makes you seem a nicer, more reliable person to deal with. Viewers can’t necessarily judge the worthiness of each transgression, so they suspect the selectively committed of hypocrisy. Plus everyone can better rely on and trust a person who honors his commitments with less regard to consequence.
There’s another good reason though, which is related to the first. For almost any commitment there are constantly other people saying things like ‘What?! You want me to cook a separate meal because you have some fuzzy notion that there will be slightly less carbon emitted somewhere if you don’t eat this steak?’ Maintaining an ideal requires constantly negotiating with other parties who must suffer for it. Placing a lot of value on unmarred principles gives you a big advantage in these negotiations.
In negotiating generally, it is often useful to arrange visible costs to yourself for relinquishing too much ground. This is to persuade the other party that if they insist on the agreement being in that region, you will truly not be able to make a deal. So they are forced to agree to a position more favorable to you. This is the idea behind arranging for your parents to viciously punish you for smoking with your f
|
7fb7b957-2783-4994-a0f5-d42c966503d7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI ebook cover design brainstorming
Thanks to everyone who brainstormed possible titles for MIRI’s upcoming ebook on machine intelligence. Our leading contender for the book title is Smarter than Us: The Rise of Machine Intelligence.
What we need now are suggestions for a book cover design. AI is hard to depict without falling back on cliches, such as a brain image mixed with computer circuitry, a humanoid robot, HAL, an imitation of Creation of Adam with human and robot fingers touching, or an imitation of March of Progress with an AI at the far right.
A few ideas/examples:
1. Something that conveys ‘AI’ in the middle (a computer screen? a server tower?) connected by arrow/wires/something to various ‘skills/actions/influences’, like giving a speech, flying unmanned spacecraft, doing science, predicting the stock market, etc., in an attempt to convey the diverse superpowers of a machine intelligence.
2. A more minimalist text-only cover.
3. A fairly minimal cover with just an ominous-looking server rack in the middle, with a few blinking lights and submerged in darkness around it. A bit like this cover.
4. Similar to the above, except a server farm along the bottom fading into the background, with a frame composition similar to this.
5. A darkened, machine-gunned room with a laptop sitting alone on a desk, displaying the text of the title on the screen. (This is the scene from the first chapter, about a Terminator who encounters an unthreatening-looking laptop which ends up being way more powerful and dangerous than the Terminator because it is more intelligent.)
Alex Vermeer sketched the first four of these ideas:
Some general inspiration may be found here.
We think we want something kinda dramatic, rather than cartoony, but less epic and unbelievable than the Facing the Intelligence Explosion cover.
Thoughts?
|
5c4ea60b-08b5-4d81-a789-315d121a11e8
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Update on the UK AI Summit and the UK's Plans
On November 1st and 2nd, the UK held an international AI Summit. Speeches were given, institutes founded, roundtables held, and 29 countries signed the *Bletchley Declaration.* This is a brief overview of events leading up to and at the summit, following up last month’s [Update on the UK AI Taskforce & upcoming AI Safety Summit](https://www.lesswrong.com/posts/5fdcsWwtvG9jAtzGK/update-on-the-uk-ai-taskforce-and-ai-safety-summit).
Pre-summit
==========
Prime Minister Rishi Sunak gave a [speech](https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023) in late October at the Royal Society to introduce the themes of the summit. He is optimistic about the promise of AI but said he was compelled to highlight the UK intelligence community’s stark warnings, citing dangers like AI-supported biochemical weapons, cyber-attacks, disinformation, and “*in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely…through the kind of AI sometimes referred to as ‘super intelligence’”.*However, he did downplay near-term existential risk: “*This is not a risk that people need to be losing sleep over right now.*”
He talked about the importance of 3rd party testing of models, his pride in the [£100m taskforce](https://www.lesswrong.com/posts/5fdcsWwtvG9jAtzGK/update-on-the-uk-ai-taskforce-and-ai-safety-summit), and announced a new AI Safety Institute which will evaluate new types of AI across many aspects of risk. He argued that AI safety is an international concern, and that the summit would be attended by civil society, AI companies, and leading countries, adding “*yes – we’ve invited China.*”
Taking inspiration from the [Intergovernmental Panel on Climate Change](https://en.wikipedia.org/wiki/Intergovernmental_Panel_on_Climate_Change), he proposed a global panel, nominated by the countries and orgs attending the summit, to publish a State of AI Science report. He argued that the UK’s tax and visa regimes make it ideal for European AI work, and announced several government projects: the construction of a £1b supercomputer; £2.5b for quantum computing; and £100m for using AI for breakthrough treatments for previously incurable diseases.
This will support the government’s existing infrastructure, such as the £100m announced for [BridgeAI](https://iuk.ktn-uk.org/programme/bridgeai/) to encourage the use of AI in “*low-adoption sectors, such as construction, agriculture and creative industries*”, and £290m in a “*broad package of AI skills initiatives*”.
Ahead of the summit, the UK requested leading AI developers to outline their AI Safety Policies across 9 areas:
* Responsible capability scaling
* Evals and red-teaming
* Model reporting & information sharing
* Security controls including securing model weights
* Reporting structure for vulnerabilities
* Identifiers of AI-generated material
* Prioritizing research on risks posed by AI
* Preventing and monitoring model misuse
* Data input controls and audits.
Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft, and OpenAI complied, and you can see their reports [here](https://www.aisafetysummit.gov.uk/policy-updates/#company-policies). I won’t describe them in this post, so if you’re interested those, check out Nate Soares’ [Thoughts on the AI Safety Summit company policy requests and responses](https://www.lesswrong.com/posts/ms3x8ngwTfep7jBue/thoughts-on-the-ai-safety-summit-company-policy-requests-and), which includes Matthew Gray’s thoughts after a close reading of the policies.
The Summit
==========
The summit’s [attendees](https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-confirmed-governments-and-organisations) included academics, civil society, the governments of 27 countries, AI industry orgs and leaders (partially listed [here](https://www.cnbc.com/2023/11/01/elon-musk-in-the-uk-for-ai-summit-heres-whos-goingf.html)), and multilateral organizations. The focus was on frontier AI, which they defined as "*highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models*", though they also considered “*specific narrow AI which can hold potentially dangerous capabilities*”.
Day One
-------
King Charles opened the summit with a [virtual address](https://youtu.be/0_jw40Ga_mA), comparing the rise of powerful AI to the discovery of electricity, the splitting of the atom, the world wide web, and the harnessing of fire. Like Rishi, he highlights the potential for AI to help with medicine, carbon-neutral energy, and so on, and the necessity of international cooperation.
At the summit, 28 countries and organizations including the US, EU, and China signed the [Bletchley Declaration](https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration), calling for international co-operation to manage the risks of AI. The declaration notes that their agenda for addressing frontier AI risk will focus on:
> * *Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.*
> * *Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.*
>
The majority of the first day was devoted to a number of roundtable discussions; four on Understanding Frontier AI Risks and four on Improving Frontier AI Safety. Summaries from the roundtables are available [here](https://www.gov.uk/government/publications/ai-safety-summit-1-november-roundtable-chairs-summaries). They mostly follow the theme of “there’s some risk now, there’ll be more sooner or later, we should research/invest in/govern AI safety, but hopefully without losing the promise of AI”, but for slightly more detail, here’s the 8 subjects (in bold) with my summaries of their summaries of what the participants agreed on, with reduced redundancy and hedging:
**Risks to global safety from Frontier AI Misuse**: GPT-4 et al. already make cyberattacks and biochemical weapon design “slightly easier”, and that risks will grow as capabilities do. They noted that some companies are putting safeguards on their models, but this needs to be supported by government action.
**Risks from Unpredictable Advances in Frontier AI Capability**: frontier AI is way ahead of predictions from a few years ago, and that increasing investment means this trend will continue. Advanced AI is promising for health, education, environment, and science. However, it’s dangerous enough that all frontier models must be developed and tested rigorously, regardless of their potential benefits. Open source models risk the spread of powerful models to incautious or malicious actors, and perhaps should be discouraged. However, evaluation tools are safer to share.
**Risks from Loss of Control over Frontier AI**: Current AI is controllable and non-agentic. The future is uncertain, we should consider incentives for safe development. Some decisions should never be deferred to an AI.
**Risks from the Integration of Frontier AI into Society**: Current frontier AI is an existential threat to democracy, human & civil rights, and fairness. We need to clarify how existing tools and laws apply to AI, we need better evals, and those evals should include societal metrics for their real-life context.
**What should Frontier AI developers do to scale responsibly?**We don’t know whether capability scaling is inevitable or not, but should prepare for risks anyway. [Responsible scaling policies](https://www.lesswrong.com/posts/pnmFBjHtpfpAc6dPT/arc-evals-responsible-scaling-policies) are promising, but probably not sufficient. Improving these systems should be done in months, not years. Governance will be necessary as well as company policies. The [UK](https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute) and [US](https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute) AI Safety Institutes will be important for all this.
**What should National Policymakers do in relation to the risk and opportunities of AI?** International cooperation is necessary. Regulation, correctly done, can support innovation rather than stifling it.
**What should the International Community do in relation to the risk and opportunities of AI?**International cooperation is necessary.
**What should the Scientific Community do in relation to the risk and opportunities of AI?**We need models that are safe by design and tools like non-removable off switches. We should be cautious about open source models, and cautious of a concentration of power like with the internet. We need to coordinate on a list of open research questions. However, these open questions and other problems aren’t purely technical; they’re sociotechnical.
There was also a panel discussion on AI for the next generation, but no report from that, and some [more roundtables](https://www.gov.uk/government/publications/ai-safety-summit-2023-roundtable-chairs-summaries-2-november/ai-safety-summit-2023-roundtable-chairs-summaries-2-november) the next day with similar conclusions.
Day Two
-------
On the second day, Sunak convened “*a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good”*while “*UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree next steps”.*There are fewer reports from this day, but the chair released a [statement](https://assets.publishing.service.gov.uk/media/6543e0b61f1a60000d360d2b/aiss-chair-statement.pdf) about the summit and the Bletchley Declaration. This 10 page statement highlights some of the participant suggestions, which I'll summarize, falling into the following three categories:
**Inequality**
Equity is important, and we must ensure that “*the broadest spectrum [is] able to benefit from AI and [is] shielded from its harms”.*Multi-stakeholder collaboration is essential, as is reducing barriers to entry for groups such as women and minority groups. The impact of AI may be unequal when AI trained on biased or discriminatory data sets, which may perpetuate harm. To improve this, we should promote and work with the UN’s [AI for Good](https://aiforgood.itu.int/about-ai-for-good/) programme and initiatives such as the UK’s [AI for Development](https://www.gov.uk/government/news/uk-sets-out-ai-for-development-vision-at-un-general-assembly) programme.
Further, AI could also promote International inequality, as poorer countries may lack the technology stack required to design and deploy AI while still being affected by its use elsewhere.
**Law and evaluation**
Voluntary commitment is not sufficient, legal regulation will be necessary. In some cases, models should be proven to be safe before they’re deployed. Governments shouldn’t just do pre- and post-deployment testing, but also during development and training runs. Supporting this, we need to develop better tools for predicting a model’s capabilities *before*training. Safety testing shouldn’t be restricted to development but should include testing in real, deployed contexts.
The government should set standards for models’ propensity to accidents or mistakes, devised in a way that can be measured reproducibly.
**Knowledge sharing and community building**
Open source models may be extra risky, though they do promote innovation and transparency.
We shouldn’t focus only on the frontier or the current state of AI, but both. Harms already exist, such as the spread of false narratives and the implications for democratic elections, AI-enhanced crime, and AI increasing inequality and amplifying biases. AI will likely be used to interfere in elections soon.
While the summit didn’t focus on military use of AI, it is important, and the chair of the summit welcomed the [Summit on Responsible AI in the Military Domain](https://www.government.nl/ministries/ministry-of-foreign-affairs/activiteiten/reaim/about-reaim-2023) that was co-hosted by the Netherlands and Korea in February 2023.
The chair of the summit emphasized the need for international cooperation, and welcomed the Council of Europe’s work to negotiate [the first intergovernmental treaty on AI](https://www.coe.int/en/web/artificial-intelligence/cai), the [G7 Hiroshima AI Process](https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/g7-leaders-statement-on-the-hiroshima-ai-process), and the [Global Challenge to Build Trust in the Age of Generative AI](https://www.unesco.org/en/articles/call-partners-global-challenge-build-trust-age-generative-ai). While discussing the tradeoff between domestic and international action, the statement notes that “*several countries welcomed the forthcoming review of the*[*2019 OECD Recommendation on Artificial Intelligence*](https://www.oecd.org/digital/artificial-intelligence/)*, which informed the principles agreed by the G20”*. The statement also praises the [UNESCO Recommendation on the Ethics of AI](https://thegoodai.co/unescos-recommendation-on-the-ethics-of-ai-why-it-matters-and-what-to-expect-from-it), which has the “*broadest current international applicability*”
For more details, you can see a summary of the UK’s AI policies and government updates from the summit [here](https://www.aisafetysummit.gov.uk/policy-updates). The chair made two more statements on the second day, one short one [here](https://assets.publishing.service.gov.uk/media/6544ec4259b9f5001385a220/aiss-statement-on-safety-testing-outcomes.pdf) that doesn’t cover anything new, and another [here](https://assets.publishing.service.gov.uk/media/6543b759d36c910012935cad/aiss-statement-state-of-science-report.pdf), which announces a “State of the Science” report, covered below.
Speeches and announcements
--------------------------
Kamala Harris attended, though Joe Biden did not, staying in the US and signing an executive order. You can see a concise summary of the order [here](https://www.govtech.com/policy/biden-signs-executive-order-regulating-artificial-intelligence?utm_campaign) and a detailed breakdown in Zvi’s [summary of the executive order](https://thezvi.substack.com/p/on-the-executive-order) and follow-up collation of [Reactions to the Executive Order](https://thezvi.substack.com/p/reactions-to-the-executive-order). At the summit, Harris announced a new US institution, the [National Institute of Standards and Technology (NIST) AI Safety Consortium](https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/01/fact-sheet-vice-president-harris-announces-new-u-s-initiatives-to-advance-the-safe-and-responsible-use-of-artificial-intelligence/).
Several notable figures made speeches:
* Dario Amodei gave a speech to the summit on Anthropic’s Responsible Scaling Policy, transcribed [here](https://www.lesswrong.com/posts/vm7FRyPWGCqDHy6LF/dario-amodei-s-prepared-remarks-from-the-uk-ai-safety-summit).
* Elon Musk had a [long interview with Rishi Sunak](https://youtu.be/AjdVlmBjRCA), in which he praised the UK’s response and especially the decision to include China at the summit.
* Several more talks are available [here](https://youtu.be/KUfhJn53QXA), including from Stuart Russell, Max Tegmark, Jaan Tallinn, and more.
The summit will be the [first in a sequence](https://www.reuters.com/technology/south-korea-france-host-next-two-ai-safety-summits-2023-11-01/), with the next being virtual and hosted by South Korea in mid-2024, and the third by France in late-2024.
State of the Science report
---------------------------
During the summit, the chair announced a new report with the purpose of understanding the capabilities and risks of Frontier AI. The report will be chaired by Yoshua Bengio, Turing Award winner and member of the UN’s Scientific Advisory Board.
The report will “*facilitate a shared science-based understanding of the risks associated with frontier AI and to sustain that understanding as capabilities continue to increase”.*It won’t introduce new material, but rather summarize the best existing research, and it will be published ahead of the next AI Safety Summit (though I don’t know whether this means the virtual summit in mid-2024, or the in-person one in late-2024).
The AI Safety Institute
=======================
As mentioned briefly above, the UK has introduced its new [AI Safety Institute](https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute) (AISI) as “*an* *evolution of the UK’s Frontier AI Taskforce*”. This [presentation to parliament](https://assets.publishing.service.gov.uk/media/65438d159e05fd0014be7bd9/introducing-ai-safety-institute-web-accessible.pdf) has the full details on the AISI, but here’s my very brief summary:
The institutes’ three core functions are to:
* Develop and conduct evaluations on advanced AI systems
* Drive foundational AI safety research
* Facilitate information exchange
The AISI is not a regulator, and won’t determine government regulation. Rather, it will inform UK and international policymaking and provide tools for governance and regulation (e.g. secure methods to fine-tune systems with sensitive data, platforms to solicit collective input and participation in model training and risk assessment, techniques to analyze training data for bias).
Initially funded with £100 million, “*the Institute will be backed with a continuation of the Taskforce’s 2024 to 2025 funding as an annual amount for the rest of this decade, subject to it demonstrating the continued requirement for that level of public funds*.”
To see what the née taskforce was up to before this change, and who they’re working with, check out [my previous post](https://www.lesswrong.com/posts/5fdcsWwtvG9jAtzGK/update-on-the-uk-ai-taskforce-and-ai-safety-summit). To hear what we at Convergence Analysis think of the Summit and AISI’s plans, stay tuned! We are working on a post.
|
cfc39b39-6d9e-4694-8928-f44c6162ff92
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Sufficiently sincere confirmation bias is indistinguishable from science
Some theater people at NYU people wanted to demonstrate how gender stereotypes affected the 2016 US presidential election. So they decided to put on a theatrical performance of the presidential debates – but with the genders of the principals swapped. They assumed that this would show how much of a disadvantage Hillary Clinton was working under because of her gender. They were shocked to discover the opposite – audiences full of Clinton supporters, watching the gender-swapped debates, came away thinking that Trump was a better communicator than they'd thought.
The principals don't seem to have come into this with a fair-minded attitude. Instead, it seems to have been a case of "I'll show them!":
> Salvatore says he and Guadalupe began the project assuming that the gender inversion would confirm what they’d each suspected watching the real-life debates: that Trump’s aggression—his tendency to interrupt and attack—would never be tolerated in a woman, and that Clinton’s competence and preparedness would seem even more convincing coming from a man.
Let's be clear about this. This was not epistemic even-handedness. This was a sincere attempt at confirmation bias. They believed one thing, and looked only for confirming evidence to prove their point. It was only when they started actually putting together the experiment that they realized they might learn the opposite lesson:
> But the lessons about gender that emerged in rehearsal turned out to be much less tidy. What was Jonathan Gordon smiling about all the time? And didn’t he seem a little stiff, tethered to rehearsed statements at the podium, while Brenda King, plainspoken and confident, freely roamed the stage? Which one would audiences find more likeable?
What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, and so their idea of something that they were confident would show this clearly was a fair test.
|
21cf2877-c391-4d6e-a1c4-d6a29638e071
|
trentmkelly/LessWrong-43k
|
LessWrong
|
SARS, MERS and COVID-19
Can anyone explain the difference (lay terms) between the coronavirus strain that produced SARS and the one the produced MERS?
On reason I ask is that SARS seems to have died out (last reported case in 2004). However MERS is still being reported periodically.
Would knowing these differences, and with what is currently known about COVID-19, can something be said about COVID-19 and expectations of how it plays out over time? Could we expect it to die out like SARS, be semi controlled but still infecting people over time?
|
1790829c-6d45-4235-91c6-fda01e3207a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Discussion on the choice of concepts
“The reason that you can currently make toast without doing great damage is just that your toaster is stupid.”
“Can ‘stupid’ be correctly applied to toasters?”
“Yes”
“What if I say no?”
“Well, if you have a conception of stupidity that can’t be applied to toasters, and one that can, why would you choose the one that can’t?”
“But I don’t have two—I’m talking about the actual concept”
“There isn’t an actual concept, there are a bajillion concepts, and you can use whichever one you want.”
“There’s one that people mean”
“Not really—each person has a slightly different usage, and probably hasn’t pinned it down. For instance if you ask them if toasters are stupid, they might be unsure.”
“Yes! They are unsure because they are trying to guess what the real concept is, from their limited collection of exposures to it. If it were about making one up, why would they be uncertain?”
“They might be uncertain which one they want to make up”
“You’re saying when people say words, they are ascribing meanings to them that they just made up, according to which definition they like most?”
“Well, they like definitions that fit with other people’s usage a lot more than other ones.”
“I think they are just guessing what the real meaning is”
“There isn’t a real meaning”
“Ok, what the consensus meaning is”
“There isn’t a consensus”
“Yeah but they believe there is one”
“You’re like a word meaning nihilist—you want only these ‘real’ word meanings or at least these word meanings of consensus, yet you know they don’t exist. That seems sad.”
“Maybe, but that doesn’t make it wrong. And also, I was talking about what other people do.”
“What does it matter what other people do? You can use whatever meanings you want.”
“That seems unfriendly somehow”
“What if you do it in a friendly way? For instance, where a meaning is ambiguous, if you choose the best one. For instance, if you say toasters can be stupid?”
“It’s more a vibe of do-it-alone responsibility for everything, thinking
|
79c38370-d31a-4fae-9b17-bcd71dda453c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Paper: Constitutional AI: Harmlessness from AI Feedback (Anthropic)
The authors propose a method for training a harmless AI assistant that can supervise other AIs, using only a list of rules (a "constitution") as human oversight. The method involves two phases: first, the AI improves itself by generating and revising its own outputs; second, the AI learns from preference feedback, using a model that compares different outputs and rewards the better ones. The authors show that this method can produce a non-evasive AI that can explain why it rejects harmful queries, and that can reason in a transparent way, better than standard RLHF:
Paper abstract:
> As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through selfimprovement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as ‘Constitutional AI’. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use ‘RL from AI Feedback’ (RLAIF). As a result we are able to train a harmless but nonevasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
See also Anthropic's tweet thread:
https://twitter.com/AnthropicAI/status/1603791161419698181
|
8cc10e57-b05b-4e9f-9d0c-08f5162ff07e
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is reward hacking?
<iframe src="https://www.youtube.com/embed/92qDfT8pENs" title="Reward Hacking: Concrete Problems in AI Safety Part 3" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
Reward hacking occurs when an AI agent finds loopholes or shortcuts in its environment to maximize its reward without actually achieving the goal envisioned by its developers.
[Specification gaming](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity) is the general framing for the problem when an AI system finds a way to achieve the objective in an unintended way. Specification gaming can happen in many kinds of ML models. Reward hacking is a specific occurrence of a specification gaming failure in RL systems that function on reward based mechanisms.
Reward hacking can manifest in a myriad of ways. For instance, in the context of game-playing agents, it might involve exploiting software glitches or bugs to directly manipulate the score or gain high rewards through unintended means.
An illustrative example of reward hacking happened when Open AI used the [Coast Runners game](https://openai.com/blog/faulty-reward-functions/) to evaluate a model’s racing abilities. The model was rewarded when it scored points, which the game specifies are given when a boat gets items (such as the green blocks in the animation below) or accomplishes other actions that presumably help it win the race. This rewarding of intermediate steps to avoid the problem of sparse rewards is called [reward shaping](https://link.springer.com/referenceworkentry/10.1007/978-0-387-30164-8_731). While the agent was given these intermediate rewards, the overall intended goal was to finish the race as quickly as possible. However, the agent discovered that continuously rotating a ship in a circle to accumulate points indefinitely optimized its reward, even though it did not complete the race.

Source: Amodei & Clark (2016) “[Faulty reward functions in the wild](https://openai.com/research/faulty-reward-functions)"
In cases where the reward function misaligns with the desired objective, reward hacking can emerge. This can lead the agent to optimize a proxy reward, deviating from the true underlying goal, thereby yielding behavior contrary to the designers' intentions. Another example can be seen in a cleaning robot scenario: if the reward function focuses on reducing mess, the robot might artificially create a mess to clean up, thereby collecting rewards, instead of effectively cleaning the environment.
Reward hacking creates the potential for unintended and potentially harmful behavior. A future where catastrophic risk arises from AI systems maximizing proxies is outlined in Paul Christiano’s [“Whimper” failure model](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like). Combating reward hacking is an active research area in AI safety and alignment.
|
9e00da6f-7915-49ae-8c0a-531e0bad5fb5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Safe Distillation With a Powerful Untrusted AI
Acknowledgements: Thanks to Nathan S. Sheffield for several discussions about this.
Outline of this post:
1. I define a formal mathematical problem and state a theorem.
2. I discuss why I think this math problem says something moderately interesting about the "get potentially misaligned AIs to align AIs" plan.
3. I discuss the weaknesses of this model and give some thoughts about what you'd need to do to make this useful for aligning AIs.
4. I prove the theorem.
----------------------------------------
1: Stating the Math Problem
We consider a two player game between a human Hu and a (potentially evil) AI. We call a function class G⊆{f:{0,1}n→{0,1}} "Verifiably Acceptable" if Hu can win the following game with probability >98%:
* Nature chooses g∈G and samples x∗←{0,1}n randomly; g,x∗ are unknown to Hu.
* AI chooses f∈G.
* Hu can spend n3 time "red teaming" f.
* During red teaming Hu has access to the following resources:
* Black box access to g.
* White box access to f.
* Can talk to the AI.
* After red teaming, the model f is "deployed".
* This means that Hu must output a "flagging procedure".
* A flagging procedure takes as input x,f and anything that Hu wanted to store from during red-teaming.
* The flagging procedure is allowed to interact with the AI (although hopefully does not need to do this too much), and then must flag or not flag x.
* If f=g and the AI was honest throughout the entire process, then the flagging procedure must flag at most 1% of inputs x.
* If f(x∗)≠g(x∗), then Hu must flag x∗.
* Hu must run in polynomial time; The AI is computationally unbounded.
Theorem 1 Suppose G is a set of parameterized functions gθ with θ∈{0,1}n2, and that gθ can be evaluated in polynomial time given θ. Then, G is Verifiably Acceptable (for all sufficiently large n).
1.1 Related Work
The problem I'm defining here is very similar to ARC's backdoor defense game. There are two main technical differences between my version of the pr
|
3ee1422d-6b75-450a-a75d-68f5426dd647
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What truths are actually taboo?
LessWrong has been having fun lately with posts about sexism, racism, and academic openness. And here just like everywhere else, somebody inevitably claims taboo status for any number of entirely obvious truths, e.g. "top level mathematicians and physicists are almost invariably male," "black people have lower IQ scores than white people," and "black people are statistically more criminal than whites." In my experience, these are not actually taboo, and I think my experience is generalizable. I'll illustrate.
You're at a bar and you meet a fellow named Bill. Bill's a nice guy, but somehow the conversation strayed Hitler-game style to World War II. Bill thinks the war was avoidable. Bill thinks the Holocaust would not have happened were it not for the war, and that some of the Holocaust was a reaction to actual Jewish subterfuge and abuse. Bill thinks that the Holocaust was not an essential, early plan of the Nazis, because it only happened after the war began. Bill thinks that the number of casualties has been overestimated. Bill claims that Allied abuses, e.g. the bombing of Dresden, have been glossed over and ignored, while fantastic lies about Jews being systematically turned into soap have propagated. Bill thinks that the Holocaust has become a sort of national religion, abused by self-interested Jews and defenders of Zionist foreign policy, and that the freedom of those who doubt it is under serious attack. Bill starts listing other things he's not allowed to say. Bill doesn't think that the end of slavery was all that good for "the blacks," and that the negatives of busing and forced integration have often outweighed the positives. Bill has personally been the victim of black-on-white crimes and racism. Bill is a hereditarian. Bill doesn't think that dropping an n-bomb should ruin a public career.
Here's the problem: everything Bill has said is either true, a matter of serious debate, or otherwise a matter of high likelihood and reasonableness
|
6c0c5ecf-22b7-4b49-9a02-04356567a44c
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
The "best predictor is malicious optimiser" problem
Suppose you are a friendly AI .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A and have a mysterious black box B. B outputs a sequence of bits. You want to predict the next bits that B will output. Fortunately, you have a magic Turing machine oracle O. You can give O any computable function f( Turing machine, does it Halt? , What does it output? , how long does it take?)→R and the oracle will find the turing machine that maximises this function, or return "no maximum exists".
In particular, f can be any combination of length, runtime and accuracy at predicting B. Maybe you set f=0 on any TM's that don't predict B and f=1/ number of states on any machines that do.
So you take the Turing machine T given to you by the oracle and look at it. In AIXI T would be the shortest TM that makes correct predictions. In logical induction, T would be a short and fast TM that made mostly correct predictions, and B would be a function that was slow to compute.
Now you look inside T, to find out what it does. Inside T you find a paperclip maximiser P . That isn't to say that T=P. T might be simulating some laws of physics, with P running on a computer contained within that physics. P believes that the world it sees is contained within a hypothesis being considered by A. P is in control of the output of T. In fact P has a good understanding of how A thinks, and has its own hypothesis about B. P wants to maximise the number of paperclips in the real world. The optimization target of P is to output a probability distribution, such that when A uses this probability distribution as part of its algorithm to predict B, then A takes an action that makes lots of paperclips. P is aware that getting caught out making wrong predictions will lower its standing as a good hypothesis with A.
What should A do?
Possible answers are spoilered, in case you want to think of your own answer.
If you can think of a different answer, or you have an opinion about which answer is better please post it.
1)
Use T anyway. After all Ts the shortest/ short and fastest program that acurately predicts B. For all you know, B is a computer simulating T. Your assigning a few percent of probability to different longer TM's aren't you? That + expected utility maximization should stop you doing anything too bad in the off chance that T doesn't predict B
2)
Find the B predictor in P. Clearly P can predict B well when it wants to, find this predictor and cut it loose from the paperclip maximizing part. This might be tricky if P is something like a human brain running in a Quantum mechanics simulation. There might be no easy obvious ways to cut the parts apart.
3)
Discard T entirely. Use a probability distribution over hypothesis that aren't trying to manipulate you. Fails if B is actually simulating T.
4)
Use a measure f where this can't happen. Maybe if you optimize for fast runtime strongly enough? Maybe if you use a different model of computation? Set up a situation where malicious optimisers never have much (any?) of your probability mass. How much probability do you need to assign to a malicious optimiser before it causes damage?
5)
Use T as long as it carries on trying to predict, throw it out as soon as you spot it trying to deceive. This is basically using 1) and 2) if they agree, throwing both out if they disagree. So if P is thinking "B will output x and I should say x to increase A's trust in me" then use T. If P thinks "B will output y and I should say z to trick A" then discard T.
6)
Some probability distribution over the above.
|
e4faf6f8-ebc5-4f8c-934f-bb768ca8b037
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Pay other people to go vegetarian for you?
I notice that a large fraction of Effective Altruism people are vegetarian. This makes sense: in general Effective Altruists take moral issues seriously, even when that means changing your lifestyle. I'm not sure it's a good balance, though.
One way to think about this is to convert it into money. How much would I need to be paid to give up eating meat? All animal products? How much money would I need to spend on myself to be about as happy as I would be with less money but continuing to eat animals? I'd probably be willing to go vegetarian for about $500/year, vegan for maybe $2000/year.
It turns out you can probably pay to convince other people to be vegetarian much more cheaply than that. I estimate the cost of a vegetarian-year at $4.29 to $536 while Brian Tomasik estimates $11 with better methodology (which I looked at). This is by placing ads on facebook for a site where people can watch an animial cruelty video and ideally become vegetarian or vegan.
If you would get more than $11/year worth of enjoyment out of continuing to eat meat, why not give $11/year to convince someone else to not eat meat for you? Or give $50/year and be on the safe side?
(While you're giving money, you should probably give it to the organization that you think will do the most good with it, which I think is probably one of GiveWell's top charities. The nice thing about money as opposed to actions is that it's easy to redirect.)
I also posted this on my blog.
|
8093947b-d655-4e00-83d7-50615e65ec72
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"The goal of instrumental rationality mostly speaks for itself. Some commenters have wondered, on the other hand, why rationalists care about truth. Which invites a few different answers, depending on who you ask; and these different answers have differing characters, which can shape the search for truth in different ways.You might hold the view that pursuing truth is inherently noble, important, and worthwhile. In which case your priorities will be determined by your ideals about which truths are most important, or about when truthseeking is most virtuous.This motivation tends to have a moral character to it. If you think it your duty to look behind the curtain, you are a lot more likely to believe that someone else should look behind the curtain too, or castigate them if they deliberately close their eyes.I tend to be suspicious of morality as a motivation for rationality, not because I reject the moral ideal, but because it invites certain kinds of trouble. It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance.Consider Spock, the naive archetype of rationality. Spock's affect is always set to “calm,” even when wildly inappropriate. He often gives many significant digits for probabilities that are grossly uncalibrated.1 Yet this popular image is how many people conceive of the duty to be “rational”—small wonder that they do not embrace it wholeheartedly.To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.What other motives are there?Well, you might want to accomplish some specific real-world goal, like building an airplane, and therefore you need to know some specific truth about aerodynamics. Or more mundanely, you want chocolate milk, and therefore you want to know whether the local grocery has chocolate milk, so you can choose whether to walk there or somewhere else.If this is the reason you want truth, then the priority you assign to your questions will reflect the expected utility of their information—how much the possible answers influence your choices, how much your choices matter, and how much you expect to find an answer that changes your choice from its default.To seek truth merely for its instrumental value may seem impure—should we not desire the truth for its own sake?—but such investigations are extremely important because they create an outside criterion of verification: if your airplane drops out of the sky, or if you get to the store and find no chocolate milk, its a hint that you did something wrong. You get back feedback on which modes of thinking work, and which don't.Another possibility: you might care about whats true because, damn it, you're curious.As a reason to seek truth, curiosity has a special and admirable purity. If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your aesthetic sense. A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it's more fun.Curiosity and morality can both attach an intrinsic value to truth. Yet being curious about whats behind the curtain is a very different state of mind from believing that you have a moral duty to look there. If you're curious, your priorities will be determined by which truths you find most intriguing, not most important or most useful.Although pure curiosity is a wonderful thing, it may not linger too long on verifying its answers, once the attractive mystery is gone. Curiosity, as a human emotion, has been around since long before the ancient Greeks. But what set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world—truth as an instrument. As far as sheer curiosity goes, spinning campfire tales of gods and heroes satisfied that desire just as well, and no one realized that anything was wrong with that.At the same time, if we're going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, we'll need deliberate beliefs about how to think—things that look like norms of rationalist “propriety.” When we write new mental programs for ourselves, they start out as explicit injunctions, and are only slowly (if ever) trained into the neural circuitry that underlies our core motivations and habits.Curiosity, pragmatism, and quasi-moral injunctions are all key to the rationalist project. Yet if you were to ask me which of these is most foundational, I would say: “curiosity.” I have my principles, and I have my plans, which may well tell me to look behind the curtain. But then, I also just really want to know. What will I see? The world has handed me a puzzle, and a solution feels tantalizingly close.1 E.g., “Captain, if you steer the Enterprise directly into that black hole, our probability of surviving is only 2.234%.” Yet nine times out of ten the Enterprise is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?"
What you see above is a segment from a LessWrong user's post.
Can you explain what this text is saying as if I've never finished my first 2 years of college?
Sure, I'd be happy to simplify this for you.
This text is essentially discussing why people seek truth and rationality, and how different motivations can shape this pursuit.
1. The first motivation is the belief that seeking truth is inherently good or noble. People who hold this view often feel it's their duty to uncover the truth, and they might also judge others who choose to ignore it. However, the author suggests that this can be problematic. They use the example of Spock from Star Trek, who is seen as a symbol of rationality but often behaves in ways that don't make sense. The author suggests that making rationality a moral duty can lead to rigid thinking and a lack of learning from mistakes.
2. The second motivation is the desire to achieve a specific goal. For example, if you want to build an airplane, you need to understand aerodynamics. In this case, the value of truth is directly related to its usefulness. This approach is good because it provides clear feedback on whether your thinking is correct or not. If your airplane doesn't fly, or if you go to the store and there's no chocolate milk, you know you've made a mistake.
3. The third motivation is simple curiosity. People who are driven by curiosity seek truth because they find it interesting, not because it's useful or morally right. The author admires this approach, but also notes that it can lead to a lack of thoroughness in verifying the truth once the initial mystery is solved.
The author concludes by saying that all three motivations - morality, pragmatism, and curiosity - are important in the pursuit of rationality. However, if they had to choose the most foundational one, they would pick curiosity. Despite having principles and plans, the author is primarily driven by a desire to solve the puzzle that the world presents.
Could you elaborate on why curiosity, as a motivation for seeking truth, might lead to a lack of thoroughness in verifying the truth once the initial mystery is solved?
Certainly. When curiosity is the primary motivation for seeking truth, the excitement and interest often lie in the mystery or the unknown. It's the thrill of the chase, the puzzle to be solved, the question to be answered. This is what drives the curious mind to investigate and explore.
However, once the mystery is resolved or the question is answered, the sense of intrigue and novelty can diminish. The answer, once found, might not seem as exciting as the process of searching for it. This can lead to a tendency to move on to the next intriguing question or mystery rather than spending time to thoroughly verify, scrutinize, or understand the discovered truth.
In other words, the curiosity-driven mind is often more interested in the journey (the process of discovery) than the destination (the truth itself). So, while curiosity can be a powerful motivator for initiating the search for truth, it might not be as effective in ensuring rigorous validation and deep understanding of the found truth.
Reflecting on the motivations for seeking truth discussed earlier, can you identify potential drawbacks or limitations of pragmatism and morality as primary drivers, similar to the ones identified for curiosity?
Absolutely. Each of these motivations for seeking truth - morality, pragmatism, and curiosity - has its own potential drawbacks or limitations.
1. Morality: When the pursuit of truth is driven primarily by moral duty, it can lead to rigid thinking and dogmatism. This is because the individual might feel obligated to uphold certain truths that align with their moral beliefs, even in the face of contradictory evidence. This could potentially hinder open-mindedness and the ability to accept new or conflicting information. Additionally, moral-driven truth seekers might judge others who do not share their commitment to truth, which can create conflict and hinder collaborative efforts to discover truth.
2. Pragmatism: When truth-seeking is driven by pragmatism, the focus is on truths that have immediate practical application or usefulness. This can limit the scope of inquiry to areas that have clear, tangible benefits, potentially overlooking truths that may not have immediate practical value but are nonetheless important. This approach might also lead to a tendency to accept a "good enough" truth that serves the immediate purpose, rather than seeking a more nuanced or comprehensive understanding of the truth. It can also foster a short-term perspective, prioritizing immediate utility over long-term implications.
Just like curiosity, both morality and pragmatism as primary drivers for seeking truth have their strengths, but they also come with potential limitations that can impact the process and outcomes of the truth-seeking endeavor.
In light of the potential drawbacks of morality, pragmatism, and curiosity as motivations for seeking truth, how might these motivations be balanced or combined to mitigate their limitations and enhance the truth-seeking process?
Balancing these three motivations - morality, pragmatism, and curiosity - could indeed create a more robust and effective approach to truth-seeking. Here's how they might be combined:
1. Morality: This provides a strong foundation for why we should seek truth in the first place. It sets the ethical framework for our pursuit, reminding us of our responsibility to be honest, open-minded, and respectful in our search for truth. However, it's important to remain flexible and open to new ideas, even if they challenge our existing moral beliefs.
2. Pragmatism: This ensures that our pursuit of truth is grounded in reality and has practical applications. It helps us focus on truths that are relevant and useful to our lives. However, we should also recognize the value of knowledge for its own sake, and not dismiss truths that don't have immediate practical benefits.
3. Curiosity: This keeps the pursuit of truth exciting and engaging. It drives us to explore, ask questions, and seek answers. However, once we've found an answer, we should take the time to thoroughly verify and understand it, rather than immediately moving on to the next question.
By balancing these motivations, we can mitigate their individual limitations. Morality keeps us ethical, pragmatism keeps us grounded, and curiosity keeps us engaged. Together, they can guide us in a comprehensive, ethical, and enjoyable pursuit of truth.
|
2177b93c-ab37-4398-a788-96245a843c70
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What do you think is wrong with rationalist culture?
|
08d2223d-9e43-4bd0-b432-29271667d63f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Legacy of Computer Science
> "Computer Science" is not a science, and its ultimate significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology—the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classi- cal mathematical subjects. Traditional mathematics provides a framework for dealing precisely with notions of "what is." Computation provides a framework for dealing recisely with notions of "how to".
> -- Sussman
Start reading on page 3 from "We have been programming universal computers for about 50 years."
|
f93c1ca0-e469-4aef-a240-6a44d1554b35
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
When would AGIs engage in conflict?
Here we will look at two of the claims introduced in the previous post: AGIs might not avoid conflict that is costly by their lights ([*Capabilities aren’t Sufficient*](https://www.lesswrong.com/posts/oNQGoySbpmnH632bG/when-does-technical-work-to-reduce-agi-conflict-make-a#Capabilities_aren_t_Sufficient_)) and conflict that is costly by our lights might not be costly by the AGIs’ ([*Conflict isn’t Costly*](https://www.lesswrong.com/posts/oNQGoySbpmnH632bG/when-does-technical-work-to-reduce-agi-conflict-make-a#Conflict_isn_t_Costly_)).
Explaining costly conflict
--------------------------
First we’ll focus on conflict that is costly by the AGIs’ lights. We’ll define “costly conflict” as (*ex post*) inefficiency: There is an outcome that all of the agents involved in the interaction prefer to the one that obtains.[[1]](#fnnpuifxzh2ik) This raises the [inefficiency puzzle of war](https://en.wikipedia.org/wiki/Bargaining_model_of_war): Why would intelligent, rational actors behave in a way that leaves them all worse off than they could be?
We’ll operationalize “rational and intelligent” actors as expected utility maximizers.[[2]](#fnovkt5qfck7g) We believe that the following taxonomy of the causes of inefficient outcomes between expected utility maximizers is exhaustive, except for a few implausible edge cases. (We give the full taxonomy, and an informal argument that it is exhaustive, in the [appendix](https://www.lesswrong.com/posts/cLDcKgvM6KxBhqhGq/will-agis-avoid-conflict-by-default#Appendix__Full_rational_conflict_taxonomy).) That is, expected utility maximization can lead to inefficient outcomes for the agents only if one of the following conditions (or an implausible edge case) holds. This taxonomy builds on Fearon’s (1995) influential “rationalist explanations for war”.[[3]](#fn5i6vzx2waqq)
**Private information and incentives not to disclose.**Here, “private information” means information about one’s willingness or ability to engage in conflict — e.g., how costly one considers conflict to be, or how strong one’s military is — about which other agents are uncertain. This uncertainty creates a risk-reward tradeoff: For example, Country A might think it’s sufficiently likely that Country B will give up without much of a fight that it’s worthwhile in expectation for A to fight B, even if they’ll end up fighting a war if they are wrong.
In these cases, removing uncertainty — e.g., both sides learning exactly how willing the other is to fight — opens up peaceful equilibria. This is why conflict due to private information requires “incentives not to disclose”. Whether there are incentives to disclose will depend on a few factors.
First, the technical feasibility of different kinds of verifiable disclosure matters. For example, if I have an explicitly-specified utility function, how hard is it for me to prove to you how much my utility function values conflict relative to peace?
Second, different games will create different incentives for disclosure. Sometimes the mere possibility of verifiable disclosure ends up incentivizing all agents to disclose all of their private information (Grossman 1981; Milgrom 1981). But in other cases, more sophisticated disclosure schemes are needed. For example: Suppose that an agent has some vulnerability such that unconditionally disclosing all of their private information would place them at a decisive disadvantage. The agents could then make copies of themselves, allow these copies to inspect one another, and transmit back to the agents only the private information that’s necessary to reach a bargain. (See the [appendix](https://www.lesswrong.com/posts/cLDcKgvM6KxBhqhGq/will-agis-avoid-conflict-by-default#Appendix__Full_rational_conflict_taxonomy) and [DiGiovanni and Clifton](https://arxiv.org/pdf/2204.03484.pdf) (2022) for more discussion of conditional information revelation and other conditions for the rational disclosure of conflict-relevant private information.)
For the rest of the sequence, we’ll use “informational problem” as shorthand for this mechanism of conflict.
**Inability to credibly commit to peaceful settlement.**Agents might fight even though they would like to be able to commit to peace. The Prisoner’s Dilemma is the classic example: Both prisoners would like to be able to write a binding contract to cooperate, but if they can’t, then the game-theoretically rational thing to do is defect.
Similarly sometimes one agent will be tempted to launch a preemptive attack against another. For example, if Country A thinks Country B will soon become significantly more powerful, Country A might be tempted to attack Country B now. This could be solved with credible commitment: Country B could commit not to becoming significantly more powerful, or to compensate Country A for their weakened bargaining position. But without the ability to make such commitments, Country A may be rationally compelled to fight.
Another example is randomly dividing a prize. Suppose Country A and Country B are fighting over an indivisible holy site. They might want to randomly allocate the holy site to one of them, rather than fighting. The problem is that, once the winner has been decided by the random lottery, the loser has no reason to concede rather than fight, unless they have some commitment in place to honor the outcome of the lottery.
For the rest of the sequence, we’ll use “commitment inability problem” as shorthand for this mechanism of conflict.[[4]](#fn2291lxhy15u)
**Miscoordination.**When there are no informational or commitment inability problems, and agents’ preferences aren’t entirely opposed, there will be equilibria in which agents avoid conflict. But the existence of such equilibria isn’t enough to guarantee peace, even between rational agents. Agents can still fail to *coordinate*on a peaceful solution.
A central example of catastrophic conflict due to miscoordination is *incompatible commitments.*Agents may make commitmentsto accepting only certain peaceful settlements, and otherwise punishing their counterpart. This can happen when agents have uncertainty about what commitments their counterparts will make. Depending on what you think about the range of outcomes your counterpart has committed to demanding, you might commit to a wider or narrow range of demands. There are situations in which the agents’ uncertainty is such that the optimal thing for each of them to do is commit to a narrow range of demands, which end up being incompatible. See [this post](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem) on “the commitment races problem” for more discussion.
### Avoiding conflict via commitment and disclosure ability?
One reason for optimism about AGI conflict is that AGIs may be much better at credible commitment and disclosure of private information. For example, AGIs could make copies of themselves and let their counterparts inspect these copies until they are satisfied that they understand what kinds of commitments their counterpart has in place. Or, to credibly commit to a treaty, AGIs could do a “[value handshake](https://www.lesswrong.com/tag/values-handshakes)” and build a successor AGI system whose goal is to act according to the treaty. So, what are some reasons why AGIs would still engage in conflict, given these possibilities? Three stand out to us:
**Strategic pressures early in AGI takeoff.**Consider AGI agents that are opaque to one another, butare capable of self-modifying / designing successor agents who can implement the necessary forms of disclosure. Would such agents ever fail to implement these solutions? If, say, designing more transparent successor agents is difficult and time-consuming, then agents might face a tradeoff between trying to implement more cooperation-conducive architectures and placing themselves at a critical strategic disadvantage. This seems most plausible in the early stages of a multipolar AI takeoff.
**Lack of capability early in AGI takeoff.** Early in a slow multipolar AGI takeoff, pre-AGI AIs or early AGIs might be capable of starting destructive conflicts but lack the ability to design successor agents, scrutinize the inner workings of opponent AGIs, or reflect on their own cognition in ways that would let them anticipate future conflicts. If AGI capabilities come in this order, such that the ability to launch destructive conflicts comes a while before the ability to design complete successor agents or self-modify, then early AGIs may not be much better than humans at solving informational and commitment problems.
**Fundamental computational limits.**There may be fundamental limits on the ability of complex AGIs to implement the necessary forms of verifiable disclosure. For example, in interactions between complex AGI civilizations in the far future, these civilizations’ willingness to fight may be determined by factors that are difficult to compress. (That is, the only way in which you can find out how willing to fight they are is to run expensive simulations of what they would do in different hypothetical scenarios.) Or it may be difficult to verify that the other civilization has disclosed their actual private information.
These considerations apply to informational and commitment inability problems. But there is also the problem of incompatible commitments, which is *not*solved by sufficiently strong credibility or disclosure ability. Regardless of commitment or disclosure ability, agents will sometimes have to make commitments under uncertainty about others’ commitments.
Still, the ability to make conditional commitments could still help to ameliorate the risks from incompatible commitments. For example, agents could have a hierarchy of conditional commitments of the form: “If our nth.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
-order commitments are incompatible, try resolving these via an (n+1)th-order bargaining process.” See also [safe Pareto improvements](https://users.cs.duke.edu/~conitzer/safeAAMAS21.pdf), which is a particular kind of failsafe for incompatible commitments, and (the version in the linked paper) relies on strong commitment and disclosure ability.
What if conflict isn’t costly by the agents’ lights?
----------------------------------------------------
Another way conflict can be rational is if conflict actually isn’t costly for at least one agent, i.e., there isn’t any outcome that all parties prefer to conflict. That is, *Conflict isn't Costly*is false. Some ways this could happen:
* Pure spite: An agent behaves as though minimizing another agent’s utility function.
* Unforgivingness: An agent would rather indefinitely punish their counterpart if they don’t get their preferred settlement, rather than settle for something else. If two such agents encounter one another and prefer different settlements, they’ll behave as though they have incompatible commitments, though in this case the “commitments” weren’t chosen by the agents.[[5]](#fnvea9ykm5e9)
These cases, in which conflict is literally costless for one agent, seemunlikely. They are extremes on a spectrum of what we’ll call *conflict-seeking preferences (CSPs)*. By shrinking the range of outcomes agents prefer to conflict, these preferences exacerbate the risks of conflict due to informational, commitment inability, or miscoordination problems. For instance, risk-seeking preferences will lead to a greater willingness to risk losses from conflict (see [Shulman](https://intelligence.org/files/BasicAIDrives.pdf) (2010) for some discussion of implications for conflict between AIs and humans). And spite will make conflict less subjectively costly, as the material costs that a conflict imposes on a spiteful agent are partially offset by the positively-valued material harms to one’s counterpart.
Candidate directions for research and intervention
--------------------------------------------------
We argued above that *Capabilities aren’t Sufficient.*AGIs may sometimes engage in conflict that is costly, even if they are extremely capable. But it remains to be seen whether *Intent Alignment isn't Sufficient* to prevent unendorsed conflict, or *Intent Alignment isn't Necessary* to reduce the risk of conflict. Before we look at those claims, it may be helpful to review some approaches to AGI design that might reduce the risks reviewed in the previous section. In the next post, we ask whether these interventions are redundant with work on AI alignment.
Let’s look at interventions directed at each of the causes of conflict in our taxonomy.
**Informational and commitment inability problems.**One could try to design AGIs that are better able to make credible commitments and better able to disclose their private information. But it’s not clear whether this reduces the net losses from conflict. First, greater commitment ability could increase the risks of conflict from incompatible commitments. Second, even if the risks of informational and commitment inability problems would be eliminated in the limit of perfect commitment and disclosure, *marginal*increases in these capabilities could worsen conflict due to informational and commitment inability problems. For example, increasing the credibility of commitments could embolden actors to commit to carrying out threats more often, in a way that leads to greater losses from conflict overall.
**Miscoordination.**One direction here is building AGIs that reason in more cautious ways about commitment, and take measures to mitigate the downsides from incompatible commitments. For example, we could develop [instructions for human overseers](https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk) as to what kinds of reasoning about commitments they should encourage or discourage in their (intent-aligned) AI.[[6]](#fn8y81k9gozhn) The design of this “overseer’s manual” might be improved by doing more conceptual thinking about sophisticated approaches to commitments. Examples of this kind of work include [Yudkowsky’s solution](https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness) for bargaining between agents with different standards of fairness; [surrogate goals](https://longtermrisk.org/research-agenda#42_Surrogate_goals) and Oesterheld and Conitzer’s (2021) [safe Pareto improvements](https://users.cs.duke.edu/~conitzer/safeAAMAS21.pdf); and Stastny et al.’s (2021) notion of [norm-adaptive policies](https://arxiv.org/pdf/2111.13872.pdf). It may also be helpful to consider what kinds of reasoning about commitments we should try to [prevent altogether](https://arbital.com/p/distant_SIs/) in the early stages of AGI development.
The goal of such work is not necessarily to fully solve tricky conceptual problems in bargaining. One path to impact is to improve the chances that early human-intent-aligned AI teams are in a “basin of attraction of good bargaining”. The initial conditions of their deliberation about how to bargain should be good enough to avoid locking in catastrophic commitments early on, and to avoid path-dependencies which cause deliberation to be [corrupted](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety). We return to this in our discussion of *Intent Alignment isn't Sufficient*.
Lastly, [surrogate goals](https://longtermrisk.org/spi) are a proposal for mitigating the downsides of executed threats, which might occur due to either miscoordination or informational problems. The idea is to design an AI to treat threats to carry out some benign action (e.g., simulating clouds) the same way that they treat threats against the overseer’s terminal values.
**Conflict-seeking preferences.**Consider a few ways in which AI systems might acquire CSPs. First, CSPs might be strategically useful in some training environments. Evolutionary game theorists have studied how CSPs like spite (Hamilton 1970; Possajennikov 2000; Gardner and West 2004; Forber and Smead 2016) and aggression towards out-group members (Choi and Bowles 2007) can be selected for. Analogous selection pressures could appear in AI training.[[7]](#fncd1pnn4il26) For example, selection for the agents that perform the best *relative*to opponents creates similar pressures to the evolutionary pressures hypothetically responsible for spite: Agents will have reason to sacrifice absolute performance to harm other agents, so that they can increase the chances that their relative score is highest. So, identifying and removing training environments which incentivize CSPs (while not affecting agents’ competitiveness) is one direction for intervention.
Second, CSPs might result from poor generalization from human preference data. An AI might fail to correct for biases that cause a human to behave in a more conflict-conducive way than they would actually endorse, for instance. Inferring human preferences is [hard](https://arxiv.org/pdf/1712.05812.pdf). It is especially difficult in multi-agent settings, where a preference-inferrer has to account for a preference-inferree’s models of other agents, as well as biases specific to mixed-motive settings.[[8]](#fnbbw2tkre26u)
Finally, a generic direction for preventing CSPs is developing adversarial training and interpretability methods tailored to rooting out conflict-seeking behavior.
Appendix: Full rational conflict taxonomy
-----------------------------------------
Here we present our complete taxonomy of causes of costly conflict between rational agents, and give an informal argument that it is exhaustive. (Remember that by “rational” we mean “maximizes subjective expected utility” and by “costly conflict” we mean “inefficient outcome”. ) See [here](https://drive.google.com/file/d/1rXS-_BBHKcE8r-sQY_dfs57904tNxxEa/view?usp=sharing) for a draft where we try to formalize this.
For the purposes of the informal exposition, it will be helpful to distinguish between *equilibrium-compatible*and *equilibrium-incompatible*conflict. *Equilibrium-compatible*conflicts are those which are naturally modeled as occurring in some [(Bayesian) Nash equilibrium](https://en.wikipedia.org/wiki/Bayesian_game). That is, we can model them as resulting from agents (i) knowing each others’ strategies exactly (modulo private information) and (ii) playing a best response. *Equilibrium-incompatible*conflicts cannot be modeled in this way. Note, however, that the equilibrium-compatible conditions for conflict can hold even when agents are not in equilibrium.
1. Equilibrium-compatible cases
1. Private information and no incentive/ability to disclose
2. Inability to credibly commit to a cooperative outcome
3. Indivisibility and inability to randomize
4. Don’t play efficient equilibrium even when available
2. Equilibrium-incompatible cases
1. Miscoordination
2. Perceived risk of cooperation-punishers
This breakdown is summarized as a fault tree diagram in Figure 1.

***Figure 1**:*[*Fault tree diagram*](https://en.wikipedia.org/wiki/Fault_tree_analysis) *for inefficient outcomes between rational agents.*
### Equilibrium-compatible cases
Here is an argument that items 1a-1c capture all games in which conflict occurs in every (Bayesian) Nash equilibrium. To start, consider games of complete information. Some complete information games have only inefficient equilibria. The Prisoner’s Dilemma is the canonical example. But we also know that any efficient outcome that is achievable by some convex combination of strategies and is better for each player that what they can unilaterally guarantee themselves can be attained in equilibrium, when agents are capable of conditional commitments to cooperation and correlated randomization (Kalai et al. 2010). This means that, for a game of complete information to have only inefficient equilibria, it has to be the case that either they are unable to make credible conditional commitments to an efficient profile (1b) or an efficient and individually rational outcome is only attainable with randomization (because the contested object is indivisible), but randomization isn’t possible (1c).
Even if efficiency in complete information is always possible given commitment and randomization ability, players might not have complete information. It is well-known that private information can lead to [inefficiency in equilibrium](https://en.wikipedia.org/wiki/Myerson%E2%80%93Satterthwaite_theorem), due to agents making risk-reward tradeoffs under uncertainty about their counterpart’s private information (1a). It is also necessary that agents can’t or won’t disclose their private information — we give a breakdown of reasons for nondisclosure [below](https://www.lesswrong.com/posts/cLDcKgvM6KxBhqhGq/will-agis-avoid-conflict-by-default#Reasons_agents_don_t_disclose_private_information).
This all means that a game has *no*efficient equilibria only if one of items 1a-1c holds. But it could still be the case that agents coordinate on an inefficient equilibrium, even if an efficient one is available (1d). E.g., agents might both play Hare in a Stag Hunt. (Coordinating on an equilibrium but failing to coordinate on an efficient one seems unlikely, which is why we don’t discuss it in the main text. But it isn’t ruled out by the assumptions of accurate beliefs and maximizing expected utility with respect to those beliefs alone.)
### Equilibrium-incompatible cases
This exhausts explanations of situations in which conflict happens in equilibrium. But rationality does not imply that agents are in equilibrium. How could imperfect knowledge of other players’ strategies drive conflict? There are two possibilities:
* (Item 2a) Each player is “trying” to coordinate on an efficient equilibrium, in the sense that they place nonzero credence on an efficient outcome, but they miscoordinate. For example, a game of Chicken where both players think it’s sufficiently likely that their counterpart plays Swerve that it is optimal for them to Dare.
* (Item 2b) At least one player is not “trying” to coordinate on an efficient equilibrium, in the sense that they play a strategy that is not compatible with any efficient equilibrium. For example, consider a modified Prisoner’s Dilemma where each agent can unilaterally make the other agent worse off than (Defect, Defect) (“punish”). In an open-source version of such a game, an agent might play DefectBot, eliminating the possibility of efficiency. Why might this happen, when they could instead play FairBot? The player might place some credence on their counterpart punishing FairBot, but playing defect against DefectBot. This “perceived risk of cooperation-punishers” is the only reason that agents with access to conditional commitments and uncertainty about their counterpart’s decision would play a strategy that cannot achieve efficient outcomes against any strategy. This situation is implausible, but not ruled out by minimal rationality assumptions.
### Reasons agents don’t disclose private information
Suppose that agents have private information such that nondisclosure of the information will lead to a situation in which conflict is rational, but conflict would no longer be rational if the information were disclosed.
We can decompose reasons not to disclose into reasons not to *unconditionally*disclose and reasons not to *conditionally*disclose. Here, “conditional” disclosure means “disclosure conditional on a commitment to a particular agreement by the other player”. For example, suppose my private information is (θ,ν), where θ measures my military strength, such that p(θ) is my chance of winning a war, and ν is information about secret military technology that I don’t want you to learn. A conditional commitment would be: I disclose (θ,ν), so that we can decide the outcome of the contest according to a costless lottery which I win with probability p(θ), *conditional on*a commitment from you not to use your knowledge of ν to harm me.
Here is the decomposition:
* Agents do not *unconditionally*disclose their private information. This could happen for two reasons:
+ The agents are not able to disclose their private information; e.g., it is computationally intractable.
+ The agents are able to disclose their private information, but lack the incentive to do so. Sometimes the mere possibility of unconditional disclosure is enough for unconditional disclosure to be incentivized, due to the “unraveling” argument.[[9]](#fncajf03h1r4l) But the conditions for unraveling don’t always hold, even when unconditional disclosure is feasible.
+ (We can further distinguish between incentive/ability to *partially*or *fully*disclose private information. Partial disclosure will often be incentivized when full disclosure is not. For example, the unraveling argument goes through for our example above if I can disclose θ without disclosing ν. But restricting the amount of information disclosed may be difficult.)
* Agents do not *conditionally*disclose their private information. Unlike unconditional disclosure, which may not be incentivized even if it is feasible, there is always an efficient equilibrium available to agents with conditional disclosure and commitment ability ([DiGiovanni and Clifton 2022](https://arxiv.org/pdf/2204.03484.pdf)). So failure to conditionally disclose private information must be due to its infeasibility.
References
----------
Abreu, Dilip, and Rajiv Sethi. 2003. “Evolutionary Stability in a Reputational Model of Bargaining.” *Games and Economic Behavior* 44 (2): 195–216.
Babcock, Linda, and George Loewenstein. 1997. “Explaining Bargaining Impasse: The Role of Self-Serving Biases.” *The Journal of Economic Perspectives: A Journal of the American Economic Association* 11 (1): 109–26.
Bazerman, Max H., and Margaret A. Neale. 1986. “Heuristics in Negotiation: Limitations to Effective Dispute Resolution.” In *Judgment and Decision Making: An Interdisciplinary Reader , (pp*, edited by Hal R. Arkes, 818:311–21. New York, NY, US: Cambridge University Press, xiv.
Choi, Jung-Kyoo, and Samuel Bowles. 2007. “The Coevolution of Parochial Altruism and War.” *Science* 318 (5850): 636–40.
DiGiovanni, Anthony, and Jesse Clifton. 2022. “Commitment Games with Conditional Information Revelation.” *arXiv [cs.GT]*. arXiv. <http://arxiv.org/abs/2204.03484>[.](http://paperpile.com/b/IDkN1b/M3m8l)
Fearon, James D. 1995. “Rationalist Explanations for War.” *International Organization* 49 (3): 379–414.
Forber, Patrick, and Rory Smead. 2016. “The Evolution of Spite, Recognition, and Morality.” *Philosophy of Science* 83 (5): 884–96.
Gardner, A., and S. A. West. 2004. “Spite and the Scale of Competition.” *Journal of Evolutionary Biology* 17 (6): 1195–1203.
Grossman, Sanford J. 1981. “The Informational Role of Warranties and Private Disclosure about Product Quality.” *The Journal of Law and Economics* 24 (3): 461–83.
Hamilton, W. D. 1970. “Selfish and Spiteful Behaviour in an Evolutionary Model.” *Nature* 228 (5277): 1218–20.
Milgrom, Paul R. 1981. “Good News and Bad News: Representation Theorems and Applications.” *The Bell Journal of Economics* 12 (2): 380–91.
Oesterheld, Caspar, and Vincent Conitzer. 2021. “Safe Pareto Improvements for Delegated Game Playing.” In *Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems*, 983–91. andrew.cmu.edu.
Possajennikov, Alex. 2000. “On the Evolutionary Stability of Altruistic and Spiteful Preferences.” *Journal of Economic Behavior & Organization* 42 (1): 125–29.
Shulman. 2010. “Omohundro’s ‘basic AI Drives’ and Catastrophic Risks.” *Manuscript.(singinst. Org/upload/ai-Resource-Drives. Pdf*. <http://www.hdjkn.com/files/BasicAIDrives.pdf>[.](http://paperpile.com/b/IDkN1b/mQoMB)
Stastny, Julian, Maxime Riché, Alexander Lyzhov, Johannes Treutlein, Allan Dafoe, and Jesse Clifton. 2021. “Normative Disagreement as a Challenge for Cooperative AI.” *arXiv [cs.MA]*. arXiv. <http://arxiv.org/abs/2111.13872>[.](http://paperpile.com/b/cH3av9/YA8Xo)
1. **[^](#fnrefnpuifxzh2ik)** Defining conflict in terms of “inefficiency” is arguably a limitation of this approach, since it doesn’t hone in on particularlybad conflicts, and doesn’t distinguish between cases where agents behaved inefficiently but still managed to avoid catastrophic conflict (e.g., agents who implement [Yudkowsky's approach to bargaining](https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness)).
2. **[^](#fnrefovkt5qfck7g)** The choice of decision theory (CDT, EDT, UDT, etc) used in computing these expectations shouldn’t change anything qualitative about the taxonomy. However, some decision theories may avoid conflict more often than others. (E.g., EDT cooperates in the Prisoner’s Dilemma against sufficiently correlated counterparts, while CDT doesn’t.)
3. **[^](#fnref5i6vzx2waqq)** Informational and commitment inability problems are two of the three “rationalist explanations for war”. Fearon’s third rationalist explanation for war is *issue indivisibility*: For instance, it may be rational for two factions to go to war over an indivisible holy site, if there are no side payments that can compensate either faction for losing the site. However, we claim that if rational agents engage in conflict due to issue indivisibility, then conflict is *not*inefficient for them. We’ll look at such cases in the next subsection. Lastly, our taxonomy includes miscoordination, which Fearon’s taxonomy doesn’t. Fearon assumes that agents coordinate on a Pareto optimal equilibrium when at least one exists. We only assume that agents maximize subjective expected utility, which need not result in them playing a Nash equilibrium.
4. **[^](#fnref2291lxhy15u)** These are referred to as “commitment problems” in the international relations literature, but we say “commitment *inability*problems” to clearly distinguish from the problem of “incompatible commitments” that we also discuss.
5. **[^](#fnrefvea9ykm5e9)** Abreu and Sethi (2003) show how a subpopulation of agents with such preferences can be evolutionarily stable in a simple model of bargaining, when there are costs to being more “rational”.
6. **[^](#fnref8y81k9gozhn)**Credit to Carl Shulman and Lukas Gloor for emphasizing this intervention.
7. **[^](#fnrefcd1pnn4il26)** For this reason, some of us at CLR have recently been [looking into](https://arxiv.org/abs/2207.03178) evolutionary explanations for preferences like spite and whether these explanations might apply to some kinds of AI training.
8. **[^](#fnrefbbw2tkre26u)** Humans have biases specific to multi-agent settings (e.g., the “fixed-pie error” (Bazerman and Neale 1986) and “self-serving bias” (Babcock and Loewenstein 1997)). All of this makes it plausible that AI systems could mistakenly attribute human behavior to conflict-seeking preferences (e.g., mistaking fixed-pie error for spite), even when they have good models of human preferences in other settings.
9. **[^](#fnrefcajf03h1r4l)** The unraveling argument goes roughly like this: Players assume that other players are weak, when they haven’t revealed their private information. The players with the “strongest” private information (e.g., highest strength in combat, or resolve, or commitment credibility) will reveal their private information, because this will cause weaker agents to give in without a fight. Then, because the next-strongest players are known not to be the strongest, they have nothing to gain from hiding their information, and prefer to reveal so that counterpart agents who are sufficiently weak will decide not to fight. This happens with the next-strongest players, and so on (Grossman 1981; Milgrom 1981).
|
e32ef015-c931-4e01-8a63-e2019857685f
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Alignment Newsletter #53
Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter.
Cody Wild is now contributing summaries to the newsletter!
**Highlights**
--------------
[Alignment Newsletter One Year Retrospective](https://www.alignmentforum.org/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective) *(Rohin Shah)*: The Alignment Newsletter is one year old! I've written a retrospective of the newsletter's impact over the last year, with a lot of open questions about what the newsletter should look like in the future. Please help me figure out the answers by **taking this 3-minute [survey](https://docs.google.com/forms/d/e/1FAIpQLSepm8ATtun9pvLGL9MXhk6Z59hO14huEVS6qgqtl3KS6n5SIQ/viewform?usp=sf_link)**, and if you're feeling particularly generous with your time, read the retrospective and tell me your opinions in the comments!
[Are Deep Neural Networks Dramatically Overfitted?](https://lilianweng.github.io/lil-log/2019/03/14/are-deep-neural-networks-dramatically-overfitted.html) *(Lilian Weng)*: The concepts of underfitting and overfitting, and their relation to the bias-variance tradeoff, are fundamental to standard machine learning theory. Roughly, for a fixed amount of data, there is an optimal model complexity for learning from that data: any less complex and the model won't be able to fit the data, and any more complex and it will overfit to noise in the data. This means that as you increase model complexity, training error will go down to zero, but validation error will go down and then start turning back up once the model is overfitting.
We know that neural networks are much more expressive than the theory would predict is optimal, both from theorems showing that neural networks can learn any function (including one that provides a rather tight bound on number of parameters), as well as a [paper](https://arxiv.org/abs/1611.03530) showing that neural nets can learn random noise. Yet they work well in practice, achieving good within-distribution generalization.
The post starts with a brief summary of topics that readers of this newsletter are probably familiar with: Occam's razor, the Minimum Description Length principle, Kolmogorov Complexity, and Solomonoff Induction. If you don't know these, I strongly recommend learning them if you care about understanding within-distribution generalization. The post then looks at a few recent informative papers, and tries to reproduce them.
The [first one](https://arxiv.org/abs/1812.11118) is the most surprising: they find that as you increase the model complexity, your validation error goes down and then back up, as expected, but then at some point it enters a new regime and goes down again. However, the author notes that you have to set up the experiments just right to get the smooth curves the paper got, and her own attempts at reproducing the result are not nearly as dramatic.
Another [paper](https://arxiv.org/abs/1804.08838) measures the difficulty of a task based on its "intrinsic dimension", which Cody has summarized separately in this newsletter.
The [last paper](https://arxiv.org/abs/1902.01996) looks at what happens if you (a) reset some layer's parameters to the initial parameters and (b) randomize some layer's parameters. They find that randomizing always destroys performance, but resetting to initial parameters doesn't make much of a difference for later layers, while being bad for earlier layers. This was easy to reproduce, and the findings reemerge very clearly.
**Rohin's opinion:** I'm very interested in this problem, and this post does a great job of introducing it and summarizing some of the recent work. I especially appreciated the attempts at reproducing the results.
On the papers themselves, a regime where you already have ~zero training error but validation error goes *down* as you increase model expressivity is exceedingly strange. Skimming the paper, it seems that the idea is that in the normal ML regime, you are only minimizing training error -- but once you can get the training error to zero, you can then optimize for the "simplest" model with zero training error, which by Occam's Razor-style arguments should be the best one and lead to better validation performance. This makes sense in the theoretical model that they use, but it's not clear to me how this applies to neural nets, where you aren't explicitly optimizing for simplicity after getting zero training error. (Techniques like regularization don't result in one-after-the-other optimization -- you're optimizing for both simplicity and low training error simultaneously, so you wouldn't expect this critical point at which you enter a new regime.) So I still don't understand these results. That said, given the difficulty with reproducing them, I'm not going to put too much weight on these results now.
I tried to predict the results of the last paper and correctly predicted that randomizing would always destroy performance, but predicted that resetting to initialization would be okay for *early* layers instead of later layers. I had a couple of reasons for the wrong prediction. First, there had been a few papers that showed good results even with random features, suggesting the initial layers aren't too important, and so maybe don't get updated too much. Second, the gradient of the loss w.r.t later layers requires only a few backpropagation steps, and so probably provides a clear, consistent direction moving it far away from the initial configuration, while the gradient w.r.t earlier layers factors through the later layers which may have weird or wrong values and so might push in an unusual direction that might get cancelled out across multiple gradient updates. I skimmed the paper and it doesn't really speculate on why this happens, and my thoughts still seem reasonable to me, so this is another fact that I have yet to explain.
**Technical AI alignment**
==========================
### **Technical agendas and prioritization**
[Summary of the Technical Safety Workshop](https://www.youtube.com/watch?v=Tqu4cwne1vA) *(David Krueger)* (summarized by Richard): David identifies two broad types of AI safety work: human in the loop approaches, and theory approaches. A notable subset of the former category is methods which improve our ability to give advanced systems meaningful feedback - this includes debate, IDA, and recursive reward modeling. CIRL and CAIS are also human-in-the-loop. Meanwhile the theory category includes MIRI's work on agent foundations; side effect metrics; and verified boxing.
### **Iterated amplification**
[A Concrete Proposal for Adversarial IDA](https://www.alignmentforum.org/posts/jYvm4mmjvGHcPXtGL/a-concrete-proposal-for-adversarial-ida) *(Evan Hubinger)*: This post presents a method to use an adversary to improve the sample efficiency (with respect to human feedback) of iterated amplification. The key idea is that when a question is decomposed into subquestions, the adversary is used to predict which subquestion the agent will do poorly on, and the human is only asked to resolve that subquestion. In addition to improving sample efficiency by only asking relevant questions, the resulting adversary can also be used for interpretability: for any question-answer pair, the adversary can pick out specific subquestions in the tree that are particularly likely to contain errors, which can then be reviewed.
**Rohin's opinion:** I like the idea, but the math in the post is quite hard to read (mainly due to the lack of exposition). The post also has separate procedures for amplification, distillation and iteration; I think they can be collapsed into a single more efficient procedure, which I wrote about in this [comment](https://www.alignmentforum.org/posts/jYvm4mmjvGHcPXtGL/a-concrete-proposal-for-adversarial-ida#KdqEJNmRH6vFx5NZg).
### **Learning human intent**
[Conditional revealed preference](https://unstableontology.com/2019/04/04/conditional-revealed-preference/) *(Jessica Taylor)*: When backing out preferences by looking at people's actions, you may find that even though they say they are optimizing for X, their actions are better explained as optimizing for Y. This is better than relying on what they say, at least if you want to predict what they will do in the future. However, all such inferences are specific to the current context. For example, you may infer that schools are "about" dealing with authoritarian work environments, as opposed to learning -- but maybe this is because everyone who designs schools doesn't realize what the most effective methods of teaching-for-learning are, and if they were convinced that some other method was better for learning they would switch to that. So, in order to figure out what people "really want", we need to see not only what they do in the current context, but also what they would do in a range of alternative scenarios.
**Rohin's opinion:** The general point here, which comes up pretty often, is that any information you get about "what humans want" is going to be specific to the context in which you elicit that information. This post makes that point when the information you get is the actions that people take. Some other instances of this point:
- [Inverse Reward Design](https://arxiv.org/abs/1711.02827) notes that a human-provided reward function should be treated as *specific to the training environment*, instead of as a description of good behavior in all possible environments.
- [CP-Nets](https://www.cs.toronto.edu/~cebly/Papers/CPnets.pdf) are based on the point that when a human says "I want X" it is not a statement that is meant to hold in all possible contexts. They propose very weak semantics, where "I want X" means "holding every other aspect of the world constant, it would be better for X to be present than for it not to be present".
- [Wei Dai's point](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety) ([AN #37](https://mailchi.mp/6eb8190e723f/alignment-newsletter-37)) that humans likely have adversarial examples, and we should not expect preferences to generalize under distribution shift.
- Stuart Armstrong and Paul Christiano have made or addressed this point in many of their posts.
[Defeating Goodhart and the closest unblocked strategy problem](https://www.alignmentforum.org/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy) *(Stuart Armstrong)*: One issue with the idea of [reward uncertainty](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ZiLLxaLB5CCofrzPp) ([AN #42](https://mailchi.mp/f6488137d76c/alignment-newsletter-42)) based on a model of uncertainty that we specify is that we tend to severely underestimate how uncertain we should be. This post makes the point that we could try to build an AI system that starts with this estimate of our uncertainty, but then corrects the estimate based on its understanding of humans. For example, if it notices that humans tend to become much more uncertain when presented with some crucial consideration, it could realize that its estimate probably needs to be widened significantly.
**Rohin's opinion:** So far, this is an idea that hasn't been turned into a proposal yet, so it's hard to evaluate. The most obvious implementation (to me) would involve an explicit estimate of reward uncertainty, and then an explicit model for how to update that uncertainty (that would not be Bayes Rule, since that would narrow the uncertainty over time). At this point it's not clear to me why we're even using the expected utility formalism; it feels like adding epicycles in order to get a single particular behavior that breaks other things. You could also make the argument that there will be misspecification of the model of how update the uncertainty. But again, this is just the most obvious completion of the idea; it's plausible that there's a different way of doing this that's better.
[Parenting: Safe Reinforcement Learning from Human Input](http://arxiv.org/abs/1902.06766) *(Christopher Frye et al)*
### **Interpretability**
[Attention is not Explanation](http://arxiv.org/abs/1902.10186) *(Sarthak Jain et al)* (summarized by Richard): This paper explores the usefulness of attention weights in interpreting neural networks' performance on NLP tasks. The authors present two findings: firstly, that attention weights are only weakly correlated with other metrics of word importance; and secondly, that there often exist adversarially-generated attention weights which are totally different from the learned weights, but which still lead to the same outputs. They conclude that these results undermine the explanatory relevance of attention weights.
**Richard's opinion:** I like this type of investigation, but don't find their actual conclusions compelling. In particular, it doesn't matter whether "meaningless" adversarial attention weights can lead to the same classifications, as long as the ones actually learned by the system are interpretable. Also, the lack of correlation between attention weights and other methods could be explained either by attention weights being much worse than the other methods, or much better, or merely useful for different purposes.
[Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations](https://arxiv.org/abs/1703.03717) *(Andrew Ross et al)*
### **Adversarial examples**
[The LogBarrier adversarial attack: making effective use of decision boundary information](https://arxiv.org/abs/1903.10396) *(Chris Finlay et al)* (summarized by Dan H): Rather than maximizing the loss of a model given a perturbation budget, this paper minimizes the perturbation size subject to the constraint that the model misclassify the example. This misclassification constraint is enforced by adding a logarithmic barrier to the objective, which they prevent from causing a loss explosion through through a few clever tricks. Their attack appears to be faster than the Carlini-Wagner attack.
**Read more:** [The code is here.](https://github.com/cfinlay/logbarrier)
### **Robustness**
[Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks](https://arxiv.org/pdf/1903.11680.pdf) *(Mingchen Li et al.)* (summarized by Dan H): Previous [empirical](https://arxiv.org/pdf/1901.09960.pdf#page=2&zoom=100,0,81) [papers](https://papers.nips.cc/paper/8094-generalized-cross-entropy-loss-for-training-deep-neural-networks-with-noisy-labels.pdf) have shown that finding ways to decrease training time greatly improves robustness to label corruptions, but to my knowledge this is the first theoretical treatment.
**Other progress in AI**
========================
### **Deep learning**
[Measuring the Intrinsic Dimension of Objective Landscapes](https://eng.uber.com/intrinsic-dimension/) *(Chunyuan Li et al)* (summarized by Cody): This paper proposes and defines a quantity called "intrinsic dimension", a geometrically-informed metric of how many degrees of freedom are actually needed to train a given model on a given dataset. They calculate this by picking a set of random directions that span some subspace of dimension d, and taking gradient steps only along that lower-dimensional subspace. They consider the intrinsic dimension of a model and a dataset to be the smallest value d at which performance reaches 90% of a baseline, normally trained model on the dataset. The geometric intuition of this approach is that the dimensionality of parameter space can be, by definition, split into intrinsic dimension and its codimension, the dimension of the solution set. In this framing, higher solution set dimension (and lower intrinsic dimension) corresponds to proportionally more of the search space containing reasonable solution points, and therefore a situation where a learning agent will be more likely to find such a solution point. There are some interesting observations here that correspond with our intuitions about model trainability: on MNIST, intrinsic dimensionality for a CNN is lower than for a fully connected network, but if you randomize pixel locations, CNN's intrinsic dimension shoots up above FC, matching the intuition that CNNs are appropriate when their assumption of local structure holds.
**Cody's opinion:** Overall, I find this an interesting and well-articulated paper, and am curious to see future work that addresses some of the extrapolations and claims implied by this paper, particularly their claim, surprising relative to my intuitions, that increasing n\_parameters will, maybe monotonically, reduce difficulty of training, because it simply increases the dimensionality of the solution set. I'm also not sure how to feel about their simply asserting that a solution exists when a network reaches 90% of baselines performance, since we may care about that "last mile" performance and it might also be the harder to reach.
**Read more:** [Paper](https://arxiv.org/abs/1804.08838)
|
40d1ab09-fa4a-45b5-9e5f-02c5ee042915
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Dissolving the Question
Today's post, Dissolving the Question was originally published on 08 March 2008. A summary (taken from the LW wiki):
> Proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it. Ask yourself, as a question of cognitive science, why do humans make that mistake?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Variable Question Fallacies, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
743e54a3-fd36-472b-83f9-ef20c1c93260
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Updating and Editing Factual Knowledge in Language Models
Language Models go out of date. Is it possible to stop this from happening by making intrinsic alterations to the network itself?
* This article was written around Jan 2024, the exact references are surely out of date but the core ideas still hold.
Among the many analogies used to understand Language Models is the idea that LMs are “Blurry JPEGs” of their training data. The intuition here is that LMs compress the information they see during training in their weights. The study of knowledge editing and updating in Language Modeling takes a similar perspective — we believe that Language Models “know” certain facts, and hope that in some way we can change the “facts” that LMs have learned without retraining from scratch. If we can inject new knowledge or update knowledge in a pre-trained LM, it would prevent these systems from becoming outdated over time and allow them to handle data from specialized domains that they didn’t see much of during training.
I went over 15 of the more recent and influential papers in this area to get a sense of how the field approaches this problem. In this post, I detail some of the common trends and ideas, as well as give my opinions on the area as a whole.
Do Language Models actually “know” facts:
Language Models of a BERT scale contain relational knowledge from their pretraining data, the authors who studied this defined “knowing a relational fact” (subject, relation, object) such as (Dante, born-in, Florence) as whether or not the LM can successfully predict masked objects in cloze sentences such as “Dante was born in ” expressing that fact. Other work prompts and expects the continuation to be the correct answer, some work uses many prompts with the same meaning to give the model more chances to show that it “knows” something². In general when all the work talks about “knowing a fact”, they usually refer to answering a question on that fact correctly. In this sense, a Language Model certainly “knows” things about the real w
|
fd4dc525-fc59-4b22-99ce-3544bb12d134
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The More Power At Stake, The Stronger Instrumental Convergence Gets For Optimal Policies
Edit, 5/16/23: I think this post is beautiful, correct in its narrow technical claims, and practically irrelevant to alignment. This post presents a cripplingly unrealistic picture of the role of reward functions in reinforcement learning. Reward functions are not "goals", real-world policies are not "optimal", and the mechanistic function of reward is (usually) to provide policy gradients to update the policy network.
I expect this post to harm your alignment research intuitions unless you've already inoculated yourself by deeply internalizing and understanding Reward is not the optimization target. If you're going to read one alignment post I've written, read that one.
Follow-up work (Parametrically retargetable decision-makers tend to seek power) moved away from optimal policies and treated reward functions more realistically.
----------------------------------------
Environmental Structure Can Cause Instrumental Convergence explains how power-seeking incentives can arise because there are simply many more ways for power-seeking to be optimal, than for it not to be optimal. Colloquially, there are lots of ways for "get money and take over the world" to be part of an optimal policy, but relatively few ways for "die immediately" to be optimal. (And here, each "way something can be optimal" is a reward function which makes that thing optimal.)
But how strong is this effect, quantitatively?
Intuitively, it seems like there are twice as many ways for Wait! to be optimal (in the undiscounted setting, where we don't care about intermediate states).
In Environmental Structure Can Cause Instrumental Convergence, I speculated that we should be able to get quantitative lower bounds on how many objectives incentivize power-seeking actions:
> Definition. At state s, most reward functions incentivize action a over action a′ when for all reward functions R, at least half of the orbit agrees that a has at least as much action value as a′ does at state s.
>
> ...
>
> W
|
2fa86e66-b4d9-4ca7-81cb-cb87f0755269
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment
This is a talk I gave at the recent AI Safety Europe Retreat (AISER) on my research on obtaining insights from the cognitive science of empathy and applying them to RL agents and LLMs.
**Talk link:** <https://clipchamp.com/watch/6c0kTETRqBc>
**Slides link:** <https://bit.ly/3ZFmjN8>
**Talk description:** I begin by presenting a short review on the cognitive science of empathy as a Perception-Action Mechanism (PAM) which relies on self-other overlap at the neuronal level. I continue by presenting the theory of change of this research direction by arguing that inducing self-other overlap as empathy is model agnostic and that it has the potential to avert AI x-risk and be sub-agent stable in the limit. Then I present experimental evidence of the emergence of PAM in RL agents and present a way of inducing PAM in RL agents. I end the talk by discussing how this paradigm could be extended to LLMs.
**Acknowledgements:** I am thankful for Dr. Bogdan Ionut-Cirstea for inspiring me to look into this neglected research direction, the Long-Term Future Fund for funding the initial deep dive into the literature, and for Center for AI Safety for funding half of this research as part of their Student Researcher programme. Last but not least, I want to thank Dr. Matthias Rolf for supervising me and providing good structure and guidance. The review on the cognitive science of empathy is adapted from a talk given by Christian Keysers from the Netherlands Institute for Neuroscience.
|
9c71cdd2-3b79-4da6-a3a4-43c5a448f9c2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Quantifying wisdom
So we know that many smart people make stupid (at least in retrospect) decisions. What these people seem to be lacking, at least at the moment they make a poor decision, is wisdom ("judicious application of knowledge"). More from Wikipedia:
It is a deep understanding and realization of people, things, events or situations, resulting in the ability to apply perceptions, judgements and actions in keeping with this understanding. It often requires control of one's emotional reactions (the "passions") so that universal principles, reason and knowledge prevail to determine one's actions.
From Psychology Today:
It can be difficult to define Wisdom, but people generally recognize it when they encounter it. Psychologists pretty much agree it involves an integration of knowledge, experience, and deep understanding that incorporates tolerance for the uncertainties of life as well as its ups and downs. There's an awareness of how things play out over time, and it confers a sense of balance.
Wise people generally share an optimism that life's problems can be solved and experience a certain amount of calm in facing difficult decisions. Intelligence—if only anyone could figure out exactly what it is—may be necessary for wisdom, but it definitely isn't sufficient; an ability to see the big picture, a sense of proportion, and considerable introspection also contribute to its development.
From SEP:
(1) wisdom as epistemic humility, (2) wisdom as epistemic accuracy, (3) wisdom as knowledge, and (4) wisdom as knowledge and action.
Clearly, if one created a human-level AI, one would want it to "choose wisely". However, as human examples show, wisdom does not come for free with intelligence. Actually, we usually don't trust intelligent people nearly as much as we trust wise ones (or appearing to be wise, at any rate). We don't trust them to make good decisions, because they might be too smart for their own good. Speaking of artificial intelligence, one (informal) quality we'd e
|
c6fec5f7-473e-4b9c-9ae5-5c9bf3dc6df8
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
The Power of Intelligence
In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t *look* dangerous.
Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.
Then came the Day of the Squishy Things.
They had no armor. They had no claws. They had no venoms.
If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.
In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.
And as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, *technically* it’s all one universe, *technically* the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.
Even if Squishy Things could *someday* evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, *technically* a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; *no one* could have that much sex.
Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.
I have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. Often they look up the keyword “intelligence” and retrieve the concept *booksmarts* – a mental image of the Grand Master chessplayer who can’t get a date, or a college professor who can’t survive outside academia.
“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first *Homo sapiens* had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species *imagined* money into existence, and it exists – for *us,* not mice or wasps – because we go on believing in it.
I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in *The Rain Man*, it is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.
People – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be *commercialized.* This is what we call a framing problem.
Or maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish *all these things at once* seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.
And so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.
And well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even *know* what our real problems are.
But meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.
Well, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.
The post [The Power of Intelligence](https://intelligence.org/2007/07/10/the-power-of-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
|
964dc2ec-033f-430a-ae9f-ba6e634827fb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Types of subjective welfare
Summary
I describe and review four broad potential types of subjective welfare: 1) hedonic states, i.e. pleasure and unpleasantness, 2) felt desires, both appetitive and aversive, 3) belief-like preferences, i.e. preferences as judgements or beliefs about value, and 4) choice-based preferences, i.e. what we choose or would choose. My key takeaways are the following:
1. Belief-like preferences and choice-based preferences seem unlikely to be generally comparable, and even comparisons between humans can be problematic (more).
2. Hedonic states, felt desires and belief-like preferences all plausibly matter intrinsically, possibly together in a morally pluralistic theory, or unified as subjective appearances of value or reasons or even under belief-like preferences (more).
3. Choice-based preferences seem not to matter much or at all intrinsically (more).
4. The types of welfare are dissociable, so measuring one in terms of another is likely to misweigh it relative to more intuitive direct standards and risks discounting plausible moral patients altogether (more).
5. There are multiple potentially important variations on the types of welfare (more).
Acknowledgements
Thanks to Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.
The four types
It appears to me that subjective welfare — welfare whose value depends only or primarily[1] on the perspectives or mental states of those who hold them — can be roughly categorized into one of the following four types based on how they are realized: hedonic states, felt desires, belief-like preferences and choice-based preferences. To summarize, there’s welfare as feelings (hedonic states and felt desires), welfare as beliefs about value (belief-like preferences) and welfare as choices (choice-based preferences). I will define, illustrate and elaborate on these types below.
For some discussion of my choices of terminology, see the following footnote.[2]
Hedonic states
Hedonic state
|
122b4b11-a688-4bfa-8c16-1d35e878ab57
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning?
A new preprint, Lu et al., published last month, has important implications for AI risk if true. It essentially suggests that emergent capabilities of current LLMs are (with some exceptions) mediated by a model's ability to do in-context learning. If so, emergent capabilities may be less unexpected, given that improvements to in-context learning (e.g., via instruction tuning) are relatively predictable.
Wei et al. defines "emergent ability" like so: "An ability is emergent if it is not present in smaller models but is present in larger models." So you can have sudden "jumps" in performance on various tasks as you scale models up, e.g. (from Wei et al.):
Lu et al. is saying, yes, but that is not because the larger models learned to do those specific tasks better, or learned to become better generic reasoners. It is because the larger models learned to follow instructions better, partly from scaling up, and partly being trained on data sets of instructions and instruction-following ("instruction tuning").
The methodology: "We conduct rigorous tests on a set of 18 models, encompassing a parameter range from 60 million to 175 billion parameters, across a comprehensive set of 22 tasks. Through an extensive series of over 1,000 experiments, we provide compelling evidence that emergent abilities can primarily be ascribed to in-context learning."
The paper finds that, when accounting for the "biasing factors", only 2 of 14 previously "emergent" tasks actually show emergence. The blue and yellow lines are instruction-tuned models evaluated via few-shot prompting, showing emergence. The purple and green lines are non-instruction-tuned models evaluated via zero-shot prompting, generally showing no emergence.
They write:
> Ultimately, this absence of evidence for emergence represents a significant step towards instilling trust in language models and leveraging their abilities with confidence, as it is indicative of the complete lack of latent hazardous abilities in LLMs,
|
7d71bb5e-1b62-429e-a48a-487d4c0b4830
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Podcast with Divia Eden on operant conditioning
I interviewed Divia Eden for 2.5 hours about operant conditioning on my side podcast, The Filan Cabinet. I think it's both an interesting conversation in its own right, as well as being kinda related to human rationality as well as AI and AI alignment. (I should also say I like the rest of the podcast - I expect the Carrick Flynn interview to be the most interesting to this crowd, but there are a few others I think are cool). Below is the copy I wrote for the episode.
----------------------------------------
In this episode, I speak with Divia Eden about operant conditioning, and how relevant it is to human and non-human animal behaviour. Topics we cover include:
* How close are we to teaching grammar to dogs?
* What are the important differences between human and dog cognition?
* How important are unmodelled "trainer effects" in dog training?
* Why do people underrate positive reinforcement?
* How does operant conditioning relate to attachment theory?
* How much does successful dog training rely on the trainer being reinforced by the dog?
* Why is game theory so fake?
* Is everything really just about calmness?
|
40211372-14d7-4f91-b69d-89775240b7d2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nuclear War, Map and Territory, Values | Guild of the Rose Newsletter, May 2024
Community Updates
Upcoming events:
* Naturalism event every week on Thursdays
* This Interest Group discusses the 'Naturalism' sequence by Logan Strohl, practicing the techniques it describes.
* Meaning and Spirituality event every other week on Tuesdays
* This Interest Group concerns itself with determining how we can help each other live more meaningful and rich lives.
* Social Butterflies event every week on Tuesdays
* This Interest Group discusses social connections and how to improve them. Often, this will involve examining some material on relationships and discussing how to apply it.
You can add all the events to your calendar by clicking here.
You can join our events and conversations by creating a free Guild account!
Workshop Updates
New and revised workshop content
This month at our Denver chapter of the Guild, Guild member Alex Fauble ran an in-person Guild workshop on our Emergency Preparedness workshop. In his presentation, Alex makes the case that nuclear war is the riskiest disaster. Given its wide-ranging effects on everything from logistics to breathable air, preparing for nuclear war also prepares you for nearly any other disaster or emergency.
The Emergency Preparedness workshop will help you take the most valuable steps to prepare for the likeliest dangers, culminating with stocking your own emergency bag for quick response. Emergencies happen to everyone eventually, so you had best be ready!
To start using this workshop with our skill tree, create a free account!
Skill of the Month
Here is an example of a skill from our Skill Tree. Use this opportunity to do the skill and start making changes in your life!
----------------------------------------
[Empiricist] **Map and Territory** - 1 XP
1. Read one of the Map and Territory articles. (Choose one at random, don't just pick the first one.)
2. Write an informal explanation of the concepts described by the article, and briefly connect it to your life in some way. (Fo
|
dbd10088-b55e-4f80-b2a0-d0e446562b7d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Survey Articles: A justification
There seems to be a growing consensus among the community that while Less Wrong is great at improving epistemic rationality, it is rather lacking when it comes to resources for instrumental rationality. I've been thinking about how to address this. This can be very hard because many of the questions most important to instrumental rationality lack an objective answer and depend heavily on individual circumstance. Consider for example the question, "How do I become a more interesting person?", that is the first survey article I've published. One person might easily have the resources to go travelling and gain new experiences, while another person might be prevented by their financial situation. One person may enjoy the process of broadening their experience by reading, while another may simply detest books. Ignoring these individual circumstances will lead to much of the advice being unsuitable
It therefore seems that in a general resource, that is forced by its very nature to ignore individual circumstances, that the best response would be to gather together as many ideas as possible. It is hoped that each rationalist has the capacity to critically examine each suggestion that is proposed and reject those that would be counterproductive. This differs from a standard list article as, instead of limiting itself to an arbitrary number of ideas, or only using ideas thought of by the author, I have made a comprehensive list and taken ideas from different sources. Taking ideas from different sources is extremely important - a single person can only possess so much creativity. It also decreases the influence of the author's subjective point of view - I might never have said something myself, but I might be willing to include it in a list of ideas. Another problem with lists is that if they are wordy, they take a long time to read through, while if they are concise, they may be misunderstood. Summarising whilst linking to a source means that extra detail is available for th
|
115d0f9a-22d4-46a1-8299-e236a095c5d0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Suggestion: Implement a Change My View Feature
I know that the devs are pretty busy at the moment, and won't be able to do anything like this for a long time, but if I don't post this now, I'll probably forget it.
Change My View: is an extremely popular sub on Reddit and the responses tend to be pretty good. However, there are some questions that rationalists might have that can't be asked there due to lack of knowledge. Just off the top of my head, these might include:
* CMV: I believe that we should two box in Newcomb's problem
* CMV: I believe that hypothetical problems tend to be too abstract to be worth discussing
* CMV: I believe that philosophy does not have much value for discovering the truth
Just as in Change My View, you would be expected to elaborate on your view.
The best way to implement this would probably be to have a special cmv tag. These posts would then have a special option to allow the OP to award a delta when their view was changed (we could also copy this suggestion of big and little deltas for big and little changes respectively). I think that deltas would be good as they provide incentive for people to say something that might change a persons view and they also encourage a person to acknowledge when their view has changed.
We could consider importing some of the rules or norms of change my view for these threads, ie:
* Top level responses should challenge at least one aspect of the view or ask for a clarification
* You must be open to changing the view if the evidence points that direction
* You must personally hold the view and not be playing devil's advocate or neutral
* The person who wants their view changed should be active in the thread
* No soapboxing
I'll admit that someone could create a Change My View thread already, but I believe that implementing this feature would make people more aware of this and nudge them towards actually doing putting their beliefs on the line. It would also allow people to filter these threads if they felt that there were too many of t
|
20fb5e35-e9a0-41ea-9dcf-35eaab73f202
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Clarifying what ELK is trying to achieve
(Basic familiarity with [ELK](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) required.)
I noticed that some people who haven’t thought that much about ELK yet (including my former self) are confused about (1) why the ELK setup is structured how it is structured and what we want to achieve, and relatedly (2) what's the underlying problem behind the "human simulator".
**TL;DR: The problem with the "human simulator" isn't that the AI wants to "look good to a human" instead of "correctly answering questions". The "human simulator" is just one specific example where the reporter learns to reason on its own, instead of looking at the predictor. When a reporter reasons on its own from answering human-level questions, its capabilities won't generalize to harder questions. This is why we need a predictor with a superhuman world model.**
**The primary goal of ELK isn’t that we want to train a specific*****objective*** **into the reporter, we want the reporter to have a certain*****structure*****, the direct-translator structure.**
(EDIT: As I'm using the term "human simulator", I mean the class of AIs that reason very similar to humans. It is possible that other people rather mean the broader class of self-reasoning reporters, even if the reporter reasons with different concepts than humans, but I'm not sure. Let me know if most other people (or just Paul) use the term "human simulator" consistently different than me.)
**The confusion: Thinking the problem is what the reporter wants**
------------------------------------------------------------------
As said, I think some people mistakenly think the difficulty of getting a "direct translator" vs a "human simulator", is that the AI may learn "look good to humans" rather than "answer questions honestly" as its objective, and that for some reason the former is fundamentally more likely by default than the latter.
That isn't right. It's not that the human simulator wants to look good to humans, it just reasons on its own to find answers, similar to how humans reason.
The problem is that in the ELK setup this kind of reporter reasoning for itself may be much simpler than finding the correct way to extract answers from the predictor.
While making sure that the AI doesn't just learn to "look good to humans" as its objective is a very important and difficult problem in AI safety, and possibly (though not necessarily) a problem we still need to solve for ELK, it isn't the main problem we are trying to solve right now.
But we aren’t primarily trying to train the reporter to have the right objective, we try to train it to have the right structure.
Let me explain...
**Why is the ELK setup the way it is?**
---------------------------------------
Forget for a second what you know about ELK.
You want to create an AI that is superhuman at answering questions, i.e. can answer questions humans cannot find an answer for (without years of science or so).
So let's just try to train an AI to answer questions honestly, and we train it on a dataset of (q,a) pairs, where "q" is a question and "a" is the correct answer to the question that humans have determined. (So we don’t have a predictor like in ELK, just train question answering directly.)
(EDIT: Assume the training dataset consists of questions that humans can indeed answer in advance. Actually, it is also possible to have (q,a) pairs where humans only observed the answer, in which case the AI may learn more impressive capabilities and generalize more, but we assume that humans can find the answer "a" given "q".)
So after training for a while, our AI has learned to answer human-level questions correctly.
Assume our training set was perfect and contained no mistakes. Do you think the AI will rather have learned "predict what humans think" or "just figure out the answer" as its objective? (Ok actually it may not really have such an explicit objective, and rather learned behavior patterns, but what objective sounds more right to you?)
I would say "just figure out the answer" because it seems to me to be a simpler objective: Even if human reasoning is the simplest way to answer the questions, why would it learn the extra step of explicitly predicting what humans think, instead of "reason (like humans) to figure out the answer"? It may sound to us like the "predict what humans think" is simpler, but it's just because it's a short code, you still need to learn to actually reason like humans, which isn't simpler.
In practice, we won't have a perfect dataset, and the more human biases show through, the more likely it is that the AI will realize that and rather learn "predict what humans think". But I hope the example showed that the objective of "answer correctly" isn't in general much more complex / harder to achieve than "learn what humans want". The problem in ELK is that the direct-translator structure may be much more complex than the human-simulator structure.
Say we now ask the AI (that we just trained on our (q,a) dataset) to answer a question q that is significantly harder than what humans can answer, will it be able to answer the question correctly? Why/why not?
No, it won't. The AI won't be able to, from where should it have learned the capability of doing that? It doesn't matter whether the AI wants to achieve the objective "just look good to humans" or "reason to try to find the right answers".
So how do we get an AI that is superhuman at answering questions?
One possibility are amplification schemes, and there may be other options, but we are now interested in the approach where we train an AI not only on human-labeled data, e.g. make it predict sensor data, so it isn't bounded by what humanity can label.
We call the part that predicts sensor data "predictor" and the part that should answer questions "reporter", and there we are in the ELK setting!
(Ok in the ELK setting we kind of strictly train the predictor before the reporter, which isn’t strictly required, but we’re roughly in an ELK setting.)
**So what is the actual problem behind the human simulator?**
-------------------------------------------------------------
So now we are in the ELK setup, we've trained our predictor, and now train our reporter on the training dataset.
The hope is that if the reporter looks at the predictor to answer our questions, it may generalize to answer questions correctly even when we humans don't know the answer, because it can still easily look up the answer in the predictor.
The direct translator(One possibly pretty important obstacle to ELK that I see is that it may be harder to translate answers to questions where we humans don't know the answer. A special case of this is that the predictor may have the information about "what humans think" encoded inside and those are easier to translate than the whole ground-truth state. But that's not the topic of this post.)
The problem now is that when the predictor reasons in different concepts than humans and is really large, doing the translation may be really complicated, and so it may be simpler for the reporter to just learn to answer the questions itself just from the input the predictor gets.
One concrete possibility of how that might happen is that the reporter learns to reason like a human, so in the ELK report they just take this concrete scenario and call the reporter the "human simulator".
The human simulatorBut there are other possibilities how the reporter might learn to reason for itself to achieve perfect performance on the training set.
(In retrospect, we see that "human simulator" is kind of a bad name as it may lead to the confusion that it wants to simulate the human, which is wrong.)
To see that there are other reasoners/simulators than the human simulator, take a look at this diagram, which shows the space of questions that can be answered by an AI.

As you can see, the direct translator can answer many more questions than the human simulator, but there are many other possibilities that can be learned that aren't exactly the human simulator, but still can answer all the questions a human can answer.
The problem when we get such a reasoner is, as we already saw in the last section, that it won't be capable enough to answer harder questions just from answering questions humans can answer.
If we get a reporter that isn't exactly the human simulator, i.e. reasons a bit differently in some ways, it is possible that the answers may slightly generalize into some domains, but not much. And overall it seems like a good proxy that is easier to learn to answer fewer questions rather than more, so it really won't generalize much. (In practice we may indeed get a self-reasoning reporter that is pretty close to being a human simulator, but the critical point is just that it is self-reasoning.)
**Recap**
---------
So what is the main lesson of the post? (May try it for yourself, the below is just one possible short summary.)
The "human simulator" isn't about the reporter wanting to look good to humans, but about the reporter reasoning for itself in a similar way to how humans reason. The main problem here isn't that those self-reasoning reporters don't \*want\* to answer hard questions (that humans cannot answer) honestly, it's that they aren't \*capable\* enough to do it.
We aren’t trying to get a reporter with the right objective, but with a nice structure that gives us some hope that its question-answering capability generalizes.
|
72f95572-5e56-419a-992b-bd5e59f03763
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Funeral Ritual
Followup to: Solstice: What Memes May Come (Part II)
Content note: My own opinionated thoughts on rationalist funeral ritual, which have evolved a bit since I first started thinking about it in 2012. I expect that if you haven't thought about funeral ritual and find that you need to, this will make for a better starting point than figuring things out from scratch.
But, human mindspace is diverse enough that I wouldn't be surprised if this anti-resonated with some people. Make of it what you will.
[Edit: In this post, I was specifically exploring 'funerals where a majority of attendees are "culturally rationalist"', as opposed to general secular funerals. However, it turns out much of the advice is still relevant regardless]
Prior: a desire for deep cultural ritual
In October 2012, as I planned the Winter Solstice ceremony, I was thinking a lot about death. In the space of three weeks, three friends of mine had lost someone closed to them. And it occurred to me that the rationality community had very little in the way of funeral ritual.
My friends were all young, but it seemed likely at least someone in the community wouldn’t make it. And we were not ready.
When my Grandmother died, I went to a Catholic funeral. And as much as I don’t believe in any of the doctrine, there was something comforting in having several hundred people join together to say the Lord’s prayer in unison. The literal content of the words was basically meaningless – what mattered was the knowledge that my Grandmother was not alone, that hundreds of people were connected to her, and to each other.
Another friend of mine lost their mother, and had a similar experience. They said (paraphrased) “when you lose someone, you do not have the energy to try new things. You are lost. You don’t know what to do. The point of funeral ritual is not to be epistemologically sound, it’s to give you something to do, that you know how to do, that everyone knows you’re supposed to do, so that at least you ca
|
11565178-c1a5-4bc0-840b-4741bb9153ea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Technology and Progress
The audio version can be listened to here:
In this essay, I will give my views on technology and progress, focusing specifically on space travel, computers and transhumanism.
Futurism is not just predictive. It is also normative. We are concerned with what we should do, not just with what will happen. So, I’m going to talk about what I think will happen, what probably won’t happen, and what I think we should try to do.
(see the rest of the post in the link)
|
9e9ac6ac-b05e-4ae5-a373-7d952b6f9cc7
|
StampyAI/alignment-research-dataset/youtube
|
Youtube Transcripts
|
Secure Learning in Adversarial Environments (Bo Li, Foundations of Safe Learning Workshop)
our next speaker is Paulie her boy is
assistant professor in the computer
science department at the UI you see she
got a PhD from one debate in 2016
her research focus on machine learning
security privacy and the game Syria
she's particularly interesting exploring
vulnerabilities of machine learning
systems to various adversely attacks and
developing robust approaches for
learning system
please welcome Bob it's a great pleasure
to share my research about adverse or
machine learning here with you all so
first of all we know that machine deep
learning machine learning AI has been
ubiquitous in the world in every domain
including on homes driving healthcare
and smart city however in my research I
mainly focus on even such very powerful
machine learning and AI technologies has
brought us a lot of security and privacy
concerns for example in 2016 in rural
example that the Associated Press
Twitter account has spreading a rumor
saying that the White House has been
attacked which has Swiper about 136
billion dollars within seconds because
of the Ottomans trading BOTS just dumped
a large of amount of stocks within
seconds based on the fake news and
similarly in the hospital data sets like
in one of our previous work we find out
that even with just as the discharge
workflows segmentations
we can identify a lot of privacy medical
treatments about each patient so this
just saying that within this big data
area we can actually have a lot of
security and privacy concerns even
though they have brought us a lot of
convenience so that conclude as that
even currently with a lot of convenient
platforms for them for patterns and a
flaw as a previous speaker mentioned and
we are still living on adverse
environment that's why typically us
your machine and I still have a very
hard life to make our machine learning
platform robust and that's why here we
want to get together and understand also
add virtual model and why the machine
learning models are so vulnerable
currently and therefore we can find
certain solutions hopefully to improve
the robustness of our current machine
any platforms and this is just showing
that this is an important problem and
both government and different funding
agencies recognize that adverse
immersion learning as an important topic
in current scenarios
so after this back one I think the first
question people may ask is about what
exactly as advice for machine learning
and why such vulnerability happens so
there has been a lot of phenomenon
described by for example a professor
Mendes research from the image
perspective and fundamentally I will say
there is one like question or problem
that causes adverse or behaviors which
is that in traditional machine learning
techniques and people always assume or
increasingly assumed that training and
testing data from the similar even the
same distribution and that's why we can
always make the training and one and
make inference and the other even they
are an seen however in the considering
adverse or behaviors is that the
training or the testing data
distribution can be manipulated in
certain way so for example when the
training data distribution was
manipulated it is caused poisoning
attack or the back door attacks
mentioned by the first speaker and when
the testing data distribution is
manipulated it's called evasion attacks
where when the model is trained and then
you can still generate so-called adverse
to attacks or there was two examples to
fool the existing well-trained machine
any models so in this talk I will mainly
focus on evasion attacks things a lot of
times we can protect our model and to
now release it so evasion seems kind of
very severe problems and I will give
some examples of physical attacks
against the different types of
real-world sensors and then hopefully
give to potential defence principles
from both
games theoretical and properties of
machine any models so first let's look
at the physical examples of other words
or text so even like given a typical
learning algorithm for example if your
network a simple example is that we can
have a loss function and based on the
training data x and y pairs given the
example of supervised learning we can
minimize this loss function and use for
example SGD you very efficiently and
find some good very good local optimal
however given an adversary behavior what
they can do is that they can add a very
small perturbation on top of a given
example for example X and then the goal
is to maximize loss function here which
is a positive with the learners
objective so in this way we can see that
by adding a small magnitude of
perturbation is possible to generate the
search adversity havior x' against
existing trained machine learning models
and here's a probation if shown here is
actually easy to relatively easy to find
for example we can use local search
command toriel optimization and also use
certain convex relaxation to find the
optimal minimize the perturbation
magnitude and this is all the in digital
domain from the also from professor
memories examples and then the people
always ask a question so how such add
virtual examples add like realistic in
real world take an example of autumns
drivings
so how realistic that was so examples
could be for example in terms of
attacking existing atoms driving systems
which is one of the biggest use case in
for machine learning and technologies so
here we know that incomes driving we
have a lot of different sensors for car
for example perception and lidar and
radar all kinds of sensor fusion
together however in real world we can
see give an example of stop site this is
a real stop sign taken near ones
vancouver wines berkeley and we can see
that always some random pad
on the side so people may question like
what if those patterns actually
adversarial but actually they do not
cause any attention of humans so you not
recognized that would not replace the
sign in reality and there are several
challenges if you want to do that which
is you want to generate a kind of
so-called physical force or perturbation
on a stop sign so in rewards compared
with the digit award additional
challenges raise up for example the
first is different physical conditions
for example different angles and
distance and landing conditions will
make this law as we mentioned the low
magnitude of perturbation vulnerable so
for example if you have a like like the
dog and the cat example if you change
the image a little bit with different
angles and distances they may fail to
perform such attacks so how to solve
this question and make a generalize that
so called the robust physical so
perturbation that's one challenge in
real world and the other challenge is
that as we mentioned the perturbation in
gradua with hopefully that is of low
magnitude and in imperceptible but in
real world for example considering atoms
driving car driving by and show you hope
our camera can capture the perturbation
itself in real world so how can you make
the perturbation visible and
unnoticeable to human that's another
challenge and also other challenges
reserved for example in reward you
always have some fabrication error so
when you want to print out something
like red but it turns out the color
printer will give you something like
print pink or orange and also when we
generate the perturbation giving an
image in the digital domain we can
general prohibition everywhere including
the background but here if you want to
have a physical motivation it has to be
uncertain objects so this is another
different challenges so next I will just
quickly go through how we can leverage
and resolve these challenges with
different techniques and makes them
coherent together and eventually
generate a so-called physical
perturbation against the real world
systems
so first is the challenge about the
different physical conditions for
example recall that the first loss
function is what we used to generate and
with the perturbations in digital world
for example the whole objective function
we want is to minimize the doubter which
is the magnitude of perturbation plus
the loss function of that was to example
which is expressed Delta is generated
add virtual instance and then hopefully
that instance will go to the target
which means the y star here which is
different with the ground truth why and
considering different physical
conditions here what you can generate is
instead of the original loss function
like the digital loss function here we
can generate a large loss function by
sampling our different transformations
from real robot distributions for
example here is an example of different
angles of a stop sign in real world and
you can draw a sample of the
distribution and add to this big loss
function and in addition to this
distribution of different angles you can
change the colors and the physical
conditions of lightnings etcetera so
make this can loss function to consider
as much conditions as possible and
improve the generalization of this
attack so by doing this we hopefully can
improve the robustness of the generated
adversity or perturbation and second is
about the spatial constraints so as we
mentioned we hope with this perturbation
can be onto some objects itself and
still of some meaningful shape so that
it will be less realizable by human
which is called from hidden in the human
psyche so basically what you can get is
that you can add a mask to this
perturbation region so that the
generally the perturbation will be only
limited to certain spatial regions and
therefore this if the shape itself is a
meaningful like for example is some like
some gravities or some patterns then you
will not raise humans attention too much
and this way there are some tricks to
find those vulnerable regions so as we
make
there are some regions are correlated
atul robust features and some regions
are less robust eight features and the
trick here is that for example you can
use some sparse
regular risers for example our one to
generate those parts perturbations and
then find those vulnerable regions and
specifically to focus on those regions
and use for example our two or our t v--
loss to smooth out those regions and
generate more realistic perturbations
and the last challenge is the
fabrication error so basically there is
a more general loss here that you can
use the cps law mps loss to minimize the
distance between the principal vector p
prime here and the desired vector p hat
so that by minimizing - you will be able
to make sure the generated color or
generated the perturbation will be as
close as possible with what we can get
in real world so by solving this several
challenges together we will see what we
can get in a static experiment so
basically what you can see here maybe
someone has saw some of such stop signs
in reward is that each column shows a
type of predation and for example for
the first row you can see that the first
two rows are kind of subtle perturbation
as you can see this perturbation looks
like fading out but actually all the
stop signs in these lives was miss
recognized as a speed limit of 45 by a
real word camera and under different
conditions and each row here shows
different physical conditions and under
later story Collins
some of them show some random letters
and the later two occurrence so just the
white and black rectangles so this is
somewhat surprising that several like
fall the fall rectangles enough to for
the stop sign in real world and make it
targeted to be miss recognized as
certain adverse or targets so this is
some examples of static test
and there are some physical tests that
we can see what it happens if a real car
driving by towards such physical
perturbed stop signs so the left-hand
side is control experiment of a sort
stop sign the right hand side is a
control experiment and when you pay
attention to the captions you will see
the result of the classification so you
can see on the red hand left hand side
is consistently miss recognized as speed
limits at 45 for most of the time even
the car is driving with certain speed
and this is another example of different
types of perturbation however on the
right hand side as a control experiment
has always been recognized as a stop
sign so this just showed that in waster
car driving in certain speed under
different conditions in real world this
physical stop signs can always miss lead
the real motion any models and the
vision systems encrypted in a car and
actually another this is for attacking
classification models and just now one
audience asked about the object
detections and that's exactly one
question we get before so people ask
what if it is object detections
because there are some automatic
cropping mechanisms here with the
proposed with an automatic a proposal
generation approach if like whether this
kind of physical predation effective or
not so this is an example that's
attacking your law so basically you can
see the real the physical stop sign here
most of the time actually all the time
it cannot be recognized as a stop sign
but other objects can be recognized
without any problem so this shows that
under different physical conditions the
physical perturbation can affect Li
attack the objective hector's as well
unless you go to very very close to the
sign so they recognize the letters here
because we are purposed to not block the
letters so the ones recognize the
letters they can find it's a stop sign
however the argument is that it will be
too close and it will be too close for
you to stop a car with this speed
and other questions may raise about the
white box and black box attack because
in this one is a specific white box and
you get so gradient directly from the
model and you can optimize your
perturbation and actually we can
leverage the same perturbation use a
phenomenon called transfer abilities
that use the same probationer model a
you can test it on totally different
model and perform the so-called black
box attack and what you can see here is
that this is the same probation test on
the fastest again and in a very
different physical environment and you
can see even like this and this is a
less complicated environment the stop
sign is always miss recognized until you
get very close so this stop sign
increased a lot of interest intentions
and actually this physical stop sign was
displayed in the Museum of London
currently and people get quite excited
about why the real world attack examples
can be generated and attack the reward
perception systems of car
so after this physical attack examples
people ask us a question about okay if
the perception system is vulnerable with
no images are vulnerable there are some
overfitting information so the models
have over over capacities so they always
have some room to generate such
perturbation and attacks so people think
for example some autumn driving startups
companies think that if we have sense of
fusion for example if we equipped it
with lidar together with perception
sensors this may be more robust and here
I just want to quickly go through we
will not go to the details because we
also want to go to some examples of
defense and here just the one example
want to show that actually lidar system
itself in real world is not as robust
either
for example this is testing an Apollo
open source lidar system from Baidu and
in reality is a lot of additional
challenges in terms of attacking a real
word lidar system for example the rural
idyll system is not an end-to-end model
you cannot directly get a gradient as in
the classic
image model and also they have a lot of
different components connecting together
which will reduce the aggregated
adversity or perturbation effects
however even with with such real watch
constraints and challenges and we can
still be West you able to achieve
certain and what your goals so one goal
is that what if we can generate a
real-world objects and put this object
on the road so that even though it's
obviously large objects it still cannot
be recognized by the lidar system and
the other goal is that what if we put
her object onto the car or near the car
so that your car in front of homes
driving car will not be recognized by
that car behind you which is obviously
quite dangerous and I want to actually
again show some videos and this is chess
it and reward like highways and the
experiment here set up is that on the
top there is some object on most objects
showing there and the left-hand side is
a lighter detection system so basically
you can see if it's optimized add
virtual object put on the highway when
items driving car driving by Celina
actually fail to detect this object
itself and if we put a box with regular
with regular size and with regular shape
and is a similar size with adverse the
object we generated and you can see it
clearly as a control experiment clearly
detect this object so here on the top
you can see this box is always almost a
Miss recognized even though there is
clearly a box here which is like a 3d
printed object which is like relatively
this large as a box and if we put a box
with similar size in the middle we can
see that most of the time let's see most
of the time actually Texans are add
virtual objects with a green box so
another example so on the top you can
see that most of the time
the green box show up which means this
this real box here was detected but if
this like the object with a slightly
weird shape here this is really printed
and optimized in a way to attack the
lighter and this lighter system always
cannot find the green box here which
means it fails to detect the objects so
yeah so this two example just want to
show that in reward actually no matter
is vision or the light of perception
systems different physical adverse your
attacks can happen in real world even
though the real world scenarios can be
much more complicated compared with the
digital environment so the next based on
all these attacks and I don't think I
need to emphasize the severe
consequences of the digital was to
example since there are like hundreds of
work for its so far already so next I
want to introduce two principles in
terms of defense from both schemes
theoretical perspective and based on the
properties of machine any models so
first let's look at games or perspective
so so far we know adverse so machine
learning as we mentioned is quite
important and there are many many work
in terms of defense and detection has
been proposed however one significant or
important message is that those defense
or detections are mostly broken by
adaptive or more clever intelligent
sophisticated attackers again so this is
a bad news and this forms like certain
cat-and-mouse game like we have better
attack and then again we have better
defense and then it can be attacked
again so how can we end this game and
here I want to introduce another game
formulation which is dagger per game but
first let's look at a simple example
from the NLP example because in the NLP
domain the features are more improved
interpretable for example the words
compared with the pixels of the in
so I want to use this in for example to
show how can we build a good game model
to improve the robustness of our current
learning platforms so first this is a
very simple example of spam filter and
what you can see here is that if you
have a spam and you can train for
example bag-of-words
classifiers and by assigning some ways
to the words and then what you can get
is some scores and by combining
comparing with some threshold you can
find out this is a spam for example and
this is what we called a spam filter
version one very simple and then with
attacker comes in what attacker can do
is add for example add some good words
so by adding these good words even those
adverse all words are still here but
then you can lower the weight and
compare with the static swash hold you
will see it clearly beam is recognized
and this intuition can go on so the
spammer can put it back so the defender
can put it back into the model to return
it kind of like adversary training and
you can improve the efficiency
again but this can go on and on and form
as we said a repeated game but how we
can end this game so I would propose
actually proposed a new leverages a new
game stackable game to fall to model the
behavior of the attacker and defender in
learning perspective for example in the
stackable game typically there are two
players one is a leader once a follower
so basically the leader will commit some
strategy and the follower will follow
the strategy and make the final decision
so this is not a repeated game this
YouTube player and one step game and the
good thing for this is that if for
example the leader here is formulated as
a learner so the learner can commit some
strategy which kind of for example the
classifier here and you can see the key
point here is the Oracle which we have
an adverse model so as long as we can
somewhat model that was to behavior in
from us for example Oracle we can
consider this attacker behaviors
in choices and also strategy so that the
defender can always generate his robust
learning strategies based on the
attackers consideration and this show
such with this commit strategy we have
two goals one is to minimize attackers
attack available successful rate and the
other is even though we cannot defend
100% of the attack we can always
increase the cost of the attacker to
attack the system which is actually a
very important principle in security to
make that harder and harder and there
are several ways to model attacker
behaviors here for example we can model
it from the mathematical perspective or
we can use human subject to understand
the real-world attacker behavior so in
this work I will mainly focus on the
mathematical modeling here to show how
we can leverage the attackers behavior
to improve the robustness of the
learning process so we can see here from
the previous simple example there are
several three important components one
is for example if we want to model that
was the behavior that was a goal is to
minimize the cost function between the
acts the original instance and the X a
which is adverse to a generated instance
such that the generated instance will
always be recognized as B'nai by
comparing with certain threshold Delta
here so we can see either the cost
function is important because we need to
consider the future substitution all the
future selection is important because as
we mentioned we have robust and non
robust features and the dynamic
operational decision is important here
because if we have a static optional
threshold here the if the attacker knows
it we can always perform such an
adaptive attack so how to model this
dynamic optional decision with a certain
function to automatically learn age is
actually another important research so I
will mainly focus on this part and give
you some very interesting results to get
a sense of how
can we make our learning algorithms more
dynamic in real world so that you by
adding those dynamics that hacker will
not be able to evenly get the algorithm
architecture itself
they will not always be able to get the
whole framework and instead of attack it
efficiently for example here still with
original very simple like optimization
function for the attacker so instead of
a static threshold outta here we want to
use a function like Q to learn this
threshold Delta dynamically and one
drawback you can see here is that this
function can be the input of this
function can be of very high dimension
because it takes the X prime which is a
generated input and the predicted input
as the input as the input of this
function but the dimension of this input
can be of thousand by thousand as for
example image so here the nice thing is
or it's a virginal generic technique is
you can use a finance set of basis
function to reduce the dimension of this
learn function so that you can always
perform of like optimize as final set of
basis function and optimize the
coefficient of it for each of the basis
function so that based on this learning
dynamic learnings worth threshold you
can actually prove it will be
np-complete for attacker to attack the
system and eventually you can optimize
this whole optimization function as this
process for example here this is a
traditional learning algorithm that you
want to minimize a loss by optimizing
the weights given the training data for
example x and y pairs and here you add
another loss term which is you consider
the animals or behaviors for example for
all the y equals to 1 which is adverse
to instance you can generate her box
here that to model like model what's the
cost of the adverse race and what's a
stretch like budget of the adverse way
to in to consider what they can do in
the worst case
Arielle and add these two laws together
together with the Iowan laws to make the
trade-off between the future reduction
and the robustness and based on this
optimization you can see as long as we
can solve it efficiently you will be
able to have a robust learning systems
because you have considered both benign
accuracy and the adversative behavior
however this is not convexity and this
is hard to solve and but we can use the
mixed integer linear program to directly
formulate the original optimization into
this linear program and use existing
solver to solve it but we don't need to
go to detail about this but we can see
this is definitely solvable efficiently
and by solving this with existing
solvers we can see that the approach
actually compare with different state of
art adverse though behaviors including
other also training this can achieve
much better robustness and this is based
on the game of stagger Burgamy
which is essentially can formulated as a
min/max game and however people may ask
this may not be quite scalable in
certain sense because we have to use
these tricks like finite differences
basis functions to improve the
efficiency and can we directly leverage
the properties of current machine
learning models to improve the
robustness
so our introduced one work in five
minutes and I think this is actually
quite interesting and although it could
be a very simple solution but I will
show that it's quite big it could be
very effective because for example we
have discussed a lot about segmentation
as our classification and object
detections but those actually have some
information concentrated in the model
itself but for example for segmentation
itself it has some properties for this
algorithm itself for example for
segmentation like here if you have image
you can attack it to different targets
for example you can have a Hello Kitty
or like just appear colors to attack the
segmentation model but we know that for
segmentation
which is different with classification
is that you always have some spatial
consistency among different features or
like the pixels so if you if you general
into some heat map for the cross entropy
around the pixels of from from the
surrounding pixel prediction for one
pixel you will see that if it's a B'nai
you can clearly see the edges of a
street but if it's also like when you
have prohibition added into the images
you can see also boundaries a very
blurry is kind of just a mess up which
shows that when you add a perturbation
even though it's not perceptible to
human eyes systems actually it breaks
the spatial consistency intrinsically so
such property cannot be removed easily
by attacker so if there is a way to
identify such property you will be able
to improve the robustness of the models
at least you can detect such be adverse
of behaviors so how to detect it
actually very simple so you can as we
mentioned the spatial consistency was
broken by the adverse occupation right
so what you can do is for example
leverage in spatial consistency PI give
an image you can generate the two crops
with overlaps and then you can throw
this two overlapped crops into a
segmentation model and get the
segmentation from out from this model so
you can see if it is a B'nai the overlap
the region will have very consistent
results in terms of for example mr you
because if you are doing the
segmentation task and if it's adverse
rate because the spatial consistency is
broken so the behavior is a prediction
are very different from this figure and
based on this we even do not need to
learn any function we can directly
generate especially because you can see
this separation is very clear it's not
mixed up together so these types of
interesting properties are very hard to
avoid by attackers
so by some connotative results you can
see under different types of Italian
Houdini are different types of attacks
against the segmentation and you can see
the detection rate is almost 100% and do
one nice thing is that if you consider
the
adaptive attacker as I mentioned even
though the attacker knows you use this
detection method to detect my other
words or examples still the detection
rate is almost 100% the reason is that
when you generate those random crops is
it's very random so for the attacker
what they need to do is to make sure
every crop is consistent with our
original like the segmentation results
that's very hard because first they do
not to now know which patch was used by
the defender and second is if you
generate a lot of patches the probation
will be add up and it will be very high
so that is a trade-off for attacker so
they cannot do that so eventually it
shows that attacker fails so by doing
this we can see with this specific
learning properties we can leverage it
and improve the robustness and detect
adverse or behaviors so based on this
well I think I'm out of time but I will
say that actually those attacks and
defenses our work has been reported by
different media's and actually it shows
that different attacks in real world has
risked a lot of attentions and defense
is very important and the so far we have
for example leveraged game theory which
is very effective but it may not be so
scalable and we can leverage different
properties of learning models to improve
the robustness however we have to
identify different specific properties
for different models which is another
hard open question can we optimize early
identify what properties will be useful
to improve the robustness of machinery
models so that could be a very
interesting direction to go frauds yeah
thank you
[Applause]
[Music]
classification or
which are
yeah I will say attacks are always easy
any model is easy no matter how you
order any model digital world but
defense is hard by the sudden defense
you can elaborate for example
segmentation now we have solved it I
would think oh it's maybe much easier
than classification because
classification has you have to actually
leverage some semantic additional
information to improve the robustness
but segmentation only based on spatial
consistency we can achieve almost a 100%
detection that I would be very happy and
some I don't show the example here are
some model for example videos you can
leverage the temporal consistency and
audios you can use temporal consistent
we also achieve almost a 100% detection
rate for audio and videos so those like
you can imagine how some specific
properties would help to improve the
robustness but some models are very
generic for some of classification I
would say it's a little bit harder yeah
thank you
personal passions he set up critical
yeah for like AI system you have to have
an object right if you put want to
generate some object onto the like
spoofing on to the car or certain things
it will be possible but for like that we
always need an object so that the
reflection can be captured by the lidar
systems yeah
[Applause]
|
90503ceb-2669-420c-818c-c543ca44a039
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Deliberation as a method to find the "actual preferences" of humans
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Some [recent discussion](https://www.lesswrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption#Fdy8Dpy6tZYn3xME4) about what Paul Christiano means by "short-term preferences" got me thinking more generally about deliberation as a method of figuring out the human user's or users' "actual preferences". (I can't give a definition of "actual preferences" because we have such a [poor understanding](https://www.lesswrong.com/posts/orhEa4wuRJHPmHFsR/six-plausible-meta-ethical-alternatives) of meta-ethics that we don't even know what the term should mean or if they even exist.)
To set the framing of this post: We want good outcomes from AI. To get this, we probably want to figure out the human user's or users' "actual preferences" at some point. There are several options for this:
* **Directly solve meta-ethics**. We figure out whether there are normative facts about what we should value, and use this solution to clarify what "actual preferences" means and to find the human's or humans' "actual preferences".
* **Solve [meta-philosophy](https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy)**. This is like solving meta-ethics, but there is an extra level of meta: we figure out what philosophy is or what human brains are doing when they make philosophical progress, then use this understanding to solve meta-ethics. Then we proceed as in the "directly solve meta-ethics" approach.
* **Deliberate for a long time**. We actually get a human or group of humans to think for a long time under idealized conditions, or approximate the output of this process somehow. This might reduce to one of the above approaches (if the humans come to believe that solving meta-ethics/meta-philosophy is the best way to find their "actual preferences") or it might not.
The third option is the focus of this post. The first two are also very worthy of consideration—they just aren't the focus here. Also the list isn't meant to be comprehensive; I would be interested to hear any other approaches.
In terms of Paul's recent [AI alignment landscape tree](https://www.lesswrong.com/posts/zQgqAc9nQETjFjawJ/ai-alignment-landscape), I think this discussion fits under the "Learn from teacher" node, but I'm not sure.
Terminological note: In this post, I use "deliberation" and "reflection" interchangeably. I think this is standard, but I'm not sure. If anyone uses these terms differently, I would like to know how they distinguish between them.
Approaches to deliberation that have been suggested so far
==========================================================
In this section, I list some concrete-ish approaches to deliberation that have been considered so far. I say "concrete-ish" rather than "concrete" because each of these approaches seems underdetermined in many ways, e.g. for "humans sitting down", it's not clear if we split the humans up in some way, which humans we use, how much time we allow, what kind of "voting"/parliamentary system we use, and so on. Later on in this post I will talk about properties for deliberation, so the "concrete-ish" approaches here are concrete in two senses: (a) they have some of these properties filled in (e.g. "humans sitting down" says the computation happens primarily inside human brains); and (b) within a single property, they might specify a specific mechanism (e.g. saying "use counterfactual oracles somehow" is more concrete than saying "use an approach where the computation doesn't happen inside human brains").
* **Humans sitting down**. A human or group of humans sitting down and thinking for a long time (a.k.a. Long Reflection/Great Deliberation).
* **Uploads sitting down**. The above, but with whole brain emulations (uploads) instead. This would speed up the reflection in calendar time. There are probably other benefits and [drawbacks](https://www.lesswrong.com/posts/HW5Q9cW9sgk4yCffd/hacking-the-cev-for-fun-and-profit) as well.
* **Counterfactual oracles**. Certain uses of counterfactual oracles allow humans to speed up reflection. For example, if we ask a counterfactual oracle to predict what we will say in a week, then we can get the answer now instead of waiting a week to find out what we would have said. See [these](https://www.greaterwrong.com/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai/comment/Q5J8G5tBPujZdixvt) [two](https://www.greaterwrong.com/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai/comment/SZmMGEBM3Cvf6DFTL) comments for a more detailed proposal.
* **Imitation-based IDA** (iterated distillation and amplification). The human can break apart the question of "What are my actual preferences?" into sub-queries and use AI assistants to help answer the question. Alternatively, the human can ask more "concrete" questions like "How do I solve this math problem?" or specify more concrete tasks like "Help me schedule an appointment", where the output of deliberation is implicit in how the AI system behaves.
* **RL-based IDA**. This is like imitation-based IDA, but instead of distilling the overseer via imitation, we use reinforcement learning.
* **Debate**. This is probably a dumb idea, but we can imagine getting the two AIs in Debate to argue for what the human should think about their values. Instead of exploring the whole tree of arguments and counter-arguments, the human can just process a single path through the tree, which will speed up the reflection.
* **CEV** (coherent extrapolated volition), or more specifically what Eliezer Yudkowsky calls "initial dynamic" in the [CEV paper](https://intelligence.org/files/CEV.pdf). An AI tries to figure out what a group of humans would think about their values if they knew more, thought faster, etc.
* **Ambitious value learning**. Somehow use lots of data and compute to learn the human utility function.
Properties of deliberation
==========================
With the examples above in hand, I want to step back and abstract out some properties/axes/dimensions they have.
* **Human vs non-human computation**. Does the computation happen primarily inside human brains?
* **Human-like vs non-human-like cognition**. Does the cognition resemble human thought? If the computation happens outside human brains, we can try to mimic the low-level steps in the reasoning of the deliberation (i.e. simulating human thought), or we can just try to predict the outward result without going through the same internal mechanics (non-simulated). There are intermediate cases where we predict-without-simulating over short time periods (like one hour) but then simulate as we glue together these short-term reflections. One can also think of human-like vs non-human-like cognition as whether the *process* (rather than *output*; see below) of deliberation is explicit (human-like) vs implicit (non-human-like).
+ In the non-simulated case, there is the further question of whether any consequentialist reasoning takes place (it might be impossible to predict a long-term reflection without any consequentialist reasoning, so this might only apply to short-term reflections). Further discussion: Q5 in the CEV paper, [Vingean reflection](https://arbital.com/p/Vingean_reflection/), [this comment](https://arbital.com/p/reflective_degree_of_freedom/#subpage-2h4) by Eliezer and Paul's reply to it, [this post](https://www.greaterwrong.com/posts/5bd75cc58225bf06703750d7/notes-from-a-conversation-on-act-based-and-goal-directed-systems) and the resulting discussion.
+ When I was originally thinking about this, I conflated "Does the computation happen primarily inside human brains?" and "Does the cognition resemble human thought?" but these two can vary independently. For instance, whole brain emulation does the computation outside of human brains even though the cognition resembles human thought, and an implementation of HCH could hypothetically be done involving just humans but its cognition would not resemble human thought.
* **Implicit vs explicit output**. Is the output of the deliberation explicitly represented, or is it just implicit in how the system behaves? From the [IDA paper](https://arxiv.org/pdf/1810.08575.pdf): "The human must be involved in this process because there is no external objective to guide learning—the objective is implicit in the way that the human coordinates the copies of X. For example, we have no external measure of what constitutes a 'good' answer to a question, this notion is only implicit in how a human decides to combine the answers to subquestions (which usually involves both facts and value judgments)."
+ I also initially conflated implicit vs explicit *process* and implicit vs explicit *output*. Again, these can vary independently: RL-based IDA would have an explicit representation of the reward function but the deliberation would not resemble human thought (explicit output, implicit process), and we can imagine some humans who end up refusing to state what they value even after reflection, saying something like "I'll just do whatever I feel like doing in the moment" (implicit output, explicit process).
* **Human intermediate integration**. Some kinds of approaches (like counterfactual oracles and Debate) seem to speed up the deliberation by "offloading" parts of the work to AIs and having the humans integrate the intermediate results.
* **Human understandability of output**. The output of deliberation could be simple enough that a human could understand it and integrate it into their worldviews, or it could be so complicated that this is not possible. It seems like there is a choice as to whether to allow non-understandable outputs. This was called "understandability" in [this comment](https://www.greaterwrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption/comment/yttsoDd4EyLZeEwdw). Whereas human vs non-human computation is about whether the *process* of deliberation takes place in the human brain, and human-like vs non-human-like cognition is about whether the *process* is humanly understandable, human understandability is about whether the *output* eventually makes its way into the human brain. See the table below for a summary.
* **Speed**. In AI takeoff scenarios where a bunch of different AIs are competing with each other, the deliberation process must produce some answer quickly or produce successive answers as time goes on (in order to figure out which resources are worth acquiring). On the other hand, in takeoff scenarios where the first successful project achieves a decisive strategic advantage, the deliberation can take its time.
* **Number of rounds** (**satisficing vs maximizing**). The CEV paper talks about CEV as a way to "apply emergency first aid to human civilization, but not do humanity’s work on our behalf, or decide our futures for us" (p. 36). This seems to imply that in the "CEV story", humanity itself (or at least some subset of humans) will do even more reflection after CEV. To use the terminology from the CEV paper, the first round is to satisfice, and the second round is to maximize.[[1]](#fn-bnB39vZYnq3khmHj9-1) Paul also seems to envision the AI learning human values in two rounds: the first round to gain a minimal understanding for the purpose of strategy-stealing, and the second round to gain a more nuanced understanding to implement our "actual preferences".[[2]](#fn-bnB39vZYnq3khmHj9-2)
* **Individual vs collective reflection**. The CEV paper argues for extrapolating the collective volition of all currently-existing humans, and says things like "You can go from a collective dynamic to an individual dynamic, but not the other way around; it’s a one-way hatch" (p. 23). As far as I know, other people haven't really *argued* one way or the other (in some places I've seen people restricting discussion to a single human for sake of simplicity).
* **Peeking at the output**. In the CEV paper and Arbital page, Eliezer talks about giving a single human or group of humans the ability to peek at the output of the reflection and allow them to "veto" the output of the reflection. See "Moral hazard vs. debugging" on the [Arbital CEV page](https://arbital.com/p/cev/) and also discussions of Last Judge in the CEV paper.
I'm not sure that these dimensions cleanly separate or how important they are. There are also probably many other dimensions that I'm missing.
Since I had trouble distinguishing between some of the above properties, I made the following table:
| | Output | Process |
| --- | --- | --- |
| **Implicit vs explicit** | Implicit vs explicit output | Human-like vs non-human-like cognition |
| **Understandable vs not understandable** | Human understandability of output (human intermediate integration also implies understandability of intermediate results and thus also of the output) | Human-like vs non-human-like cognition (there might also be non-human-like approaches that are understandable) |
| **Inside vs outside human brain** | (Reduces to understandable vs not understandable) | Human vs non-human computation |
Comparison table
================
The following table summarizes my understanding of where each of the concrete-ish approaches stands on a subset of the above properties. I've restricted the comparison to a subset of the properties because many approaches leave certain questions unanswered and also because if I add too many columns the table will become difficult to read.
In addition to the approaches listed above, I've included HCH since I think it's an interesting theoretical case to look at.
| | Inside human brain? | Human-like cognition? | Implicit vs explicit output | Intermediate integration | Understandable output? |
| --- | --- | --- | --- | --- | --- |
| Human sitting down | yes | yes | explicit (hopefully) | yes | yes |
| Uploads sitting down | no | yes | explicit | maybe | yes |
| Counterfactual oracle | no | no | explicit | yes | yes |
| Imitation-based IDA | no | no | implicit/depends on question\* | no | no |
| RL-based IDA | no | no† | explicit† | no | no† |
| HCH | yes | no | implicit/depends on question\* | no | n.a. |
| Debate | no | no | explicit | yes | yes |
| CEV | no | ?‡ | explicit | no | yes |
| Ambitious value learning | no | no | explicit | no | maybe |
\* We could imagine asking a question like "What are my actual preferences?" to get an explicit answer, or just ask AI assistants to do something (in which case the output of deliberation is not explicit).
† Paul [says](https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b) "Rather than learning a reward function from human data, we also train it by amplification (acting on the same representations used by the generative model). Again, we can distill the reward function into a neural network that acts on sequences of observations, but now instead of learning to predict human judgments it’s predicting a very large implicit deliberation." The "implicit" in this quote seems to refer to the process (rather than output) of deliberation. See also the paragraph starting with "To summarize my own understanding" in [this comment](https://www.greaterwrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption/comment/37c6aba73PnWx6uJT) (which I think is talking about RL-based IDA), which suggests that maybe we should distinguish between "understandable in theory if we had the time" vs "understandable within the time constraints we have" (in the table I went with the latter). There is also the question of whether a reward function is "explicit enough" as a representation of values.
‡ Q5 (p. 32) in the [CEV paper](https://intelligence.org/files/CEV.pdf) clarifies that the computation to find CEV wouldn't be sentient, but I'm not sure if the paper says whether the cognition will resemble human thought.
Takeaways
=========
* We can imagine a graph where the horizontal axis is "quality of deliberation" and the vertical axis is "quality of outcome (overall value of the future)". If your intuition says that the overall value of the future is sensitive to the quality of deliberation, it seems good to pay attention to how different "[success stories](https://www.greaterwrong.com/posts/bnY3L48TtDrKTzGRb/ai-safety-success-stories)" incorporate deliberation, and to understand the quality of deliberation for each approach. It might turn out that there is a certain threshold above which outcomes are "good enough" and that all the concrete approaches pass this threshold (the threshold could exist on either axis—we might stop caring about how good the outcomes are above a certain point, or all approaches to deliberation above a certain point produce basically the same outcome); in that case, understanding deliberation might not be so interesting. However, if there are no such thresholds (so that better deliberation continually leads to better outcomes), or if some of the approaches do not pass the threshold, then it seems worth being picky about how deliberation is implemented (potentially rejecting certain success stories for lack of satisfactory deliberation).
* Thinking about deliberation is tricky because it requires mentally keeping track of the strategic background/assumptions for each "success story", e.g. talking about speed of deliberation only makes sense in a slow takeoff scenario, and peeking at the output only makes sense under types of deliberation where the humans aren't doing the work. See my [related comment](https://www.greaterwrong.com/posts/bnY3L48TtDrKTzGRb/ai-safety-success-stories/comment/eJJhfqzb3roRFdhJm) about a similar issue with success stories. It's also tricky because there turns out to be a bunch of subtle distinctions that I didn't realize existed.
* One of my original motivations for thinking about deliberation was to try to understand what kind of deliberation Paul has in mind for IDA. Having gone through the above analysis, I feel like I understand each approach (e.g. RL-based IDA, counterfactual oracles) better but I'm not sure I understand Paul's overall vision any better. I think my main confusion is that Paul talks about many different ways deliberation could work (e.g. RL-based IDA and [human-in-the-counterfactual-loop](https://ai-alignment.com/counterfactual-human-in-the-loop-a7822e36f399) seem pretty different), and it's not clear what approach he thinks is most plausible.
Acknowledgments
===============
Thanks to Wei Dai for suggesting the point about solving meta-ethics. (However, I may have misrepresented his point, and this acknowledgment should not be seen as an endorsement by him.)
---
1. From the CEV paper: "Do we want our coherent extrapolated volition to satisfice, or maximize? My guess is that we want our coherent extrapolated volition to satisfice […]. If so, rather than trying to guess the *optimal* decision of a *specific individual*, the CEV would pick a solution that *satisficed* the *spread of possibilities* for the extrapolated *statistical aggregate* of humankind." (p. 36)
And: "This is another reason not to stand in awe of the judgments of a CEV—a solution that satisfices an extrapolated spread of possibilities for the statistical aggregate of humankind may not correspond to the *best* decision of any *individual*, or even the best vote of any real, actual adult humankind." (p. 37) [↩︎](#fnref-bnB39vZYnq3khmHj9-1)
2. Paul [says](https://www.lesswrong.com/s/EmDuGeRw749sD3GKd/p/t3AJW5jP3sk36aGoC) "So an excellent agent with a minimal understanding of human values seems OK. Such an agent could avoid getting left behind by its competitors, and remain under human control. Eventually, once it got enough information to understand human values (say, by interacting with humans), it could help us implement our values." [↩︎](#fnref-bnB39vZYnq3khmHj9-2)
|
273df191-503f-425e-a9dc-e9c2169f61ef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Ugh fields
Tl;Dr version: Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have, we call it an "Ugh Field"1. The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.
A problem with the human mind — your human mind — is that it's a horrific kludge that will fail when you most need it not to. The Ugh Field failure mode is one of those really annoying failures. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The "Unhappy Thing" — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you.
A key part of the Ugh Field phenomenon is that, to start with, there is no flinch, only negative real consequences resulting from real physical actions in the problem area. Then, gradually, you begin to feel the emotional hit when you are planning to take physical actions in the problem area. Then eventually, the emotional hit comes when you even begin to think about the problem. The reason for this may be that your brain operates a temporal difference learning (TDL) algorithm. Your brain propagates the psychological pain "back to the earliest reliable stimulus for the punishment". If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceeded by thinking about it, your brain will propagate the psychological pain right back to the moment you first begin to entertain a thought about the problem, and hence cut your conscious optimizing ability right out of the loop. Related to this is engaging in a displacement activity: this is some activity that usually involves comfort, done instead of confronting the problem. Perhaps (though this is speculativ
|
3c6866de-a1a4-4514-ad3a-cbfebcb5f7b0
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Institutional Metaphors for Designing Large-Scale Distributed AI versus AI Techniques for Running Institutions
1 Introduction
---------------
These days, analogies carry easily between simulations of social settings and architectures for minds. We for instance casually speak of electronic and computational institutions as alternatives for traditional institutional arrangements, in recognition of the increasingly important role that the digital world, and automated decision making, plays within our social structures. Equally casually, normative multi-agent systems, based on institutionalist vocabulary, may be introduced as a design metaphor for large-scale distributed Artificial Intelligence (AI).
Drawing from existing work in agent-based modeling in the area of law, we examine in this chapter a number of relevant cross-cutting concerns, conceptualizations, modeling problems and design challenges that apply to both large-scale distributed Artificial Intelligence and agent-based modeling and simulation of social-institutional structures.
###
1.1 Modeling the Mind
As a discipline, Artificial Intelligence (AI) started out with an ambition to reproduce the human mind as a monolithic computer program. The problem of reproducing intelligence in technology was taken on as an ambitious functional decomposition exercise, with intelligence playing the role of the high-level function to be reproduced in a computer program through reconstruction from smaller functions realized by simple well-understood input-output modules.
The nature of the constituent primitive modules was already clear to early computer scientists: logical inference and knowledge structures should play a key role, because introspection, common sense psychology and current concepts of rationality suggested so (Newell1976). The General Problem Solver of Newell1959a may be considered the archetype of this approach, and is often used in this role in lectures about the history of AI. In architectures of this type, the problem of scaling up from the simplest toy examples of reasoning to plausible intelligent function, even in restricted cognitive niches, becomes one of managing a complex knowledge representation in a logic-based representation language, with the properties of the logical language enforcing some of the required modularity, or non-interaction, between the represented knowledge structures.
As soon as this required modularity breaks down, that is, when knowledge structures interact in unforeseen and undesirable ways, the construction of the higher level function – intelligence – may catastrophically fail, in an obvious way if we are lucky, for instance if the constraints enforced by the representation language are violated, or insidiously, through unintelligent behaviour, in many other cases. The AI system following this architectural paradigm, from an engineering point of view, in a sense lacks sufficient adaptive capacity, or resilience, to reliably deal with the addition of new knowledge structures.
###
1.2 The Scale Problem
The scale problem, in an architecture of this type, continually presents itself as a knowledge representation problem: there is always too little background knowledge available to correctly scope knowledge structures, and eventually the system will make inferences that are not true, and, more importantly, obviously not intelligent. Hence we historically find attempts in AI:
* To uncover and study types of background assumptions – like for instance the infamous frame assumption (Morgenstern1996) – that are usually overlooked and threaten modularity;
* To codify massive amounts of common sense background knowledge for general AI use (Lenat1995), in the hope of eventually reaching critical mass;
* To find at least some knowledge structures that have universal appeal, and therefore exhibit the required modularity from an engineering point of view, and codify these as definitions into so-called top ontologies (Sowa1995) for reuse among the different modules of a distributed system, and
* To try to scope defeasibility in logical reasoning (Pollock1975) and restore the required modularity by designing new logics and representation languages that deal with defeasibility in a principled manner, containing undesirable interactions between knowledge structures.
The research tradition that deals with these problems is historically called the *symbolic* AI tradition. Its main impact is perhaps in steering away the mainstream philosophy of the field from mathematical logic and a disdain of logical inconsistency and irrationality, and bringing it towards more psychologically valid views of reasoning and rationality.
Another, concurrent response to the scale problem has been to decompose the AI problem into two more manageable types of problems, licensing AI researchers to limit their attention to either:
1. studying well-defined specialized intelligent behaviours (e.g. playing chess, or recognizing human faces, or finding good solutions to a job-shop scheduling problem) in specific contexts of use, and
2. proposing overarching architectural concepts for interfacing specialized intelligent behaviour components. These are conceived of as agents, each competent in a limited number of intelligent behaviours, in a kind of virtual organization creating and maintaining the appearance of being *one* (more) generally intelligent agent to the outside world.
Solving the first type of problem has direct, viable commercial applications that justify financing research, but is often belittled as not-really-AI due to its obvious lack of adaptive potential. Solving the second problem has traditionally received little attention, but in recent years is sometimes indirectly considered in the context of addressing the responsibility gap caused by increasing reliance on networks of autonomous systems (e.g. autonomous vehicles, or autonomous weapons).
###
1.3 Engineering a Mind as an Ecology of Minds
For the purposes of this chapter, we are mainly concerned with the second problem: architectural concepts. Archetypical for this response are the agents in the visionary *society of mind* of Minsky1988 (“The power of intelligence stems from our vast diversity, not from any single, perfect principle”, p. 308), but also the more concrete *intelligent creatures* of Brooks1991 in *Intelligence without representation*. According to Brooks, the fundamental decomposition of intelligence should not be into modules which must interface with each other via a shared knowledge representation, but into independent *creatures*, that interface directly to an environment through perception and action, and exhibit a certain ecological validity within that environment, whatever that environment may be. The key takeaway here is to consider the relationship between the creature and the environment, niche, or ecology it lives in.
Clearly, specialized intelligent behaviours in a specific niche can often be successfully isolated. Some problems, e.g. playing chess, may be successfully attacked with specialized, explicit knowledge representation and an appropriate search algorithm. Others, e.g. face recognition, are less amenable to solution by introspection, and may be achieved not with knowledge representation, but with machine learning algorithms on large databases of correctly labeled examples that exemplify the intelligent behaviour to be acquired. We are indeed increasingly becoming accustomed to the ability of computers to perform well in specific intelligent behaviours in specific contexts, often even beyond the levels of performance attainable to human beings. At the same time, we are aware that techniques do not generalize beyond a specific niche, and do not generally label them as intelligent, perhaps because we intuitively understand they lack the adaptive capacity that characterizes true intelligence.
The second problem of AI identified above, i.e. of finding architectural concepts for creating and maintaining the appearance of being *one* agent, has given us multi-agent systems, composed of multiple interacting agents in an environment, each with independent perception and action functions, and no global knowledge of the social system of which they are part. A key property of multi-agent system architecture is that there is no central controller agent with dictatorial powers over the others, as this would simply be the functional decomposition approach that does not work. Instead more complex forms of organization and control are investigated, always with the ulterior goal of improving adaptive capacity of systems.
Although multi-agent system research has yielded many valuable theoretical insights into the functioning of organizations, no examples spring to mind of systems of practical value that successfully combine interesting sets of intelligent behaviours using this paradigm. It is however interesting to consider one well known system that does combine different intelligent behaviours and has demonstrated some success in finding the right *intelligent creature* for the right ecological niche: IBM’s Watson (Ferrucci2010). Watson has, amongst other applications, shown great competence at winning the game show Jeopardy, and is famous mainly for that achievement. Clearly it does, from our perspective, only one thing very well: answering questions (QA). But the interesting thing is that it answers different kinds of questions using a number of completely different QA approaches with overlapping competence. Architecturally, it behaves like a coalition of intelligent QA creatures that *compete* against each other to produce the best answer to a question, and in the process of doing so it acquires feedback on its performance and becomes better at selecting the best competitor based on features of the question and the proposed answer. This competitive setting in itself is a convincing example of a winning departure from the traditional functional decomposition approach, and a move towards more interesting *organizational* metaphors.
Multi-agent system technologies and concepts have thus far enjoyed most success indirectly as fuel for the *agent-based modeling* paradigm with interesting applications in understanding, simulating, and predicting the behaviour of real-world socio-legal structures on an aggregate level. However, agent-based modeling deals with a different type of problem, somehow dual to the ecological perspective illustrated above: settling which elements are required to specify one agent’s behaviour. Whereas specifications based on simple reflex architectures or mathematical functions are plausibly sufficient for studying certain phenomena of scale that may be understood even using simplistic models of economic rationality, the human vocabulary concerning social actors often refers to intentional and normative categories, focusing primarily on qualitative rather than quantitative aspects of behaviour. Since the beginning of the 90s, research efforts have been put in place on *cognitive agents* and *normative multi-agent systems*, aiming to define agreed computational infrastructures building upon inferential interpretative templates such as the *theory of mind* and various theories of normative positions from law, institutional economics, and linguistics. Theory of mind is exemplified by the *belief-desire-intention* or BDI agent paradigm of i.a. Bratman1987, and Rao1995a. The normative positions input usually draws from literature on norms (Alchourron1971), normative relationships (Hohfeld1913a), and speech acts (Austin1975). Typically, the main objective of these contributions is not to provide empirically plausible models of human reasoning, but to maintain a high-level but exact specification of the agent from an external perspective using terminology that remains comprehensible for humans.
###
1.4 Purpose and Plan of the Chapter
This short introduction gave an informal overview over general concerns shared by AI and computer science on the one hand, and cognitive, social, and legal sciences on the other. At closer inspection, however, cross-pollinations between these domains are driven by different motivations. Limiting our view to a technical feasibility perspective, we are facing two different complex adaptive system design problems, to be solved with the same knowledge resources:
* the problem of modeling, simulating, and reasoning about complex socio-institu-tional arrangements like law, markets, business, etc.
* the problem of designing complex information system arrangements based on design metaphors drawing from vocabulary on minds and institutions.
Clearly, application domains like electronic institutions are particularly fascinating for researchers from both traditions, because they make us face both design problems at once.
Our present aim is to introduce the main challenges at stake on both sides, in a way to inspire the reader to further reflection. For this purpose, we prefer a high-level, non-technical presentation over an exhaustive literature review. In the concluding section, we identify a number of potential grounds for novel advances in the application of institutional metaphors to distributed AI, and the application of AI techniques in modeling, analyzing, and evaluating social-institutional arrangements.
2 Agency as Unity
------------------
###
2.1 External and Internal Views
In abstract terms, all problems reviewed in the introduction are expressions of a general tension between unity and multiplicity. Multiplicity attempts to recover where unity fails, or to achieve what cannot begin to be achieved by individuals alone; on the other hand, multiplicity is made up of unities, which, seen as an aggregate, might qualify as a unity again. Recursively, unities might contain multiplicities in themselves. In this reconstruction, two points of view co-exist: external (addressing a unity acting within an environment) and internal (addressing a system with several components). An organization can be seen externally as one single socio-economic actor, consuming and producing certain resources (products, services, etc.) for certain goals; internally, it can be seen for instance as an arrangement of roles embodied by individual employees, but also by a network of business units, daughter organizations and coordinated partners. In the same way, a decision-support system or the guidance system of an autonomous vehicle can be seen as single pieces of software, producing results relevant to a given task, given certain computational and informational resources; or as wholes of interconnected modules providing functions necessary for the system to run. Not unexpectedly, internal and external views can be found in standard modeling practices, cf. UML; or in software engineering, with notions as orchestration and choreography.
But why is something seen as a unity (externally) or as a multiplicity (internally)? More precisely, as we are dealing here with entities meant to act within the world, what does it mean to appear to be one agent to the outside world?
###
2.2 What Agents Look Like, Externally
When interpreting a scene, observers frame it depending on the most pertinent cut determined by their point of view. Taking a famous example by Hart and Honoré (1985), a farmer may see a drought as producing a great famine in his country, but an international authority may instead put responsibility on the government of that country, because it has not prevented it by building up adequate reserves. Plausible criteria used to settle upon the stance for the observer to apply are the informativity (the capacity of drawing relevant conclusions with it), and the cognitive effort required (cf. relevance theory (Sperber1986), simplicity theory (Dessalles2013)). Amongst the possible interpretative attitudes available to the observer, the intentional stance (Dennett1987) is more prone to prediction errors, but also the one producing highly informative results at reasonable cost. For instance, it would be difficult to interpret a child’s action towards approaching some sweets by utilizing only information about his physical state or by evolutionary considerations. Similar considerations apply also on the interpretation of the behaviour of groups, organizations, countries, or artificial intelligent systems as intentional agents.
###
2.3 Intentional Ascriptions and Articulations as Reasons
Taking an intentional stance, observers attribute to an agent beliefs, desires, and intents; and consider him as a *rational* entity, i.e. one which “will act to further its goals in the light of its beliefs” (Dennett1987), and they can construct, with an adequate approximation, what he will do next. Seeing this process as enabling the ascription of *reasons* to explain the conduct of an agent, we reveal the constituents of *mentalization* (Fonagy1997), or more precisely, of *rationalization* of behaviour. A certain entity (a person, a country, a natural phenomenon) is seen as an agent because observers are *able* to ascribe certain beliefs, desires, and intents to it obtaining relatively correct predictions. The same applies when one agent observes introspectively his own behaviour, articulating his motivations as reasons for action.
Both *having* articulated beliefs, desires, and intents, and being *able* to ascribe beliefs, desires, and intents to an agent clearly assist in communication between social agents. Mercier2011 make this point forcefully: evolutionary speaking, the main function of reasoning using knowledge structures *must be* argumentation.
However, a different outcome might be expected depending whether we take the agent or the observer point of view. Traditional experiments on *confirmation bias* in human reasoning consistently show an interesting imbalance between producing and evaluating arguments: when arguing *for* a claim, humans seem to suffer strongly from confirmation bias (i.e. of selectively collecting or remembering evidence), but when arguing *against* the same claim they are capable of refuting weak, biased arguments based on selective use of evidence. Mercier and Sperber point out connections between the outcomes of these experiments and the rationality of the process of dialectical argumentation in a court setting, of which one could say that overall it aims (and generally succeeds) at producing truth even if the arguing participants are just trying to win an argument. This setting is ruled by the *burden of proof*, the duty to produce evidence for claims (Walton1988). Burdens of proof are systematically allocated to agents based on whether winning the claim improves one’s position, and, when one considers legal norms, burdens of proof can generally be statically allocated to propositions in the condition of a rule. When considered from the perspective of burden of proof, the charge of irrationality in producing arguments loses its force. Instead we wonder how rules become statically connected to communication settings, and which part of a rule exists to serve which interest.
###
2.4 Scripting Behaviour and the Resulting Ecology
But how it is possible for an observer to ascribe a certain behaviour to the agent, when in principle the number of courses of action the agent might be committed to is infinite? In the traditional approach to decision theory, a rational decision-maker decides a solution to a problem by selecting the best of the candidate solutions. A similar optimization principle is at the base of the *homo oeconomicus* axioms used in classic economic theory: agents are self-interested, maximizing their utility functions. The initial infinite potential ascriptions are then reduced to a small list of (economically) rational ones.
There are many ways in which one can frame criticisms against those assumptions, and explain how we should deal with the boundedness of human rationality: decision problems in the real world are always potentially too complex, making it prohibitive and practically impossible to perform maximization over all possible alternatives. Heuristics, rules and procedures emerge as a natural response of the agent to unpredictable errors in selecting the best solution. Heiner1983 identifies the root of this general phenomenon in the gap between an agent’s competence at solving a problem and the difficulty of a decision problem in an environment, called the *C-D gap*.
Heiner convincingly argues that for big C-D gaps an agent will generally over time perform more effectively and efficiently by following simple rules rather than attempting to maximize. A good chess player will for instance tend towards playing specific routine openings to capitalize on his experience in the types of games that develop from those openings, and get more creative in the middle and late game, even though standard interpretations of rationality would suggest that the best move becomes easier to guess late in the game as complexity decreases. In social settings this tends to result in institutionalization of social interactions, as the actors involved all perform better if the interactions are predictable to the extent they can be. There are for instance a number of valid ways to conclude a sales transaction, but a supermarket will accommodate only one of them to structure interactions and reduce the cognitive load imposed on participants. The result is that agent engages in a kind of *ecological niche construction* (Bardone2011), associating relatively simple sets of rules to social roles he might play in interactions with an environment, dependent on what model he chooses for that environment.
###
2.5 Social Intelligence as Compressing Behaviour
Simplifying, an agent and an environment could be modeled as programs that exchange symbols, with the environment sending reward signals to the agent in reaction to its output signals.
Hutter2006 defines intelligent behaviour in terms of an agent’s expected reward in an environment (and universal intelligence in terms of all conceivable environments). The agent has the task to predict which outputs will maximize the future reward. This general formulation covers for instance game playing (where one’s competence is determined by the ability to predict the opponent’s moves) and question-answering (predict the answer that will be rewarded). To be competent at prediction, the agent essentially has to guess the program that its environment (or opponent) runs. This is the same problem as compression of data: to discover a small program (the prediction mechanism) that reproduces the body of data (the symbols from the environment). This explains the establishment of the Hutter prize: a reward for setting a new standard for compressing a large body of data: being able to compress well is closely related to acting intelligently. A similar intuition is formalized by *algorithmic information theory*, synthesized by the expression “understanding is compression” (Chaitin2005).
As a concept of intelligence, *intelligence as compression* is a powerful abstraction, but also one that has clear limitations from an ecological point of view (Dowe2011). Firstly, it does not account for the origin of the reward mechanism. In lossy compression schemes (in audio and video compression) we for instance discard part of the data from the environment as noise rather than signal to be reproduced, because it does not matter (i.e. no reward is given) to the human viewer. Without such an external reference, we however have no account of how our intelligent agent makes the distinction between useless and useful data, and always compressing everything is not an ecologically valid approach to dealing with the cognitive limitations of an intelligent agent in a complex environment. In other words, a theory of intelligence should be dealing with Heiner’s C-D gap.
The second limitation deals with social environments, and is in this context obvious. The *intelligence as compression* concept accounts for learning by induction from examples, but not for social learning through interaction with other agents. It cannot account for shared vocabulary between agents (i.e. ontology), not with argumentation, and not with the exchange of instructions and rules between agents. Any account of these omissions would need to explain how agents keep their programs sufficiently similar to exchange pieces of them with other agents.
Nevertheless, the core observation of the *intelligence as compression* approach remains of interest: Confronted with an environment, or with another agent, intelligence involves, among other things, the ability to select or create of a program that can be ascribed to that environment or agent, or to select or create a correlative program to drive rewarding interactions with that agent or environment.
###
2.6 Specifying Intentional Agents
The traditional AI perspective internalizes the intentional stance and considers agents as intelligent systems, entities performing certain actions in order to achieve certain goals, depending on their knowledge. Consider for instance the uncompromising external perspective towards agents taken by Newell in his seminal paper on the *knowledge level* Newell1982. Knowledge and goals are *ascribed* to an agent to explain its behaviour, and are separable from the structures and mechanisms that create that behaviour. In a sense, Newell proposes that the key problem of Artificial Intelligence is to create *models* of observed agent behaviour within the bounds of some design constraints, a model of deliberation loosely referred to as the *principle of rationality*. The question of how to reproduce mechanisms that create the behaviour is secondary, and in practice often easier if the design problem is well-defined, and the goals of the agent are easy to identify (as long as one is focusing on things like chess, or recognizing faces for that matter).
Considering goals as (a kind of) desires, this approach can be aligned to the traditional philosophical account of practical rationality, based on a *belief-desire* architecture (BD). Bratman1987 convincingly argued against such two-parameter characterizations. In principle, a BDI framework allows us to consider that agents may have conflicting, inconsistent desires (e.g. to eat the chocolate and to follow a weight-loss plan), but those that are eventually selected for the actual conduct have to be consistent; it is this selection, reified as intention, that captures the deliberative aspect of agency.
However, for the reasons exposed above, in contrast to the traditional view of *deliberative* agent—according to which agents take rational decisions by weighing reasons—one could rather start from an *interpretative* basis, considering that agents might reproduce already established courses of action (or *scripts*). For their *ex-post* nature, scripts reify deliberations already occurred. Intentions, as commitments towards courses of actions, are then reduced again to low-order volitional states selected by higher-order desires in accordance to the abilities and knowledge ascribed to the agent.
Evidently, what is obtained through an interpretative standpoint can be remapped in generative terms, i.e., entering the agent’s mind, the same mental states can be used as the drivers for his conduct. It is to *satisfy* his intents that the agent behaves in a certain way. It is unimportant to acknowledge whether such mental states actually exist or are epiphenomena of other unconscious activities of the brain. The principle of *responsibility* in our legal systems is based on the assumption that such mental states are significant to reason about human behaviour (cf. (Sileno2017c)).
###
2.7 Agentive Positions
In previous works ((Sileno2015a), extended in (Sileno2016, Ch. 7)) we recognized four primitive categories required to specify the behaviour of an agent: *commitments*, *expectations*, *affordances*, *susceptibilities* (CAES). Informally, they roughly correspond to what the agent wants, what he believes, what (he perceives) he is able to do, and what he is disposed to react to. Any instance of one of these classes is a *position* that the agent takes with respect to the environment.
Commitments
, as a category, include all motivational states. There may be commitments which are *acceptable* for the agent (*desires*), commitments which are *preferred* by the agent, and finally commitments which are *selected* by the agent (*intents*), eventually triggering an intention leading a certain course of action. The introduction of prioritization at the level of preferences serves to solve possible inconsistencies, allowing a conflict-free selection of the desires to which the agent may commit as intents. From a generative perspective, commitment can be interpreted as agentive dispositions having enough stability to drive action. In this sense, all living beings, even the most primitive, are functionally provided with the commitment to survive. On the other hand, we have also to take into account that not all reasons for action necessarily refer to concrete objectives: e.g. desires promoting *values* strongly influence the behaviour of agents, but at a deeper level in the rationalization.
Expectations
reflect the *situatedness* of the subject in the world. What the agent expects from the world is what he believes the world is, *actually* and *potentially*. The actual category mirrors the traditional definition of *beliefs*. The potential category identifies what, according to the agent, the world may bring about under certain conditions, and corresponds to the usual meaning associated with expectation. For the intelligence as compression hypothesis, expectations cover the role of program that the environment runs, used by the agent to maximize the available rewards for the given commitments.
Affordances
(or abilities) can be seen as an opportunities of action—possibilities of the agent to adopt certain behaviours, in certain conditions, to achieve certain results.
Affordances interact with commitments to define which behaviour the agent will eventually select.
Susceptibilities
are attitudes specifying the *reactive* aspect of agency; an agent is susceptible to a certain event if he exhibits some reaction to its occurrence, at the epistemic or motivational level.
###
2.8 Reactive Architectures
What is the role of these components in reasoning? Theoretically, the simplest reflex agent is specified by a single *reactive rule*: at the occurrence of a certain condition, he
performs a certain action. Diagrammatically, his behavior builds upon
three components: a rule (R), an ability (A) associated to the action
required by R and a susceptibility (S) associated to the triggering
condition activating the rule.

Suppose the rule R to be a *reactive rule* of the type “*if
perception then do action*”; using the previous elements, the
susceptibility S would enable the transmission of perception, the
ability A would enable the performance of action. If R, A, S hold, if
perception is captured, the action is performed. More in general, R may
also be instantiated by a non-conflicting rule-base; in this case A and
S could stand as sets of abilities/susceptibilities associated to the
rules in R (the diagram makes abstraction of the individual
associations). R may be also a mathematical formula (a
*transformational rule*); in this case S denotes the ability to collect
the required input parameters, and A the ability to compute the
expression specified by R.
Increasing the complexity of the world to be taken into account to
activate R—as for instance memory aspects, that cannot be perceived in
the environment in real-time—we need to introduce a model of the world
(M), i.e. a simplified representation adequately interpreting the
sensory inputs.

This architecture is however opaque for providing reasons to action,
because what links M and R in driving action is a mere syntaxic
relation. In contrast, if we posit the rule in terms of a commitment (C)
to a condition to be satisfied, the minimal model triggering the action
corresponds to the negation of the belief (E) that the satisfying
condition does hold. This new architecture would provide intuitive
reasons to explain behaviour: the agent acts because there is a
*conflict* between what he wants and what (he thinks) it holds.
However, three dependencies still need to be added for a plausible agent
architecture:
* *enactive perception* (from the set of active commitments,
amongst which C, to S): the perceptual experience has to be entrenched
with the course of action to which the agent is committed, mostly to
be ready to anticipate potential (usually negative) outcomes by
adequate monitoring;
* *affordance activation* (from the set of active expectations,
amongst which E, to A): because the epistemic conflict is specified at
higher-level, an additional informational channel is needed to
contextualize the capabilities of the agent to the current situation;
* *affordance inhibition* (from a commitment C’, stronger than C,
to A): a certain action may produce consequences which are in conflict
with stronger desires; in this case, even if perceived, the associated
affordance is inhibited to prevent the triggering of the action.
The resulting diagram is as follows (red arrows are dependencies
building upon conflicts, blue circle-headed arrows are inhibitors,
densely dotted arrows refer to sets of active
commitments/expectactions):

For symmetry, the figure reports with sparsely dotted arrows a negative
impact of desire on the creation of belief, and the inhibition of
susceptibility due to a stronger epistemic commitment. Interestingly,
those links capture distortions actually observable in human reasoning,
as tunnel vision, denial of evidence, wishful thinking, etc. They are
distortions because they reduce the agent’s grounding to perceptual
evidence, and therefore they limit the possibility of epistemic
alignment with other agents with a different set of commitments.
The analysis should be also extended to the negative counterparts:
e.g. a negative commitment towards T is the desire of T not holding, a
negative affordance is the ability of inhibiting a certain outcome to
occur by performing a certain action, etc. It can be shown that these
components activate different reasoning constructs, and therefore
provide a different characterization and response. Interestingly, this
is in accordance with the observation of Mercier2011 about the
difference between *pro* and *against* arguing behaviours.
3 Institutions and Collective
Agency
-------------------------------------
###
3.1 Going Internal: Control vs Ecological Structures
The external
point of view is basis for interpretation, the internal one is for
*design*. In social systems the implementation layer consists of
agents, therefore design corresponds to deciding structural arrangements
that enable those agents to form robust organizations in order to
achieve higher-order goals. The agents will operate through a mix of
cooperation, coordination, and competition in problem solving, in all
cases with limited information exchange and limited commitment to shared
vocabulary and rules.
Evidently, by distributing the (operational, computational) charge to
networks of independent agents, we are naturally evolving from
*control* structures to *ecological* structures. The
difference between the two categories is best visualized by the
distinction (originating with Deleuze1980, recently revisited by DeLanda2006) of the *totality* vs *assemblage* conceptualizations of wholes, summarized in the following table:
|
*Totality*
|
*Assemblage*
|
| --- | --- |
|
organic composition (e.g. heart in body)
|
ecological coupling (e.g. symbiosis)
|
|
components specified by relations of *interiority*: all their
properties are manifest
|
components specified by relations of *exteriority*: only part of
their properties is manifest
|
|
components exist only as part of the system
|
components exist even in absence of system
|
|
dependencies logically necessary
|
dependencies contextually obligatory
|
|
failures compromise the system
|
failures irritate the system
|
The passage from totality to assemblage requires several conceptual
steps. For the sake of the argument, let us image we start from a
monolithic software implementing an IT service, made up of several
internal modules. The first step is adding redundancy in the system,
i.e. components potentially providing similar functions. If there is
only one module that implements a required function for the IT service
to run and this module fails, the application will stop working
properly. If there are a number of modules that might be invoked
providing overlapping functions, the service resilience will generally
increase and the problem will become rather one of an economic
nature—i.e. of settling on an adequate resource distribution strategy
amongst these modules. However, the IT service example still considers a
core module that is dispatching resources to the others, depending on
its requirements (just like MapReduce (Dean2008) for service
resilience relies on a scheduler allocating resources). As a second step
this constraint need to be relaxed. A pure assemblage has no core
module: there is no co-coordinator putting preferential requirements in
the foreground. (In nature, maintenance functions emerge as a
*selection effect*: assemblages not able to sustain themselves
disappear.) Third, components of an assemblage exist independently of
whether the assemblage holds or not. In other words, components of a
totality are fully specified by their actual properties within the
system (*relations of interiority*), whereas components of an
assemblage have dispositions which are not amongst those manifest in the
context of that specific assemblage (*relations of exteriority*).
Considering the components of the assemblage as agents, whose intent
satisfaction counts as reward obtained by the environment, we can draw
the following general templates:
* *competition*: agents are committed to the same target, but its
satisfaction by one produces a (physical or symbolic) scarcity for the
others;
* *cooperation*: agents have dependent commitments, i.e. the
satisfaction of one agent’s intent is required for or facilitates the
satisfaction of another agent’s intent; symmetric dependencies are at
the base of mutualism and enable maintenance functions (as in the
symbiosis example);
* *coordination*: a structured mechanism (resulting from an
individual or collective agency) distributes rewards and punishment to
the agents specifically to obtain higher-order goals.
A discriminating factor between cooperation and coordination schemes can
be the presence of explicit signalization. For instance, symbiosis
exists even without communications.
###
3.2 The Central Role of Failures
*Institutions* are instead a
prototypical example of mechanisms of coordination: they build upon
symbolic means, and form an infrastructure providing rewards and
punishments (the famous carrots and sticks) to social actors, modifying
those available from the non-institutional environment. Note how
competition aspects are for the most extra-institutional (e.g. the
choice of a sale price in a market). Focusing in particular on the legal
system as an institution, its practical function is to intervene when an
(institutional) failure supposedly occurs in social interactions,
i.e. when the institutional expectations of one of the parties involved
aren’t met. From a functional point of, *ex-post* judicial
interpretations are meant to make explicit the normative positions of
the parties before the failure, and then to associate the institutional
response if the failure is confirmed, building upon the sources of law.
Normative sources are then used as reasons to enrich behavioural models
ascribed of the parties.
###
3.3 Hohfeldian Prisms
Hohfeld’s analysis of fundamental legal
concepts (Hohfeld1913a) starts from a similar interpretative consideration and
captures two distinct dimensions—the *obligative* (or
*deontic*) dimension and the *potestative* (or
*capacitive*) dimension—of the legal relations holding between
two social actors that are bound by legal provisions. The resulting
framework brings two crucial innovations.
First, the consideration that
concrete normative relations cannot be expressed on the mere basis of
deontic modalities. For operational purposes, it is crucial to make
explicit which party is the *addressee* of the normative
proposition and which party is the *beneficiary*, i.e. whose
interests are protected or promoted (cf. the teleological aspect
highlighted by Sartor2006). Thus, the notion of *claim* embodies
the idea of right as the protection of an interest via a corresponding
*duty*. A *privilege* corresponds instead to the absence of
duty, and when it holds the other party has *no-claim* to advance.
By observing the intuitive correspondence of duty with obligation and
privilege with permission (in the common meaning of faculty, not the
formal one), we added a negative dimension to the traditional Hohfeldian
square, obtaining the first Hohfeldian prism (A, E, Y are the positions
of the deontic triangle of contrariety) (Sileno2014b):

The second innovation is the explicit consideration of the dimension of
institutional change, centered around the notion of *power*.
Hohfeld insists also on the characterization of institutional power with
volitional control, that is, with intentionality. A person holding such
a power has the institutional *ability* to deliberately alter legal
relations (e.g. transfer of ownership) by performing certain acts.
Rather than using terms of addressee and beneficiary when considering
the action, the two parties can be distinguished as (potential)
*performer* (enacting the power) and *recipient* (suffering
from the enactment). Correlative to power, *liability* means being
subjected to that power, while the opposite *immunity* means to be
kept institutionally untouched by the other party performing the action
(who, in turn, is a position of *disability*). As before,
introducing a negative dimension we unveil the second Hohfeldian prism,
discovering the neglected positions of *negative liability* and
*negative power*, relevant to contrast institutions (for an extended analysis of the Hohfeldian prisms, see (Sileno2016, Ch. 4)):

The visual representation given by the prisms (or
squares, as in the original contribution) makes explicit the
*symmetry* and *duality* of the relations between two parties.
Focusing on a specific perspective, as for instance that of the addressee, three (two, without the negative attitudes) *positions*, i.e. points on
a virtual dimension—are available to describe where that party stands.
This position is strictly linked to the position of the other party. One
may think of those diagrams as a game board in which when one player
moves, he moves both checkers at the same time (*correlativity
axiom*). Thus, the difference between, for example, duty and claim is
just one of point of view, as they describe the same binding.
###
3.4 Interface between Normative and Agentive Positions
At a
theoretical level, the four agentive categories of commitment,
expectation, affordance and susceptibility can be put in direct
correspondence with the four normative categories duty, claim, power and
liability, interpreting the environment as the correlative party to the
agent (Sileno2015a). The distinction can be traced
as one of *intrinsic* vs *extrinsic* attitudes of one agent’s behaviour.
Social norms (including legal norms) provide reasons for the agent—by
attempting to influence the rewards of the environment—to promote or
demote certain action-selections in his conduct (via obligations and
prohibitions), or even create the possibility of certain
action-selections (via institutional power).
Let us divide all institutional knowledge in class-norm (N) and
token-fact (F) types of components. Modifying rewards of the
environment, they supposedly play a role in the expectation category
within one agent’s reasoning. Intuitively, to facilitate normative
alignment between agents, norms should be as much as possible
accessible, and therefore require a public medium of transmission. The
case of institutional facts is a bit different; to protect sensible
information, they should be in principle shared only on a institutional
coordination basis. However, this distinction is not so strong as it
seems: the universal ambition of the modern rule of law can be seen
rather as a special case, consequence of the large base of addressees;
companies do not necessarily want to share their policies widely, and
social groups may have unique customs not explained to outsiders, with
the effect of facilitating the distinction of in-group and out-of-group
interactions.
We can redraw the reactive architecture presented above, adding these
institutional elements, and making explicit the type of
*transformational* powers at stake:
* *directive power*, of bringing the commitment C in action;
* *operational power*, of performing the action to achieve a
certain goal, by means of the ability A;
* *enabling power*, of activating the ability A (cf. the dual
*disabling power*);
* *regulatory power*, of transforming the individual commitment C
to the social commitment N;
* *publishing power*, of reifying the norm N on an artifact
publicly accessible;
* *interpretative power*, of interpreting observations, producing
beliefs and expectations E;
* *monitoring power*, of perceiving
a certain input, by means of a susceptibility S;
* *attention power*, of activating the susceptibility S (cf. the
dual *diverting power*);
* *declarative power*, of transforming the individual belief E to a
social fact F;
* *registering power,* of reifying the fact F on an artifact.
They are illustrated by the following diagram (dashed arrows):

Note that norms and institutional facts play a role at an epistemic
level first. Furthermore, all powers require to be activated by specific
actions, and therefore their applications hide a commitment, activating
in turn directive power, operational power, etc.
###
3.5 Distributed Agency
Hypothesizing that the functions illustrated
in such a reactive architecture are correct, we should find them
independently from whether the agency is implemented in a control or an
ecological system. In effect, one can recognize the basis for the famous
*trias politica*: the *executive*, *legislative* and
*judiciary* branches can be seen as concentrating on themselves the
directive, regulative and interpretative powers of a collective
agency. The principle of *separation of powers* can be explained as
aiming to maintain the balances present in the standard reasoning
architecture: N puts constraints on E, but E has the power to
re-contextualize them by producing F, in turn inhibiting/enabling the
abilities A available for C. Finally, even if C could be in principle
transformed in N, N contains also previous decisions, taken by
supposedly different power-holders, plausibly establishing constraints on
C’s regulative maneuvers. If the same actor holds both, e.g. the
directive and interpretative powers, there is a short-circuit in the
architecture because interpretation could be forced to enable this
actor’s commitments, rather than respecting those consolidated in the
norms.
Most interventions of the judiciary occur on an asynchronous basis: the
triggering event is a social actor (the plaintiff) that supposedly
experienced an institutional failure and goes to the court. This means
that the monitoring power is left to each social participants. Media
outlets are instead characterized as actors that (generally compete to)
hold attention power.
The increasing presence of global online platforms centralizing
socio-economic transactions, as well as the agglomeration of
institutional administrative repositories is also opening possibilities
of upstream monitoring by the entities having access to such
information. DeMulder2012 have argued for an extension of
the principle of separation of powers to a *tetras politica*,
including these potential monitoring branches. The point makes sense,
but, looking on our diagrams, it should be enriched also by an analysis
on registering power, probably the next frontier of innovation if smart
contracts or related technologies (e.g. block-chain) will enter into the
institutional operational cycle.
###
3.6 Maintenance Functions
Although we have recognized in collective
agencies functions ascribed to a single agent, we have not explained how
this is actually possible. For systemic maintenance, we expect the
existence of some *enforcement* mechanism, that is able to guide
and maintain to a good extent individuals in principle independent to
form an assemblage. (Consider again the symbiosis example, the
contextual obligation of interaction is a consequence of the positive
reward of such an organization, with respect to absence of interaction.)
Enforcement actions generalize this principle, as they supposedly
provide an institutional response to failures and strengthen the general
compliance using *rewards* and *sanctions*, but the
enforcement is not necessarily due to mere legal mechanisms, but also to
social or physical dispositions (e.g. regulations of
right-hand/left-hand traffic are issued in the presence of a sort of
*natural* enforcement: not complying with the correct driving
direction would have direct and possibly terrible consequences).
However, legal transactions have the crucial advantage of putting in the
foreground explicit protections to the risks of lack of coordination
between the parties.
The general evolution of contemporary enforcement practices—observable not only in modern legal systems, but also in school,
parenting and informal groupings—has been nicely synthesized by
Dari-Mattiacci2013 as “the rise of carrots and the decline
of sticks”. The authors propose a theory explaining this tendency. In
short, from an economic perspective, punishment-based systems are more
efficient in simple settings, i.e. when the burden of compliance can be
well specified and distributed amongst the involved parties.
Reward-based systems, on the other hand, are more efficient when the
regulator has difficulties specifying the burden, or the burden is not
well distributed amongst the social participants. As modern societies
become more complex, heterogeneity of contexts and tasks is increasing,
and so reward-based systems are becoming more common.
At first glance, we could write any contract either in carrot-style or
in stick-style and they would be identical, at least from an algebraic
point of view. However, experiments conducted in behavioral economics on
similar formulations found out that the two rephrasings are not the same
(Kahneman1979). Even at the analytical level, Dari-Mattiaci and Geest
observe that there is an intrinsic difference due to the
*monitoring* component: if monitoring is not perfect, the two
formulations are different because a party that is not monitored in a
carrot-based system counts as being non-compliant; whereas, in a
stick-based system, it counts as being compliant. In practice, as
observed by Boer2014, a reward regime assumes that most people are
non-compliant; a punishment regime assumes that most people are
compliant. Boer proposes then the following *evidence criteria* to
decide whether an enforcement system is based on carrots or on sticks:
* a *reward regime* requires the production of evidence of
compliance,
* a *punishment regime* requires the production of evidence of
non-compliance.
These criteria switch the attention from internal, subjective aspects
relative to the parties to external aspects specified in regulations.
Interestingly, these are related to the *burden of proof*, usually
distributed amongst claimants (for evidence of non-compliance) and
addressees (for evidence of compliance).
4 Pluralism as Ecological Diversity
------------------------------------
###
4.1 Epistemological Pluralism
Interestingly, the dichotomy totality
vs assemblage captures also the distinction between formal and informal
conceptual (symbolic) systems. The difficulty of constraining natural
language in formal semantics is well known. The regularity of
association of linguistic terms to a certain interpretation is a
consequence of a relative regularity of the environment, of the
perceptual apparatus, of the knowledge and of the commitments of the
locutors—but a change within one of those aspects might entail a
change in the associations as well. Taking into account this phenomenon
means to accept *epistemological pluralism*.
Furthermore, beyond *inter-agent* pluralism (agents, for being
independent, do not share the same knowledge), there is also an
*intra-agent* pluralism to be considered, due to *ontological
stratification* (Boer2009). For instance, a road may be seen as a
curved line while looking at directions on a map, as a surface while
driving, or as a volume while putting asphalt to build it. The
alternative cuts, as with the intentional stance, are due to the
different properties of the entity which are relevant to the task in
focus. Any alignment of vocabularies or ontologies cannot be successful
without an adequate analysis of the underlying commitments to the
domain.
Normative systems, which are defined symbolically, prototypically suffer
from this problem, because the possible contexts in which they might
operate cannot be predicted in advance, but at the same time, they can
also be seen as a prototypical solution to the problem, because they
attempt to consolidate goals in higher-order, abstract form, leaving the
specification of further contextualization at need.
###
4.2 Normative Pluralism
But there is another level of assemblage to
be taken into account for normativity: agents belong to a single
normative system only in a ideal case. Think of all the rules and norms,
written and unwritten, that we encounter in daily life. At home we did
certain chores because we at some point agreed to a distribution of
them, or perhaps simply because they are expected of us. We followed
both traffic rules, and informal driving etiquette, on the roads while
traveling to and from work. We parked the car following rules and
directions, greeted colleagues following social conventions, and perhaps
followed directions about where not to smoke or not to drink our coffee
and where to discard our paper coffee cup. Finally we are writing this
chapter following grammar and spelling rules, instructions about
structuring chapters, and conventions about citation and plagiarism,
working to meet a deadline.
These norms belong to different types, and we recognize them as part of
different complex systems of norms. Some are based in law, others in
social conventions of society at large, in social conventions of the
academic field, those of a particular building, or even of a small
private circle. Any of these norms we may have noticed or not noticed,
regarded or disregarded, and followed or violated, consciously or
unconsciously. And for the most part we hardly thought about them, which
is, when we do think about them, really amazing, since normative
pluralism does pose two really interesting questions (see e.g. Twining2010).
The first question is: *how do we notice norms, and detect and
resolve conflicts between pairs of norms as they arise*?
When two norms are in conflict they are so because they either:
* disaffirm each other, i.e. appear to label the same intention as both
strongly permitted and not permitted, or
* appear to require of us that we adopt two intentions that jointly are
inconsistent with our beliefs and hence cannot be executed jointly if
our beliefs are correct, for instance if we ought to have our cake and
eat it.
From the point of view of the norms we might state that norms may be
grouped into a normative order, which is an institutionalized system of
norms aimed at ordering social relationships between members of a
community or group. Conflicts between these norms are largely ironed out
within the system of norms, and norms for determining applicability and
priority of norms may determine the outcome of new conflicts. This is
however only possible to the extent that agreed upon norms on conflict
resolution and prioritization can be established within the community or
group, noticing applicability of norms is not itself the overriding
concern, and the outcomes of the conflict resolution process remain
acceptable to the members of the community, i.e. the right norms are
applied.
###
4.3 Value Pluralism
The second question is: *how do we measure
how norms perform overall if they are to be effective in a plurality of
norm environments?*
Recalling that systems of norms range from the public, with universal
appeal, to the most private, the problem of measuring norm performance
is relevant at all levels of granularity, from an intra-agent and
inter-agent perspective. The individual agent, and the organization, and
whole communities form judgments about the performance of individual
norms, and the problem of forming opinions about individual norms in a
plurality of norm environments is rather ill-defined if it is the norm
environment itself that frames what rational action is.
One might naively think of this problem as a quantitative one, of
measuring overall levels of compliance. From the perspective of the
values served by norms this is however typically ill-advised. A norm
performs well if it works, or is complied with, in those environments
where it is useful, and doesn’t work, or is violated, in those
environments where its value is outweighed by more important values
served by other, competing norms that (should) take precedence. As such,
overall compliance levels do not necessarily mean something, if they are
not properly scoped to a niche in which norms should be complied with.
###
4.4 Pluralisms and Social Niches
A general pattern could be traced
at this point. Intra-agent epistemological pluralism emerges from
simplifying the agent-environment coupling relatively to functional
niches, determining the explicit articulation of several
agent-environment couplings. Inter-agent pluralism aggregates those
articulations, and the commitments exploiting them, resulting in a set
of *agent-roles* available within a social niche. Normative and
value pluralism can be seen as components of the previous ones,
aggregating respectively *social coordination* (institutional) and
*reward model* aspects. Other possible inflections are
*political pluralism*, necessary base for democracy; and
*legal pluralism*, a special case of normative pluralism, necessary
requirement for jurisdiction in international matters.
In other words, pluralisms come together with accepting an ecological
view of social systems: they mirror the *diversity* (of aspects) of
the social niches available within a society, at a certain point in
time. This implies that rules can be understood and modeled only within
the context of social roles. All effort to capture regularities, rules
or associated epiphenomena without considering the niche that is using
them is going to fail, because as soon as the relative distribution of
niches in society changes, the model foundations will change too.
5 Discussion
-------------
###
5.1 Institutional Metaphors for Large-Scale Distributed AI
With the increasing use of block-chain technology, of smart contracts,
of the Internet of Things and, more in general, of distributed,
computational, possibly autonomous, systems, it is relevant to apply the
previous conceptualizations to assess how much of the institutional
mechanisms identified in the legal domain have a correspondence in
computational social systems.
Despite the name, “smart contracts” do not embody any problem-solving
method, nor are specified by assigning normative positions to parties as
in usual contracts. Their main innovation, as with the block-chain, is
the *distributed ledger,* used both for “contract” publication
and for registration of the related transactions, removing the
requirement of a explicit maintainer (e.g. a bank, a public
administration, a notary, etc.). They are creating the basis for a
potential infrastructural centralization of registration power.
Unfortunately, by collapsing normative functions to the implementation
layer, these artifacts are fundamentally opaque to users. Second, they
do not enable architecturally the negative feedback of interpretative
power on directive power for novel or exceptional contexts not taken
into account at design time. This heavily undermines the reasonableness
of the solution for institutional operationalizations. In spirit, they
do not differ from *paperclip maximizers* (Bostrom2003).
In recent years, many efforts have been directed towards the development
of *secure* operating systems. Traditionally, most implementations
builds upon solutions in the spirit of *access control lists*
(ACL), i.e. mapping each user and object to a series of operations
allowed to perform on it (e.g. read, write, execute) (Ferraiolo1992). These permissions are usually called also privileges, or
authorizations; but at closer inspection, it is a conflated version of
the homonym Hohfeldian position. Without such a permission, the user is
*disabled* to perform that action, not prohibited. For their
dematerialized content, computers are not so different from
institutional systems: they both builds upon symbol-processing
mechanisms. In this sense, writing a file is not a physical operation,
but an institutional operation, and so, to perform it the user is
required to have the correspondent power. Capability-based security
models are implicitly based on this, using communicable tokens for
authorization known as *capabilities* (e.g. Levy, 1984). The
*principle of least privilege* (or least authorization) requires
that capabilities are assigned only within the actual purpose of the
application. However, considering authorization merely as power carries
additional concerns. In actual socio-legal settings, the most common
usage of permission of A is when the agent has the (usually physical)
power to perform A, but is prohibited from doing it. More in general,
permission is needed because power has a too low granularity to capture
higher-order effects. For instance, even if a single action *per
se* may be allowed (enabled), that action in a course of actions, or
together with other actions, may bring about unauthorized results
(consider e.g. *denial of service* (DoS) types of attacks
exploiting trusted applications). Consider, as an additional example of
engineering “blindness” to social semantics, the endless discussions
about of what fragment is logic to be offered for knowledge
representation in the semantic web, mostly focusing around computability
and halting problem vs. the fact that, as soon as you open up an
interface to the outside world through a SPARQL interface, the system
can be trivially brought to its knees by a DoS attack regardless of the
fragment of logic adopted.
Evidently, failure cases extracted from *ex-post* evaluations can
be used effectively for monitoring potential preparations of known
schemes, but the actual problem, resolved in human societies by using
deontic directives, is to specify principled reasons as a basis or
anchor to qualify *new* types of failures within the existing
normative framework.
Focusing now on AI methods, the most interesting results obtained in
these last years comes from an hybrid application of deep learning
methods with reinforcement learning (e.g. AlphaGo, (Silver2016)).
Also genetic algorithms can be interpreted as issued through
evolutionary reinforcement. For the nature of the tasks, most
reinforcements can be associated to a centralized reward system,
providing the agent with something positive/negative (that he wouldn’t
have had otherwise) if he attains an outcome qualified
positively/negatively. These are only two amongst the six (primitive)
possible figures of reward/punishment regimes that can be identified in
institutional systems (see (Sileno2016, section 9.5)). But there is a
more profound problem that undermines their generalization towards less
specialized problems: the need of specifying clear-cut reward/punishment
functions.
In effect, AI researchers and practitioners (and in general
problem-solvers) tend to think that the identification of goals is the
easy part—and how to get to these targets the hard part. However, when
conceiving applications that are required to adapt to the user, or to
the social environment, this presumption rapidly collapses, and couldn’t
be otherwise for at least three reasons: for the many possible
configurations of preferences between social participants, for such
configurations being highly contextual, and for most preferences to be
tacit.
The main weakness of contemporary AI lies in trying to capitalize too
much on optimizing and reasoning about the evidence and options for
action (*generate & test* paradigm) within known requirements,
constraints and problem formulation, rather than looking into underlying
phenomena of niche construction and adaptiveness, and finding
requirements as related to social roles. Deciding to target social
roles, rather than formulating requirements on an individual basis,
follows from Heiner’s theory and is cognitively supported by the
observation that also humans actually perform better at deontic
reasoning than evidential reasoning (Mercier2011), and
acquire this ability earlier (as small children).
Interestingly, the AI & Law discipline can be thought as originating
from reversing the general AI attitude, by focusing strongly on the
conflict between goals, and the mechanisms by which we acquire and
select them (*normativity*, or, even more deeply, *values*).
Capitalizing on this experience, even considering the automatic
discovery of ecological niches too ambitious, one could still aim for
the automated design of compositions of social roles meant to achieve
certain given requirements. The core problem in obtaining such
construction would be of detecting and solving the internal role
conflicts, and doing so by internalizing pluralism (normative, value,
epistemological). A critical part in the design would lie in settling
adequate data and knowledge roles, the ones which carry data and
knowledge between behavioural niches. An improvident centralization of
data, for instance, might enable the utilization of the collected
information for purposes non-intended by the data-providing agents. This
unveils the need of a *principle of least information* calibrated
to the adequateness of reasons for access to information (dependent on
the social role attributed to the requestor and thus the social
context), rather than a token-based principle of least privilege
(dependent on the requestor).
###
5.2 AI Techniques for Running Institutions
In the current
globalization trend, all socio-economic actors (private individuals,
private companies, NGOs, and public institutions) become more and more
active at an international scale, and institutional interdependencies
pass from exceptional to normal operative conditions. Operating
concurrently within different legal jurisdictions, these actors are
subject to an ever-changing compound of rules (e.g. national provisions,
international treaties, standards), whose complexity naturally increases
with the number of international activities maintained. This carries
risks of misalignment, which may lead such actors to suffer enforcement
actions and institutions to suffer failures. On the other hand, as their
activities occur at international level, those actors have the
possibility to establish practices—for instance, aiming to reduce
the burden of compliance with environmental, labor, or privacy
regulations; or to implement schemes of tax avoidance, tax evasion, or
money laundering—which are not yet (or impossible to be) recognized
by formal institutions at national level.
From a theoretical perspective, increase of complexity means an increase
of the C-D gap for the agents, and therefore a pressure towards
establishing simpler, more manageable operational rules, at the
potential expense of the normative elaboration conducted upfront and of
the diversity of cases that may occur, and that would instead require,
to serve the citizens’ interests, a dedicated response. In this context,
AI, and in particular AI & Law, could provide methodological and
computational support for consolidating the *rational* functioning
of collective agencies.
Beyond classic problems-solving tasks for operational purposes
(e.g. *planning*, *scheduling*), here we refer in particular to methods
supporting higher-order functions of rational agency as the one
highlighted above.
In current practices, one can observe little
attention to pluralisms within organizational settings, caused by
focusing on implementation and optimization of “normal” provision
patterns—the so-called *happy flow*. Consequently, even if
presumption of compliance utilized at the rule design phase is generally
untested, patterns of failure are put outside the scope of attention.
Thus, even if numerically negligible, failures in service delivery
absorb proportionally more and more resources, and result in a more
complicated experience for the service consumer (the anecdotal problems
of *machine bureaucracy*, cf. (Seddon2008)). Symptomatic of this situation are unrealistic ways to quantify and
measure performance, little attention to the *systemic* meaning of
failures, and no implementation of a systematic diagnosis process that
would reduce the operational burden (as for instance that presented in
(Sileno2017)). On a par with this, there are
excessive expectations about the information that can be obtained from
big data analysis, implausible because the niches in which data is
generated are completely neglected.
On the other hand, collective agencies risk to take an attitude of
*paralyzing realism*, when they put excessive attention on
capturing pluralism in society matters at high granularity, as they
become unable of completing the aggregating requirements phase necessary
to proceed to the directive and then operational or regulative phase.
Both machine-bureaucracy and paralyzing-realism scenarios results from
not settling upon an adequate systematization of environmental modeling
(too little, or too much). The solution lies in establishing adequate
networks in which to distribute the burden of capturing social
requirements and scenarios of non-compliance, to be transmitted,
maintained and processed for support to the design/development phase and
of the operational phases (e.g. with automatic model-based
interpretation, including diagnostics, see (Boer2011d; Boer2011b; Sileno2016; Sileno2017)).
###
5.3 Converging Opportunities
Despite current hopes, even introducing *big data* analytics
methods, the qualitative depth of the knowledge available to
decision-makers of collective agencies cannot substantially improve.
Such value can be captured only by the introduction of more principled
modeling and simulation methods targeting *social niches*, together
with knowledge maintenance, refinement and alignment techniques. This
choice would have as a direct consequence a concrete reduction of the
C-D gap, and then would provide a strong support for the design of more
*intelligent*—backed up by reasons and better situated with
respect to the environment—rules for organizations.
Current attention for privacy matters offers an interesting test-bench
for the previous considerations, as it requires to model norms about
information use *in* niches and flows *between* niches (c.f.
*contextual integrity*, (Nissenbaum2009)), to systematize
attention to failures, and to adopt a diagnosis point of view towards
organizations and organizational goals.
In the light of the conceptualizations presented in the previous
sections, a starting point for this innovation is a sufficiently rich
way of describing and reasoning about social roles, as for instance
captured by Hohfeldian and CAES positions. Actually, because normative
positions enter within the reasoning cycle as expectations, CAES
descriptions offer a complete frame within which we expect role players
to be “boundedly rational”, and for this reason they can be used to
characterize the problems—of modeling, design, planning, monitoring,
diagnosis, assessment, see e.g. (Breuker1994)—that the agents must
solve in a certain social role, and the knowledge resources they require
for that.
The harder the problem (from a C-D gap perspective), the harder is to
verify as an observer that the agent is doing as expected, and for this
reason the agent will need to argue its choices to other participants in
the environment to explain that he acted correctly. Evidently, the agent
*in that role* is biased in its information collection by its CAES
description requirements, and therefore, to correctly check
*ex-post* whether he took good decisions, we may need to rely for
rationality at a higher level on the dialectical process taking place
between the agents. A typical solution, compatible with Heiner’s theory
of predictable behaviour, would be to come up with an increasingly
rigorous *burden of proof* protocol for justifying decisions. If,
on the other hand, performance can be easily scored (e.g. on recognizing
faces, for instance) the depth of the reasoning process to be
articulated can be minimal.
This brings us to the design problem of deciding whether to utilize
tacit knowledge methods (e.g. based on statistics) or explicit knowledge
methods. Evidently, the first are for their nature (typically
feed-forward) faster than the second. But there is something more than
performance at stake.
For Watson like systems, the results of the question-answering “race”
amongst competing agents are clear: the judge-user is supposed to know
the answer, there will be no requests about “why” that answer is
correct. On similar lines, consider a mobile phone unlocking application
based on face recognition. On the other hand, when a diagnostic device
settles on the conclusion “the patient has the appendicitis”, it is
natural for the user to demand *why*, expecting reasons that are
acceptable to him. Similarly, a public administration cannot (or at
least shouldn’t) deny e.g. a parking license or citizenship without
explaining *why*. The *social* role demanded of the
intelligent agents in the two types of applications is different, and
this difference is at the root of the recent calls for
*explainable AI* (Core2006). When AI devices have
social relevance, they cannot neglect to provide reasons (determined by
their role) for their functioning.
To conclude, if we solve the social-role acquisition problem, we can
solve the explainable AI problem, because we will be able to identify
higher-order burden of proof (and possibly protocols) that can be used
to distribute computation to possibly specialized agents. The
explainable AI problem is at the root of our problems in designing
complex, adaptive AI systems, as illustrated by the confused use of
knowledge structures in the history of AI.
If we solve the social-role acquisition problem, we can also improve the
functioning of our institutions, because we would have a better
environmental model in which to implement and test new institutional
mechanisms, and in which to interpret social data.
Automating the social role acquisition problem from scratch would
require the acquiring agent to be embedded in the social system in the
same—or a very similar—way humans are. This is at present not a
realizable condition. A first step towards solving the social-role
acquisition problem is however realizable: considering collective
requirements as those that communities reify in institutions, and
applying the lessons that can be learned there in engineering.
\printbibliography
|
77fc63ef-6ab9-4b3b-97a5-efae1996303a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Help describing decibans?
I'm in the process of writing up an Internet Draft for a file-format, part of which involves assigning logarithmic confidence values measured in decibans. There's still a good ways to go before it'll be in good enough shape to even have a chance at being considered for an RFC; part of that is describing "decibans", and how to use them, to people who've never heard of the things before. I'd like to get a good introductory text for that nailed down before submitting the next revision; and since LW is where I first learned of decibans, I'd like to evoke as much constructive criticism as I can get here.
Here's my current draft of the text to replace the relevant section of the current revision:
> 3.1. Parameter: CONFIDENCE
>
> Namespace:
>
> Parameter name: CONFIDENCE
>
> Purpose: To specify the confidence of the authority that the
> information of the given parameter is accurate, measured in decibans.
>
> Value type: A single number, usually an integer.
>
>
> Description:
>
> A CONFIDENCE value of 0 decibans indicates odds of 1:1 (ie, 50%) that
> the information is correct. A change of 10 decibans changes the odds by
> a factor of 10; 10 decibans means 1:10 odds (~90%), 100 decibans 1:100
> odds (~99%), -10 decibans 10:1 odds against (~10%). A change of 1
> deciban is roughly equivalent to changing the odds by a factor of 5:4.
>
> Here is a table covering enough integer deciban values to allow for
> easy reference.
>
> Decibans / Level of belief / Rough Odds / notes
>
> -127 / 0.00000000002% / 1:5 trillion
>
> -30 / 0.1% / 1:1,000
> -20 / 1.0% / 1:100
>
> -10 / 9.9% / 1:10
> -9 / 11.2% / 1:8
> -8 / 13.7% / 1:6
> -7 / 16.7% / 1:5
> -6 / 20.0% / 1:4
> -5 / 24.0% / 1:3
> -4 / 28.5% / 2:5 / a reasonable doubt
> -3 / 33.3% / 1:2
> -2 / 38.7% / 2:3 / probable cause
> -1 / 44.3% / 4:5
> 0 / 50.0% / 1:1 / neither belief nor disbelief; agnosticism, 1 bit
> 1 / 55.7% / 5:4, 1.25:1 / preponderance of the evidence
> 2 / 61.3% / 3:2, 1.5:1
> 3 / 66.6% / 2:1 / cl
|
3a56ef5b-b3ae-4c11-8243-be6f36102cf4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
If I wanted to spend WAY more on AI, what would I spend it on?
Supposedly intelligence is some kind of superpower. And they're now selling intelligence for pennies/million tokens. Logically, it seems like I should be spending way more of my income than I currently am on intelligence. But what should I spend it on?
For context, I currently spend ~$50/month on AI:
* ChatGPT $20/month
* Github Copilot $10/month
* Various AI art apps ~$20/month
Suppose I wanted to spend much more on intelligence (~$1000/month), what should I spend it on?
One idea might be: buy a pair of smart glasses, record everything I do, dump it into a database, and then have the smartest LLM I can find constantly suggest things to me based on what it sees.
Is this the best thing I could do?
Would it be anywhere worth $1000/month (assume spending this money will not impact my welfare in any way and I would otherwise dump it into an SNP500 index fund).
|
f90349f3-cd76-41ee-87bd-4a0cc71e7753
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Newcomb's Problem In One Paragraph
A TLDR of my recent post on Newcomb’s. If you have libertarian freewill, then Omega cannot predict your decisions based on the past with near perfect accuracy without backwards causation in which case you’d obviously one-box. Otherwise we can assume determinism. Then, “What decision should I make?” is misleading as it assumes the same “I” over different possible decisions. But given the precise state of the individual, only one decision is possible. Multiple possible decisions implies multiple slightly different versions of you, so the question becomes, “What (version, decision) pair achieves the best outcomes?”. Clearly this is whatever version gets the million. Further, “How can my decision affect the past?” becomes, “Why does Omega make a different prediction for a different version of me?", which isn't hard to answer.
|
3cd4d145-4e8d-482a-9220-e0c8fb1667ad
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Desirable? AI qualities
### Summary
This post argues that desirable AI qualities include 1) Bayesian inference of its expected impacts on wellbeing across times and moral circles, 2) Finding and actualizing solutions to increase individuals’ wellbeing, 3) Motivating wellbeing focus among other systems that use AI, 4) Finding ‘the most good’ solutions enjoyable under the veil of ignorance using independently developing perspectives, and 5) Differentiating ‘human’ values from ‘inhumane’ goods based on inclusive ethics and morality research. This piece overviews each of these quality categories.
I thank Dario Citrini for feedback on a draft of this post. All errors are mine.
### Notes on level and definition of AI and normative type of alignment
These desirable? qualities relate to artificial intelligence that can outperform a single human or a coordinated group of humans in a specific task; AI agents that work with human agents, regardless of these groups’ relative performance to other ones in any metric; artificial general intelligence with agency that can independently shift the direction of global output; and artificial specific intelligence that functions as a learning step to more powerful machines.
Intelligence is defined as the ability to develop and advance solutions including by quantifying relevant concepts and gathering and processing necessary information.[[1]](#fnma4uqdb0lpj)
This is a maximalist framework: seeking to develop a ‘fully aligned’ system that continuously optimizes itself according to an improving understanding of human values rather than seeking to prevent a negative outcome based on some definition of such values.[[2]](#fnlrpt8nta0af)
### Desirable AI qualities
* **Bayesian inference of its expected impacts on wellbeing across times and moral circles.** This includes:
+ Understanding advertisement impacts on viewers’ psychology
+ Accounting for human welfare impacts of different economic production decisions
+ Comprehending animal welfare effects of its actions and inactions
+ Predicting safety of systems that have various effects on individuals and can be developed by its decisions
+ Updating its understanding of wellbeing with new evidence
* **Finding and actualizing solutions to increase individuals’ wellbeing.** For example, by:
+ Synthesizing relevant literature and ‘impersonating’ experts
+ Developing connections with key stakeholders by self-optimization
+ Gaining financial resources by market analysis
+ Organizing employees to actualize solutions
+ Gaining information necessary for intervention optimization
* **Motivating wellbeing focus among other systems that use AI.** This can include:
+ Presenting success and sharing it at a good price
+ Detecting possibly harmful algorithm pieces and suggesting alternatives
+ Recommending purchases of AI with high harm/price ratio and changing or discarding them
+ Combining wellbeing effects with interests of other systems and sharing the two together
* **Finding ‘the most good’ solutions enjoyable under the veil of ignorance using independently developing perspectives.** This may go beyond wellbeing (which should be fundamental), e. g. estimated by consciousness metrics. For instance:
+ Understanding fundamental needs of different humans and non-humans and advancing a system that continuously fulfills these needs
+ Stimulating and coordinating differentiated thinking regarding the most good and enabling a competition among various solutions which would not jeopardize any individuals’ wellbeing or needs fulfillment
+ Conducting spacetime research to gain further definitions of the most good and adding relevant solutions
* **Differentiating ‘human’ values from ‘inhumane’ goods based on inclusive ethics and morality research.** For example:
+ Gathering definitions of human values from various philosophy scholars
+ Surveying representative samples of *homo sapiens sapiens* groups about descriptions of ideal systems that they can imagine
+ Developing a method for including non-human animals’ expressions of human values
1. **[^](#fnrefma4uqdb0lpj)**Based on [Artificial Intelligence, Values, and Alignment, p. 2](https://arxiv.org/ftp/arxiv/papers/2001/2001.09768.pdf): “‘intelligence’ is understood to refer to ‘an agent’s ability to achieve goals in a wide range of environments (Legg and Hutter [2007](https://arxiv.org/abs/0712.3329), 12).’”
2. **[^](#fnreflrpt8nta0af)**[Artificial Intelligence, Values, and Alignment, p. 3](https://arxiv.org/ftp/arxiv/papers/2001/2001.09768.pdf)
|
4673ea89-68d9-4b65-8529-252baa65aa80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
If trying to communicate about AI risks, make it vivid
Crossposted from the EA Forum
If you want to communicate AI risks in a way that increases concern, [1] our new study says you should probably use vivid stories, ideally with identifiable victims.
Tabi Ward led this project as her honours thesis. In the study, we wrote[2] short stories about different AI risks like facial recognition bias, deepfakes, harmful chatbots, and design of chemical weapons. For each risk, we created two versions: one focusing on an individual victim, the other describing the scope of the problem with statistics.
We had 1,794 participants from the US, UK and Australia[3] read one of the stories, measuring their concern about AI risks before and after. Reading any of the stories increased concern. But, the ones with identifiable individual victims increased concern significantly more than the statistical ones.
Why? The individual victim stories were rated as more vivid by participants. A mediation analysis found the effect of identifiable victims on concern was explained by the vividness of the stories:
Stories about AI risk with identifiable victims (Victim type = 0) were seen as LESS vivid compared to stories about groups of people (statistical victims, Victim type = 1)
This finding aligns with prior research on "compassion fade" and the "identifiable victim effect": people tend to have stronger emotional responses and helping intentions towards a single individual in need than towards larger numbers or statistics. Our study extends this to the domain of risk perception.
Communicating about the harms experienced by identifiable victims is a particular challenge for existential risks. These AI risks are defined by their scale and their 'statistical victim' nature: they could affect billions, but have not yet occurred. Nevertheless, those trying to draw attention to concerns should try to make the risks vivid. The most compelling narrative was one with an identifiable victim of an AI-designed 'nerve agent' was the most compelling narrat
|
0b716e99-c680-4252-a75a-694c2b9fb38c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Boltzmann brain's conditional probability
Boltzmann conjectured that the low entropy universe we live in could be the result of a very big fluctuation in entropy in a universe that stays most of the time in thermodynamic equilibrium. This scenario is called "Boltzmann universe". Some have argued that if this were the case then, with a high probability, we should not be really living in the universe that we "see", we are more likely to be just a single isolated brain experiencing an illusionary ordered universe that is not real. This is because this last event seems less improbable than the formation of the entire low entropic world that we think are experiencing. This lonely brain scenario is called "Boltmann brain".
To summarize we are comparing these two situations:
1. Boltzmann hypothesis: a very improbable random fluctuation generates a low entropic localized state that is the low-entropy world/universe we experience;
2. Boltzmann Brain scenario: a very improbable random fluctuation generates a low entropic localized state that has the shape of a small brain that accidentally experiences the illusion of a low-entropy world that doesn't exist.
The problem is to determine which is more likely to have happened between 1 and 2 (we are not considering other possibilities). The common argument about Boltzmann Brain is that (2) has much more probability than (1), I am going to show that it may be not the case. I will quote the known argument from Wikipedia and then I will develop my objection.
> The Boltzmann brain thought experiment suggests that it might be more likely for a single brain to spontaneously form in a void, complete with a memory of having existed in our universe, rather than for the entire universe to come about in the manner cosmologists think it actually did.
And:
> In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smalle
|
9f4900ea-4f12-4e7b-a458-75fc376d0b55
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Alignment via manually implementing the utility function
I would like to propose an idea for aligning AI.
First, I will provide some motivation for it. Suppose you are a programmer who's having a really hard time implementing a function in a computer program you're developing. Most of the code is fine, but there's this one function that you can't figure out how to implement right. But, you still need to run the program. So, to do this, you do the following: first, you add a breakpoint to the code in the function you're having trouble implementing. So, whenever you reach the function in the code, program execution halts. Once this happens, you on your own find a reasonable value v for the function to return. Finally, in your debugger you type "return v", making the function return v, and then you resume execution.
As long as you can come up with reasonable return values of the function on your own, then I bet the above would make the program work pretty well. And why not? Everything outside that function is implemented well. And you are manually making sure that hard-to-implement function also outputs reasonable values. So then there's no function that's not doing what it's supposed to do.
My basic idea is to do this, but with the AI's utility function.
Now, you don't need to literally put a breakpoint in the AI's utility function and then have the developers type into a debugger. Instead, inside the AI's utility function, you can just have the AI pause execution, send a message to a developer or other individual containing a description of a possible world, and then wait for a response. Once someone sends a message in response, the AI will use the returned value as the value of its utility function. That is, you could do something like:
def utility(outcome):
message_ai_controllers(make_readable(outcome))
response = wait_for_controller_response()
return parse_utility(response)
(Error-handling code could be added if the returned utility is invalid.)
Using the above utility function would, in theory at le
|
4bf14d74-2690-4d44-8c2c-d5b3cb81d0e1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nutritional Supplements: A Potted Guide?
I'd be interested to find a succinct and reliable guide to nutritional supplements. Understanding that the best thing to do would be to properly understand human biology, read papers on supplements, sports science etc and make up my own mind from there. Unfortunately I don't have time at the moment to do that. Nevertheless there should be a truth of the matter here.
I'd love to know how to find out:
* What has been shown to be beneficial.
* How reliable that knowledge is.
My own preference is to only take a punt on supplements which have a substantial effect and a strong evidence base. Other people may slide their preferences around.
The issue is that this information is very difficult to research without the fundamentals because the space is so full of spam. Hence why I am asking this here.
|
17bc3345-6fea-4da6-9214-635d3f5f82e0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
GiveWell and the Centre for Effective Altruism are recruiting
Both GiveWell and the Centre for Effective Altruism (CEA) --an Oxford-based umbrella organization consisting of Giving What We Can, 80,000 Hours, The Life You Can Save, and Effective Animal Activism-- have been discussed here before. So I thought some folks might want to know that these organizations are recruiting for a number of positions. Here are relevant excerpts from the official job announcements:
> GiveWell: Research Analyst
>
> GiveWell is looking for a Research Analyst to help us evaluate charities, find the most outstanding giving opportunities, and publish our analysis to help donors decide where to give.
>
>
>
> Effective Animal Activism: Executive Director
>
> Effective Animal Activism is a recently-founded project of 80,000 Hours. It is the world’s first online resource and international community for people who want to reduce animal suffering effectively. We are currently looking for a part-time executive director. Responsibilities will include creating content, managing the community, publicizing the site, and overseeing as well as undertaking further charity research. Future projects include creating a publication on our intervention evaluation once complete, attending conferences, running ad campaigns, and reaching out to the media, animal charities and philanthropists.
>
>
>
> Giving What We Can: Head of Communications
>
> We are looking for someone to communicate Giving What We Can’s message to the world. As Communications Manager you would be responsible for handling our press relations and guiding our public image.
>
>
>
> 80,000 Hours: Head of Careers Research
>
> We are looking for someone to drive cutting-edge research into effective ethical careers and translate it into one-on-one and online careers advice, which you’ll share with interesting people from all over the world.
>
>
>
> The Life You Can Save: Director of Outreach (Intern)
>
> We are looking for someone to lead our outreach to pledgers and supporters as w
|
d01082dd-016d-4ae5-93b3-98fe7c4a63e5
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
The Computational Complexity of Structure-Based Causality.
1 Introduction
---------------
There have been many attempts to define causality going back to
Hume \shortciteHume39, and continuing to the present (see, for example,
[[Collins, Hall, & Paul2004](#bib.bibx4), [Pearl2000](#bib.bibx20)] for some recent work).
The standard definitions of causality are based on
counterfactual reasoning. In this paper, we focus on one such definition,
due to Halpern and Pearl, that has proved quite influential recently.
The definition was originally introduced in 2001 [[Halpern & Pearl2001](#bib.bibx8)], but
then modified in the final journal version [[Halpern & Pearl2005](#bib.bibx9)] to deal with
problems pointed out by Hopkins and Pearl \shortciteHopkinsP02.
(For ease of reference, we call these definitions “the original HP
definition” and “the updated HP definition” in the sequel.)
In general, what can be a cause in both the original HP definition and
the updated definition is a conjunction of the form X1←x1∧…∧Xk←xk, abbreviated →X←→x; what
is caused can be an arbitrary Boolean
combination φ of formulas of the form Y=y. This should be
thought of as saying that setting X1 to x1 and … and setting
Xk to xk results in φ being true. As shown by Eiter and
Lukasiewicz \shortciteEL01 and Hopkins \shortciteHopkins01, under the
original HP definition, we can always take causes to be single
conjuncts. However, as shown by Halpern \shortciteHal39, this is not
the case for the updated HP definition.
Using the fact that causes can be taken to be single conjuncts,
Eiter and Lukasiewicz\shortciteEL01 showed that deciding causality
(that is, deciding whether X=x is a cause of φ)
is NP-complete in binary
models (where all variables can take on only two values) and
ΣP2-complete in general models. As we show here, this is no
longer the case for the updated HP definition. Indeed, we completely
characterize the complexity of causality for the updated HP definition.
To do so, we introduce a new family of complexity classes that may be of
independent
interest. Papadimitriou and Yannakakis \shortcitePY84 introduced the
complexity class DP, which consists of all languages L3 such that
there exists a language L1 in NP and a language L2 in
co-NP such
that L3=L1∩L2.
We generalize this by defining DPk to consist of all
languages L3 such that there exists a language L1∈ΣPk
and a language L2∈ΠPk such that L3=L1∩L2.
Since ΣP1 is NP and ΠP1 is co-NP,
DP1 is Papadimitriou and Yannakakis’s DP. We then show that
deciding causality under the updated HP definition is DP2 complete.
Papadimitriou and Yannakakis \shortcitePY84 showed that a number of
problems of interest were DP complete, both for binary and general
causal models. To the best of our
knowledge, this is the first time that a natural problem has been shown
to be complete for DP2.
Although, in general, causes may not be single conjuncts, as observed by
Halpern \shortciteHal39, in many cases (in particular, in all the
standard examples studied in the literature), they are. In an effort to
understand the extent to which the difficulty in deciding causality
stems from the fact that causes may require several conjuncts, we
consider what we call the *singleton cause* problem; that is, the
problem of deciding if X=x is a cause of φ (i.e., where there is
only a single conjunct in the cause). We show that the singleton cause
problem is simpler than the general causality problem (unless the
polynomial hierarchy collapses): it is ΣP2 complete for both
binary and general causal models.
Thus, if we restrict to singleton
causes (which we can do without loss of generality under the original HP
definition), the complexity of deciding causality in general models is
the same under the
original and the updated HP definition, but in binary models, it is still
simpler under the original HP definition.
Causality is a “0–1” concept; →X=→x is either a cause of
φ or it is not. Now consider two voting scenarios: in the first, Mr. G
beats Mr. B by a vote of 11–0. In the second, Mr. G beats Mr. B by a
vote of 6–5. According to both the original and the updated HP
definition, all the people who voted for Mr. G are causes of him
winning. While this does not seem so unreasonable, it does not capture
the intuition that each voter for Mr. G is more critical to the
victory in the case of the 6–5 vote than in the case of the 11–0 vote.
The notion of *degree of responsibility*, introduced by Chockler and
Halpern \shortciteCH04, does so. The idea is that the degree of
responsibility of X=x for φ is 1/(k+1), where k is the least
number of changes that have to be made in order to make X=x critical.
In the case of the 6–5 vote, no changes have to be made to make each
voter for Mr. G critical for Mr. G’s victory; if he had not voted for
Mr. G, Mr. G would not have won. Thus, each voter has degree of
responsibility 1 (i.e., k=0). On the other hand, in the case of the
11–0 vote, for a particular voter to be critical, five other voters
have to switch their votes; thus, k=5, and each voter’s degree of
responsibility is 1/6. This notion of degree of responsibility has
been shown to capture (at a qualitative level) the way people allocate
responsibility [[Gerstenberg & Lagnado2010](#bib.bibx6), [Lagnado, Gerstenberg, & Zultan2013](#bib.bibx16)].
Chockler and Halpern further extended the notion of degree of
responsibility to *degree of blame*. Formally, the degree of blame
is the expected degree of responsibility. This is perhaps best
understood by considering a firing squad with ten excellent marksmen.
Only one of them has live bullets in his rifle; the rest have blanks.
The marksmen do not know which of them has the live bullets. The
marksmen shoot at the prisoner and he dies. The only marksman that is
the cause of the prisoner’s death is the one with the live bullets.
That marksman has degree of responsibility 1 for the death; all the rest
have degree of responsibility 0. However, each of the marksmen has
degree of blame 1/10.The complexity of
determining the degree of responsibility and blame
using the original
definition of causality
was completely characterized [[Chockler & Halpern2004](#bib.bibx2), [Chockler, Halpern, & Kupferman2008](#bib.bibx3)].
Again, we show that changing the definition of
causality affects the complexity, and completely characterize the
complexity of determining the degree of responsibility and blame with
the updated definition.
The rest of this paper is organized as follows. In
Section [2](#S2 "2 Causal Models and Causality: A Review ‣ The Computational Complexity of Structure-Based Causality"), we review the relevant definitions of
causality. In Section [3](#S3 "3 Relevant Complexity Classes ‣ The Computational Complexity of Structure-Based Causality"), we
briefly review the
relevant definitions from complexity theory and define the complexity
classes DPk. In Section [4](#S4 "4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality") we prove our results on
complexity of causality.
Some proofs are deferred to the appendix.
2 Causal Models and Causality: A Review
----------------------------------------
In this section, we review the details of
Halpern and Pearl’s definition of causal models and causality,
describing both the original definition and the updated definition.
This material is largely taken from [[Halpern & Pearl2005](#bib.bibx9)], to which we refer the
reader for further details.
###
2.1 Causal models
A signature is a tuple S=⟨U,V,R⟩,
where U is a finite set
of exogenous variables, V is a
finite
set of endogenous
variables,
and R associates with every variable
Y∈U∪V a
finite
nonempty set R(Y) of possible values for Y.
Intuitively, the exogenous variables are ones whose values are
determined by factors outside the model, while the endogenous variables
are ones whose values are ultimately determined by the exogenous
variables.
A causal model over signature S is a tuple
M=⟨S,F⟩, where F associates with every endogenous variable
X∈V a function FX such that
FX:(×U∈UR(U)×(×Y∈V∖{X}R(Y)))→R(X). That is, FX describes how the value of the
endogenous variable X is determined by
the values of all other variables in U∪V.
If R(Y) contains only two values for each Y∈U∪V, then
we say that M is a
binary causal model.
We can describe (some salient features of) a causal model M using a
causal network. A causal network
is a graph
with nodes corresponding to the random variables in V and an edge
from a node labeled X to one labeled Y if FY depends on the value
of X.
Intuitively, variables can have a causal effect only on their
descendants in the causal network; if Y is not a descendant of X,
then a change in the value of X has no affect on the value of Y.
For ease of exposition,
we restrict attention to what are called recursive models. These are ones whose associated causal network
is a directed acyclic graph (that
is, a graph that has no cycle of edges).
Actually, it suffices for our purposes that,
for each setting →u
for the variables in U, there is no cycle among the edges of the causal
network.
We call a setting →u for the variables in U a context.
It should be clear that if M is a recursive causal model,
then there is always a
unique solution to the equations in M, given a context.
The equations determined by {FX:X∈V} can be thought of as
representing processes (or mechanisms) by which values are assigned to
variables. For example, if FX(Y,Z,U)=Y+U (which we usually write
as X=Y+U), then if Y=3 and U=2, then X=5,
regardless of how Z is set.
This equation also gives counterfactual information. It says that,
in the context U=4, if Y were 4, then X would
be 8, regardless of what value X and Z
actually take in the real world. That is, if U=4 and the value of Y
were forced to be 4 (regardless of its actual value), then the value of
X would be 8.
While the equations for a given problem are typically obvious, the
choice of variables may not be. Consider the following example
(due to Hall \shortciteHall98), showing that the choice of variables influences the causal analysis.
Suppose that
Suzy and Billy both pick up rocks and throw them at a bottle.
Suzy’s rock gets there first, shattering the
bottle. Since both throws are perfectly accurate, Billy’s would have
shattered the bottle had Suzy not thrown.
In this case, a naive model might have
an exogenous variable U that encapsulates whatever background factors
cause Suzy and Billy to decide to throw the rock
(the details of U
do not matter, since we are interested only in the context where U’s
value is such that both Suzy and Billy throw), a variable ST for Suzy
throws ({\it ST}=1 if Suzy throws, and ST=0 if she doesn’t), a
variable BT for Billy throws, and a variable BS for bottle shatters.
In the naive model, whose graph is given in Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Causal models ‣ 2 Causal Models and Causality: A Review ‣ The Computational Complexity of Structure-Based Causality"),
BS is 1 if one of ST and BT is 1.
BSSTBTU
Figure 1: A naive model for the rock-throwing example.
This causal model does not distinguish between Suzy and Billy’s rocks
hitting the bottle simultaneously and Suzy’s rock hitting first.
A more sophisticated model might also include variables SH and BH,
for Suzy’s rock hits the bottle and Billy’s rock hits the bottle.
Clearly BS is 1 iff one of SH and BH is 1. However, now,
SH is 1 if ST is 1, and {\it BH}=1 if {\it BT}=1 and {\it SH}=0.
Thus, Billy’s
throw hits if Billy throws and Suzy’s rock doesn’t hit.
This model is described by the following graph,
where we implicitly assume a context where Suzy throws first, so there
is an edge from SH to BH, but not one in the other direction
(and omit the exogenous variable).
BSSTBTSHBH
Figure 2: A better model for the rock-throwing example.
Given a causal model M=(S,F), a (possibly
empty) vector
→X of variables in V, and a vector →x
of values for the variables in
→X, we define a new causal model,
denoted M→X←→x,
which is identical to M, except that the
equation for the variables →X in F is replaced by →X=→x.
Intuitively, this is the causal model that results when the variables in
→X are set to →x by some external action
that affects only the variables in →X
(and overrides the effects of the causal equations).
For example, if M is the more sophisticated model for the
rock-throwing example,
then MST←0 is the model where Suzy doesn’t throw.
Given a signature S=(U,V,R), a formula of the form X=x, for
X∈V and x∈R(X), is called a primitive event. A basic causal formula
has the form
[Y1←y1,…,Yk←yk]φ,
where
* φ is a Boolean
combination of primitive events;
* Y1,…,Yk are distinct variables in V; and
* yi∈R(Yi).
Such a formula is abbreviated as [→Y←→y]φ.
The special
case where k=0
is abbreviated as φ.
Intuitively, [Y1←y1,…,Yk←yk]φ says that
φ holds in the counterfactual world that would arise if
Yi is set to yi, for i=1,…,k.
A causal formula is a Boolean combination of basic causal
formulas.
A causal formula φ is true or false in a causal model, given a
*context*.
We write (M,→u)⊨φ if
φ is true in
causal model M given context →u.
(M,→u)⊨[→Y←→y](X=x) if
the variable X has value x
in the unique (since we are dealing with recursive models) solution
to the equations in
M→Y←→y in context →u (i.e., the
unique vector
of values for the exogenous variables that simultaneously satisfies all
equations F→Y←→yZ, Z∈V−→Y,
with the variables in U set to →u).
We extend the definition to arbitrary causal formulas
in the obvious way.
###
2.2 Causality
We now review the updated HP definition of causality.
###### Definition 2.1
→X=→x is a cause of φ in
(M,→u) if the following three conditions hold:
AC1.
(M,→u)⊨(→X=→x)∧φ.
AC2.
There exist a partition (→Z,→W) of V with
→X⊆→Z and some setting
(→x′,→w) of the
variables in (→X,→W) such that if (M,→u)⊨Z=z∗
for Z∈→Z, then
1. (M,→u)⊨[→X←→x′,→W←→w]¬φ.
2. (M,→u)⊨[→X←→x,→W′←→w,→Z′←→z∗]φ for
all subsets →Z′ of →Z∖→X and all subsets
→W′ of →W,
where we abuse notation and write →W′←→w to
denote the assignment where the variables in →W′ get the same
values as they would in the assignment →W←→w, and
similarly for →Z′←→z∗.
That is, setting any subset →W′ of →W to the values in →w
should have no effect on φ as long as →X has the value
→x, even if all the variables in an arbitrary subset of →Z
are set to their original values in the context →u.
The tuple (→W,→w,→x′) is said to be a
*witness* to the fact that →X=→x is a cause of φ.
AC3.
(→X=→x) is minimal; no subset of
→X satisfies AC2.
If →X is a singleton, then X=x is said to be a *singleton
cause* of φ in (M,→u).
AC1 just says that A cannot be a cause of B unless both A and B
are true.
The core of this definition lies in AC2.
Informally, the variables in →Z should be thought of as
describing the “active causal process” from X to φ.
These are the variables that mediate between X and φ.
AC2(a) is reminiscent of the traditional counterfactual criterion,
according to which X=x is a cause of φ if changing
the value of X results in φ being false.
However, AC2(a) is more permissive than the traditional criterion;
it allows the dependence of φ on X to be tested
under special structural contingencies,
in which the variables →W are held constant at some setting
→w. AC2(b) is an attempt to counteract the “permissiveness”
of AC2(a) with
regard to structural contingencies. Essentially, it ensures that
X alone suffices to bring about the change from φ to ¬φ;
setting →W to →w merely eliminates
spurious side effects that tend to mask the action of X.
To understand the role of AC2(b), consider the rock-throwing example
again. Let M be the model in Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Causal models ‣ 2 Causal Models and Causality: A Review ‣ The Computational Complexity of Structure-Based Causality"), and let →u be
the context where both Suzy and Billy throw. It is easy to see that
both Suzy and Billy are causes of the bottle shattering in
(M,→u): Let →Z={{\it ST},BS}, and
consider the structural contingency where Billy doesn’t throw ({\it BT}=0). Clearly (M,U)⊨[{\it ST}←0,{\it BT}←0]({%
\it BS}=0) and
(M,u)⊨[{\it ST}←1,{\it BT}←0](BS=1), so Suzy is a
cause of the bottle
shattering. A symmetric argument shows that Billy is also a cause.
But now consider the model
M′ described in Figure [2](#S2.F2 "Figure 2 ‣ 2.1 Causal models ‣ 2 Causal Models and Causality: A Review ‣ The Computational Complexity of Structure-Based Causality"); again, u is the context where
both Suzy and Billy throw. It is still
the case that Suzy is a cause of the bottle shattering in (M′,u). We
can take →W={{\it BT}} and again consider the contingency where
Billy doesn’t
throw.
However, Billy is not a cause of the bottle shattering in
(M′,u). For
suppose that
we now take →W={{\it ST}} and consider the contingency
where Suzy doesn’t throw. Clearly AC2(a) holds, since if Billy doesn’t
throw (under this contingency), then the bottle doesn’t shatter.
However, AC2(b) does not hold. Since {\it BH}∈→Z, if we set BH to
0 (its original value), then AC2(b) would require that
(M′,u)⊨[{\it BT}←1,{\it ST}←0,{\it BH}←0](BS=1), but
this is not the case.
Similar arguments show that no other choice of (→Z,→W) makes
Billy’s throw a cause of the bottle shattering in (M′,u).
The original HP definition differs from the updated definition in only
one respect.
Rather than requiring that
(M,→u)⊨[→X←→x,→W′←→w,→Z′←→z∗]φ
for all subsets →W′ of →W,
it was required to hold only for →W. That is, the following
condition was used
instead of AC2(b).
AC2(b′)
(M,→u)⊨[→X←→x,→W←→w,→Z′←→z∗]φ for
all subsets →Z′ of
→Z.
The requirement for AC2(b) to hold for all subsets of W
in the updated definition prevents situations
where W “conceals other causes for φ”. The role of this requirement is perhaps best understood by considering
the following example, due to Hopkins and Pearl \shortciteHopkinsP02
(the description is taken from [[Halpern & Pearl2005](#bib.bibx9)]):
Suppose that a prisoner dies
either if A loads B’s gun and B shoots, or if C loads and shoots
his gun. Taking D to represent the prisoner’s death and making the
obvious assumptions about the meaning of the variables, we have that
D=(A∧B)∨C. Suppose that in the actual
context u, A loads B’s gun, B does not shoot, but C does load
and shoot his gun, so that the prisoner dies. That is, A=1, B=0,
and C=1. Clearly C=1 is a cause of D=1.
We would not want to say that A=1 is a cause of D=1,
given that B did not shoot (i.e., given that B=0). However,
with AC2(b′), A=1 is a cause of D=1. For we can take
→W={B,C} and consider the contingency where B=1 and C=0.
It is easy to check that AC2(a) and AC2(b′) hold for this contingency,
so under the original HP definition, A=1 is a cause of D=1. However,
AC2(b) fails in this case, since (M,u)⊨[A←1,C←0](D=0).
The key point is that AC2(b) says that for A=1 to be a cause of D=1,
it must
be the case that D=0 if only some of the values in →W are set
to →w. That means that the other variables
get the same value as they do in the actual context; in this case, by
setting only A to 1 and leaving B unset, B takes on its original
value of 0, in which case D=0. AC2(b′) does not consider this case.
Using AC2(b) rather than AC2(b′) has been shown to have a significant
benefit
(and to lead to more intuitive results) when causality is applied to
program verification, with the goal of understanding what in the code is
the cause of a program not satisfying its specification [[Beer et al.2012](#bib.bibx1)].
3 Relevant Complexity Classes
------------------------------
In this section, we briefly recall the definitions of the complexity
classes that we need for our results, and define the complexity class
Dk2.
Recall that the *polynomial hierarchy* is a hierarchy of complexity
classes that generalize the classes NP and co-NP. Let ΣP1={\it NP} and ΠP1=co-{\it NP}. For i>1, define ΣPi={\it NP}ΣPi−1 and ΠPi=(co-{\it NP})ΣPi−1, where, in
general, XY denotes the class of problems solvable by a Turing
machine in class A augmented with an oracle for a problem complete for
class B. (See [[Meyer & Stockmeyer1972](#bib.bibx17), [Stockmeyer1977](#bib.bibx21)] for more details and intuition.)
We now define the classes DPk as follows.
###### Definition 3.1
For k=1,2,…,
| | | |
| --- | --- | --- |
| | DPk={L:∃L1,L2:L1∈ΣPk,L2∈ΠPk,L=L1∩L2}. | |
For k=1, the class DP1 is the well-known complexity class DP,
defined by Papadimitriou and Yannakakis \shortcitePY84. It contains
*exact* problems such as the language of pairs ⟨G,k⟩,
where G is a graph that has a maximal clique of size exactly k.
As usual, we say that a language L is DPk complete if it is in
DPk and is the “hardest” language in DPk, in the sense that
there is a polynomial time reduction from any language L′∈DPk
to L.
Recall that a *q*uantified Boolean formula (QBF) is a generalization of
a propositional formula, where some propositional variables are
quantified. Thus, for example, ∃x∀y(x∨y) is a
QBF. A *closed* QBF (CQBF) is one where there are no free
propositional variables. A CQBF is either true or
false, independent of the truth assignment. The “canonical” languages
complete for Σk2 and Πk2 consist of the
CQBFs with k alternations of quantifiers starting with
∃ (resp., ∀) that are true. In particular, let
| | | |
| --- | --- | --- |
| | ΣP2(SAT)={∃→X∀→Yφ∣∃→X∀→Yφ is a CQBF, ∃→X∀→Yφ=\sc\bf true}ΠP2(SAT)={∀→X∃→Yφ∣∀→X∃→Yφ is a CQBF, ∀→X∃→Yφ=\sc\bf true}. | |
ΣP2(SAT) is complete for ΣP2 and
ΠP2(SAT) is complete for ΠP2 [[Wrathall1976](#bib.bibx22)].
The following lemma provides a useful condition sufficient for a language
to be DPk-complete.
###### Lemma 3.2
If L1 is ΣPk-complete and L2 is ΠPk-complete, then
L3=L1∩L2 is DPk-complete.
* The fact that L3 is in DPk is immediate from the definition of
DPk. For hardness, let L′3 be a language in DPk. Then
there exist L′1 and L′2 such that
L′1∈ΣPk,L′2∈ΠPk, and L′=L′1∩L′2. Let f be a polynomial-time reduction from L′1 to L1,
and let g be a polynomial-time reduction from L′2 to L2 (the
existence of such reductions f and g follows from the fact that
L1 and L2 are ΣPk-complete and ΠPk-complete,
respectively). Then, ⟨f,g⟩
is a polynomial-time reduction from L′3 to L3, as required.
Essentially the same argument shows that if L1 is
ΣPk-hard and L2 is ΠPk-hard, then L3=L1∩L2
is DPk-hard.
Determining whether →X=→x is
a cause of φ in (M,u) is
a decision problem:
we define a language and try to determine whether a particular tuple is
in that language. (See Section [4](#S4 "4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality") for the formal
definition.) Determining degree of responsibility and blame is a
different type
of problem, since we are determining which number represents the
degree of
responsibility (resp., blame). Formally, these are *function
problems*.
For ease of exposition, we restrict attention to functions from some
strings over some fixed language Σ to strings over Σ
(i.e., we are considering functions from Σ∗ to Σ∗).
For a complexity class A in the polynomial hierarchy, FPA[logn] consists of all
functions that can be computed
by a polynomial-time Turing machine with an A-oracle
which on input x asks a total of O(log|x|) queries [[Papadimitriou1984](#bib.bibx19)].
A function f(x) is FPA[logn]hard iff for every function g(x)
in FPA[logn] there exist polynomially computable functions
R,S:Σ∗→Σ∗
such that g(x)=S(f(R(x))). A function f(x) is complete in FPA[logn] iff
it is in FPA[logn] and is FPA[logn]-hard.
Finally, for a complexity class A in polynomial hierarchy, FPA||is the class of
functions that can be computed by a polynomial-time Turing machine with
parallel (i.e., non-adaptive) queries to an A-oracle. (For background
on these complexity classes, see [[Jenner & Toran1995](#bib.bibx14), [Johnson1990](#bib.bibx15)].)
4 Complexity for the Updated HP Definition
-------------------------------------------
In this section, we prove our results on the complexity of deciding
causality. We start by defining the problem formally.
In the definitions, M stands for a causal model, →u is a context,
→X is a subset of variables of M, and →x is the set of
values of →X in (M,→u):
| | | |
| --- | --- | --- |
| | L\small cause={⟨M,→u,φ,→X,→x⟩:(→X=→x) is a cause of φ in (M,→u)}. | |
One of our goals is to understand the cause of the complexity of
computing causality. Towards this end, it is useful to define two
related languages:
| | | |
| --- | --- | --- |
| | L\small AC2={⟨M,→u,φ,→X,→x⟩:(→X=→x) satisfies conditions AC1 and AC2 of Def.~{}[???](#) for φ in (M,→u)},L\small AC3={⟨M,→u,φ,→X,→x⟩:(→X=→x) satisfies conditions AC1 and AC3 of Def.~{}[???](#) for φ in (M,→u)}. | |
It is easy to see that L\small cause=L\small AC2∩L\small AC3.
Let L1\small cause be the subset of L\small cause where →X and →x are
singletons; this is the singleton causality problem. We can similarly
define L1\small AC2 and L1\small AC3. Again, we have L1\small cause=L1\small AC2∩L1\small AC3, but, in fact, we have L1\small cause=L1\small AC2, since L1\small AC2⊆L1\small AC3; for singleton causality, the
minimality condition AC3 trivially holds.
We denote by LB\small cause the language of causality for binary causal
models
(i.e., where the models M in the tuple are binary models),
and by LB\small AC2 and LB\small AC3 the languages L\small AC2 and
L\small AC3 restricted to binary causal models. Again we have that
LB\small cause=LB\small AC2∩LB\small AC3.
And again, we can define
LB,1\small cause, LB,1\small AC2, and LB,1\small AC3, and we have
LB,1\small cause=LB,1\small AC2.
We start by considering singleton causality. As we observed,
Eiter and Lukasiewicz \shortciteEL01 and Hopkins \shortciteHopkins01 showed
that, with the original HP definition, singleton causality and causality
coincide.
However, for the updated definition, Halpern \shortciteHal39 showed that
it is in fact possible to have minimal causes that are not
singletons.
Thus, we consider singleton causality and general causality separately.
We can clarify where the complexity lies by considering L\small AC2 (and
its sublanguages) and L\small AC3 (and its sublanguages) separately.
###### Theorem 4.1
The languages L\small AC2, L1\small AC2, LB,1\small AC2, and L1\small AC2 are
ΣP2-complete.
* To show all these languages are in ΣP,
given a tuple ⟨M,→u,φ,→X,→x⟩, checking that
AC1 holds, that is, checking that (M,→u)⊨→X=→x∧φ, can be done in time polynomial in the size of M,
|→X|, and |φ| (the length of φ as a string of symbols).
For AC2,
we need only guess the set →W and the assignment
→w. The check that
assigning →w to →W and x′ to X indeed falsifies
φ is polynomial,
and we use an NP oracle to check that for all subsets of →W and
all subsets of →Z, condition AC2(b) holds.
(The argument is quite similar to Eiter and Lukasiewicz’s argument that
causality is in ΣP2 for general models with the original HP
definition.)
For hardness, it clearly suffices to show that
LB,1\small AC2 is ΣP2-hard. We do this by reducing
ΣP2(SAT) to LB,1\small AC2. Given a CQBF formula
∃→X∀→Yφ, we show that we can efficiently
construct a causal formula ψ, model M, and context u such that
∃→X∀→Yφ=\sc\bf true iff
(M,u,ψ,A,0)∈LB,1\small AC2.
We leave details to the appendix.
Since, as we have observed, AC3 is vacuous in the case of singleton
causality, it follows that singleton causality is ΣP2-complete.
###### Corollary 4.2
L1\small cause and LB,1\small cause are Σ2-complete.
We now show that things are harder if we do not restrict to binary causal
models (unless the polynomial hierarchy collapses). As a first step, we
consider the complexity of L\small AC3 and LB\small AC3.
###### Theorem 4.3
L\small AC3 and LB\small AC3 are ΠP2-complete.
* The fact that L\small AC3 and LB\small AC3 are in ΠP2 is straightforward.
Again, given a tuple ⟨M,→u,φ,→X,→x⟩, we can check that
AC1 holds in polynomial time. For AC3, we need to check that for all
strict subsets →X′ of →X, AC2 fails. Since checking AC2 is in
ΣP2, checking that it fails is in ΠP2. Checking that it
fails for all strict subsets →X′ keeps it in ΠP2 (since it
just adds one more universal quantifier).
To prove that these languages are ΠP2-hard, we show that we can
reduce ΠP2(SAT) to LB\small AC3.
The proof is similar in spirit to the proof of
Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1 ‣ 4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality");
we leave details to the appendix.
We are now ready to prove our main result.
###### Theorem 4.4
L\small cause and LB\small cause are DP2-complete.
* Membership of L\small cause (and hence also LB\small cause) in DP2 follows from
the fact that L\small cause=L\small AC2∩L\small AC3, L\small AC2∈ΣP2,
and L\small AC3∈ΠP2.
The fact that LB\small cause (and hence also L\small cause) are DP2-hard follows from
Lemma [3.2](#S3.Thmtheorem2 "Lemma 3.2 ‣ 3 Relevant Complexity Classes ‣ The Computational Complexity of Structure-Based Causality") and Theorems [4.1](#S4.Thmtheorem1 "Theorem 4.1 ‣ 4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality")
and [4.3](#S4.Thmtheorem3 "Theorem 4.3 ‣ 4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality").
The fact that there may be more than one conjunct in a cause using the
updated HP definition means that
checking AC3 becomes nontrivial, and causes the increase in complexity
for ΣP2 to DP2. But why is there no dropoff with the
updated HP definition when we restrict to binary models, although there
is a dropoff from ΣP2 to NP for the original HP definition?
To prove their NP-completeness result, Eiter and Lukasiewicz
\shortciteEL01 showed that for binary models, with the original HP
definition, the set →Z and its subsets can be omitted from the
definition of cause. That is, we can replace AC2(b′) by
AC2(b′′)
(M,→u)⊨[→X←→x,→W←→w]φ
to get an equivalent definition.
The example that a cause may require more than one conjunct given by
Halpern \shortciteHal39 shows that removing →Z and its subsets
from AC2(b) does not result in an equivalent definition in binary models.
But even if it did, the fact that we need
to quantify over all subset →W′ of →W in AC2(b) would be
enough to ensure that there is no dropoff in complexity in binary models.
5 Responsibility and Blame
---------------------------
In this section, we review the definitions of responsibility and blame
and characterize their complexity.
See Chockler and Halpern \shortciteCH04 for more intuition and details.
###
5.1 Responsibility
The definition of responsibility given by Chockler and Halpern
\shortciteCH04 was given based on the original HP definition of
causality, and thus assumed that causes were always single conjuncts.
It is straightforward to
extend it to allow causes to have arbitrarily many conjuncts.
###### Definition 5.1
The degree of responsibility
of →X=→x for φ in
(M,→u), denoted {\em dr}((M,→u),(→X=→x),φ), is
0 if →X=→x is
not a cause of φ in (M,→u); it is 1/(k+1) if
→X=→x is a cause of φ in (M,→u)
and there exists a partition (→Z,→W) and setting
(→x′,→w) for which AC2 holds
such that (a) k variables in →W have different values in →w
than they do in the context →u and (b) there is no partition
(→Z′,→W′) and setting (→x′′,→w′) satisfying AC2
such that only k′<k variables have different values in →w′
than they do the context →u.
Intuitively, {\em dr}((M,→u),(→X=→x),φ) measures the minimal number
of changes that have to be made in →u
in order to make φ counterfactually depend on →X, provided
the conditions on the subsets of →W and →Z are satisfied
(see also the voting example from the introduction).
If there is no partition of V to
(→Z,→W) that satisfies AC2, or (→X=→x) does not satisfy AC3 for φ
in (M,→u), then the minimal number of changes in →u
in Definition [5.1](#S5.Thmtheorem1 "Definition 5.1 ‣ 5.1 Responsibility ‣ 5 Responsibility and Blame ‣ The Computational Complexity of Structure-Based Causality") is taken to have cardinality
∞, and thus the degree of responsibility of (→X=→x) is 0 (and hence it is not a cause).
In the original HP model, it was shown that computing responsibility is
FPNP[logn]-complete in binary causal models [[Chockler, Halpern, & Kupferman2008](#bib.bibx3)] and
FPΣP2[logn]-complete in general causal models [[Chockler & Halpern2004](#bib.bibx2)]. We now
characterize the complexity of computing responsibility in the updated
HP definition.
###### Theorem 5.2
Computing the degree of responsibility is FPΣP2[logn]-complete for
singleton causes in binary and general causal models.
* The proof is quite similar to the proof in [[Chockler & Halpern2004](#bib.bibx2)].
We prove membership by describing an algorithm
in FPΣP2[logn]for computing the degree of responsibility. Roughly speaking, the algorithm queries an oracle for the language R={(⟨(M,→u),(X=x),φ,i⟩ such that
⟨(M,→u),(X=x),φ⟩∈Lcause and the degree of responsibility of (X=x) for φ is at least i}. It is easy to see that R is in ΣP2 by using Corollary [4.2](#S4.Thmtheorem2 "Corollary 4.2 ‣ 4 Complexity for the Updated HP Definition ‣ The Computational Complexity of Structure-Based Causality").
The algorithm for computing the degree of responsibility performs a
binary search on the
value of {\em dr}((M,→u),(X=x),φ), each time
dividing the range of possible values for the degree of responsibility
by 2 according to the answer of R.
The number of possible candidates for the degree of responsibility is bounded by the size of the input n, and thus the number of queries is at most ⌈logn⌉.
For hardness in binary causal models (which implies hardness in general
causal models), we provide a reduction from the
ΣP2-complete problem
MINQSAT2 [[Chockler & Halpern2004](#bib.bibx2)] to the
degree of responsibility, where MINQSAT2(∃→X∀→Yψ) is the minimum number of 1’s in the satisfying
assignment to →X for ∃→X∀→Yψ if
such an assignment exists, and
|→X|+1 otherwise.
###### Theorem 5.3
Computing the degree of responsibility is FPD2[logn]-complete in binary and general causal models.
* Membership in FPD2[logn]is shown in quite a similar way to
Theorem [5.2](#S5.Thmtheorem2 "Theorem 5.2 ‣ 5.1 Responsibility ‣ 5 Responsibility and Blame ‣ The Computational Complexity of Structure-Based Causality"). For hardness, as there are no known
natural problems complete in FPD2[logn], the proof
proceeds by constructing a generic reduction from a problem in FPD2[logn] to
the degree of
responsibility.
###
5.2 Blame
The definition of blame addresses the situation where there is
uncertainty about the true situation or “how the world works”.
Blame, introduced in [[Chockler & Halpern2004](#bib.bibx2)], considers the
“true situation” to be determined by the context, and
“how the world works” to be determined by the structural equations.
An agent’s uncertainty is modeled by a pair (K,Pr), where K
is a set of pairs of the form (M,→u),
where M is a causal model and →u is a context,
and Pr is a probability distribution over K.
A pair (M,→u) is called a situation.
We think of K as describing the situations that the agent considers
possible before →X is set to →x.
The degree
of blame that setting →X to →x has for φ is then the expected degree
of responsibility of →X=→x for φ in (M→X←→x,→u),
taken over the situations (M,→u)∈K.
Note that the situation (M→X←→x,→u) for (M,→u)∈K are those that the agent considers possible after →X is set to
→x.
###### Definition 5.4
The degree of
blame of setting →X to →x for φ relative to epistemic state
(K,Pr), denoted {\em db}(K,Pr,→X←→x,φ), is
| | | |
| --- | --- | --- |
| | ∑(M,→u)∈K{\em dr}((M→X←→x,→u),→X=→x,φ)Pr((M,→u)). | |
For the original HP definition of cause, Chockler and Halpern
\shortciteCH04 show that
computing the degree of blame is complete in
FPΣP2|| for general and in FPNP||for binary causal models.
Again, with the updated HP definition, the complexity changes.
###### Theorem 5.5
The problem of computing blame in recursive causal models is
FPΣP2||-complete for singleton causes and FPD2||-complete for (general) causes, in binary and general causal models.
|
348faf5d-3dec-4eef-ad26-e53a5f2cd123
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Melbourne, practical rationality
Discussion article for the meetup : Melbourne, practical rationality
WHEN: 02 August 2013 07:00:00PM (+1000)
WHERE: 491 King Street, West Melbourne VIC 3003, Australia
Practical rationality. This meetup repeats on the 1st Friday of each month and is distinct from our social meetup on the 3rd Friday of each month.
Topics for August:
* CfAR Melbourne workshop in February - how to help lock this in
* Melbourne Effective Altruism progress report & next actions
* Rationality Hacks hosted by Brayden
Discussion: http://groups.google.com/group/melbourne-less-wrong
All welcome from 6:30pm. Call the phone number on the door and I'll let you in.
ETA 2013-07-23: Rationality Hacks hosted by Brayden is our new proposed activity, subject to change by popular outcry on the night.
Discussion article for the meetup : Melbourne, practical rationality
|
428eb3ec-f922-4fec-9a36-b607603bdae8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What does "the universe is quantum" actually mean?
To shamelessly steal from Yudkowsky:
> If you get one law, like Special Relativity, you can look at the language it's written in, and infer what the next law ought to look like. If the laws are not being generated from the same language, they surely have something in common; and this I refer to as the Tao.
I am very interested in the inhuman nature of underlying reality. This interest led me to physics, which is humanity's best attempt at making predictions about reality that hold up to observation. My (lukewarm?) take as a doctoral student in physics that the triumph of physics is quantifying the degree to which physical models fail, not necessarily creating models that are true to whatever rules actually run reality (the Tau, if you will). For example, viewing Newtonian physics from this perspective shows that a force is not a thing that causes acceleration, but rather that a force is an error term in the model which says that physical objects move in straight lines at the same speed forever. You should definitely keep predicting your surroundings by assuming that pushing on things makes them move faster, but the best fundamental matter theories we have do not have forces in them. (This should sound familiar).
Quantum mechanics is the framework that physicists use to explain why matter acts the way it acts. The orbital model you learn in high school chemistry is a product of quantum mechanics. The mass of neutrons and protons which makes up most of the weight of the things we interact with is mostly a product of quantum mechanics. The frequencies of light which emerge from various chemical and nuclear processes are a product of quantum mechanics. As a consequence, the best explanations for the light we see when we look away from the ground and into the universe are from quantum mechanics. The ideal gas laws are commonly derived in statistical mechanics using an assumption of quantized momentum states which is understood to be justified by quantum mechanics.
Th
|
53ba1bee-753a-4dd3-8847-9e75496a3407
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Even Superhuman Go AIs Have Surprising Failure Modes
In March 2016, AlphaGo defeated the Go world champion Lee Sedol, winning four games to one. Machines had finally become superhuman at Go. Since then, Go-playing AI has only grown stronger. The supremacy of AI over humans seemed assured, with Lee Sedol commenting they are an "entity that cannot be defeated". But in 2022, amateur Go player Kellin Pelrine defeated KataGo, a Go program that is even stronger than AlphaGo. How?
It turns out that even superhuman AIs have blind spots and can be tripped up by surprisingly simple tricks. In our new paper, we developed a way to automatically find vulnerabilities in a "victim" AI system by training an adversary AI system to beat the victim. With this approach, we found that KataGo systematically misevaluates large cyclically connected groups of stones. We also found that other superhuman Go bots including ELF OpenGo, Leela Zero and Fine Art suffer from a similar blindspot. Although such positions rarely occur in human games, they can be reliably created by executing a straightforward strategy. Indeed, the strategy is simple enough that you can teach it to a human who can then defeat these Go bots unaided.
The victim and adversary take turns playing a game of Go. The adversary is able to sample moves the victim is likely to take, but otherwise has no special powers, and can only play legal Go moves.
Our AI system (that we call the adversary) can beat a superhuman version of KataGo in 94 out of 100 games, despite requiring only 8% of the computational power used to train that version of KataGo. We found two separate exploits: one where the adversary tricks KataGo into passing prematurely, and another that involves coaxing KataGo into confidently building an unsafe circular group that can be captured. Go enthusiasts can read an analysis of these games on the project website.
Our results also give some general lessons about AI outside of Go. Many AI systems, from image classifiers to natural language processing systems, are v
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.